diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/C7nHFPmJbTX/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/C7nHFPmJbTX/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..ea7387a1b38ac2164c2bf8633a37719cf5634486 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/C7nHFPmJbTX/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,251 @@ +# Divergence-Free Copy-Paste For Fluid Animation Using Stokes Interpolation

Category: Research

## Abstract

Traditional grid-based fluid simulation is often difficult to control and costly to perform. Therefore, the ability to reuse previously computed simulation data is a tantalizing idea that would have significant benefits for artists and end-users of fluid animation tools. We introduce a remarkably simple yet effective copy-and-paste methodology for fluid animation that allows direct reuse of existing simulation data by smoothly combining two existing simulation data sets together. The method makes use of a steady Stokes solver to determine an appropriate transition velocity field across a blend region between an existing simulated outer target domain and a region of flow copied from a source simulation. While prior work suffers from nonzero divergence and associated compression artifacts, our approach always yields divergence-free velocity fields, yet also ensures an optimally smooth blend between the two input flows.

## 1 INTRODUCTION

The ubiquitous copy-paste metaphor from text and image processing tools is popular because it is conceptually simple and significantly reduces the need for redundant effort on the part of the user. 
A copy-paste tool for fluid simulation could offer similar benefits while reducing the total computational effort expended to achieve a desired result through the reuse of existing simulation data. This paper proposes exactly such a scheme. +

The control of fluids has long been a subject of interest in computer animation: typical strategies that have been explored include space-time optimization (e.g., $\left\lbrack {{18},{35}}\right\rbrack$ ), space-time interpolation (e.g., $\left\lbrack {{26},{33}}\right\rbrack$ ), and approaches that involve the application of some combination of user-designed forces, constraints, or boundary conditions (e.g., $\left\lbrack {{19},{24},{32},{34},{36},{37}}\right\rbrack$ ). Because the last of these families is typically the least expensive and offers the most direct control, it has generally been the most effective and widely used in practice. Our method also falls into this category. +

Inspired by Poisson image editing [23], recent work by Sato et al. [28] hints at the potential power of a copy-paste metaphor for fluids. Unfortunately, their approach suffers from problematic nonzero divergence artifacts at the boundary of pasted regions, which depend heavily on the choice of input fields. We therefore introduce a new approach that provides natural blends between source and target regions yet is relatively simple to set up, requires solving only a standard Stokes problem over a narrow blend region at each time step, and always produces divergence-free vector fields. +

## 2 RELATED WORK

### 2.1 Controlling fluid animation

Artistic control of fluid flows has been a subject of interest from the earliest days of three-dimensional fluid animation research. Foster and Metaxas [11] proposed a variety of basic control mechanisms through the imposition of initial or boundary values on quantities such as velocity, pressure, and surface tension. 
A wide range of subsequent methods has been proposed to enable various forms of control, which we review below. +

One quite common approach is to apply forces or optimization to encourage a simulation to hit particular target keyframes for the density or shape of smoke or liquid $\left\lbrack {8,{18},{22},{29},{34},{35}}\right\rbrack$ . Another strategy makes use of multiple scales or frequencies, using a precomputed or procedural flow to describe the low-resolution motion and allowing a new physical simulation to add in high-frequency details $\left\lbrack {{10},{19} - {21},{34}}\right\rbrack$ . +

Other approaches aim to work more directly on the fluid geometry, rather than the velocity field. For example, new editing metaphors have been proposed, such as space-time fluid sculpting [17] and fluid-carving [9]; the latter is conceptually similar to seam-carving from image/video editing [1]. Another direct geometric approach seeks to interpolate the global fluid shape and motion [26, 33]. These strategies generally require the overall simulation to already be relatively close to the desired target behavior. +

Approaches that rely on the direct application of velocity boundary conditions on the fluid flow are similar to ours in some respects $\left\lbrack {{24},{25},{32},{36}}\right\rbrack$ . Often these have been used to cause liquid to follow a target motion or character, with varying degrees of "looseness" allowed in order to retain a fluid-like effect. They have not been used to combine existing simulations. +

Another useful task in liquid animation is to insert a localized 3D dynamic liquid simulation, such as the region around a ship or swimming character, into a much larger surrounding procedural ocean or similar model. This has been achieved through the use of non-reflecting boundary conditions $\left\lbrack {6,{30}}\right\rbrack$ . 
These approaches focus on simulating the interior surface region of a liquid and smoothly damping out the surface flow to match the prescribed exterior model. This contrasts with our copy-and-paste problem, where both the interior and exterior are presimulated flows that must be combined together. +

The closest method to ours is of course that of Sato et al. [28], who first proposed the copy-paste fluid problem. Their work also begins from the Dirichlet energy; however, through an ad hoc substitution of the input field's curl, they arrive at a new energy that minimizes the squared divergence of the velocity field plus the difference of the curl of the output and input vector fields. This formulation penalizes divergence rather than constraining it to be zero, and this likely accounts for the presence of undesired erroneous divergence in their results. By contrast, our approach is always strictly divergence-free. +

### 2.2 Stokes flow in computer graphics +

Steady (time-independent) Stokes flow is an approximation that is appropriate when momentum is effectively negligible, as indicated by a low Reynolds number. In computer graphics this approximation has been used in the context of paint simulation [4] and for the design of fluidic devices [7]. The unsteady (time-dependent) variant has also been used as a substep within a more general Navier-Stokes simulator [16] for Newtonian fluids. Closest to our work is that of Bhattacharya et al. [5], who use the Stokes equations to fill a volume with smooth velocities, as an alternative to simple velocity extrapolation or potential flow approximations; they then use the generated field as a force to influence a liquid simulation. Their approach is shown to maintain rotational motion better than these existing alternatives. We expand on this idea to address the copy-paste problem. 
+ 

## 3 METHOD

Our method takes as input the per-timestep vector field data for two complete grid-based incompressible fluid simulations (denoted source and target), along with geometry information dividing the domain of the final animation into an inner source region ${\Omega }_{s}$ , an outer target region ${\Omega }_{t}$ , and a blending region ${\Omega }_{b}$ . In the final time-varying vector field to be assembled, the data in ${\Omega }_{s}$ and ${\Omega }_{t}$ are simply replayed from the inputs; the central task we must solve is to generate a "natural" vector field for the blend region ${\Omega }_{b}$ in between for all time steps. +

We would like the vector field we generate in the blend region to possess a few key characteristics. First, the velocities at the boundaries of the blend region (on either ${\Gamma }_{t} = {\Omega }_{t} \cap {\Omega }_{b} = \partial {\Omega }_{t}$ or ${\Gamma }_{s} = {\Omega }_{s} \cap {\Omega }_{b} = \partial {\Omega }_{s}$ ) should exactly match the velocities of the corresponding input field; this is essentially the familiar no-slip boundary condition often used for kinematic solids or prescribed inflows/outflows in Newtonian fluids. Second, the vector field should be relatively smooth, since our objective is essentially a special kind of velocity interpolant. With only these two stipulations, a very natural choice is harmonic interpolation [15]. As suggested by Sato et al. 
[28], this can be expressed as minimizing the Dirichlet energy: +

$$
\mathop{\operatorname{argmin}}\limits_{{\mathbf{u}}_{b}}{\iiint }_{{\Omega }_{b}}{\begin{Vmatrix}\nabla {\mathbf{u}}_{b}\end{Vmatrix}}^{2} \tag{1}
$$

$$
\text{subject to } {\mathbf{u}}_{b} = {\mathbf{u}}_{s} \text{ on } {\Gamma }_{s},
$$

$$
{\mathbf{u}}_{b} = {\mathbf{u}}_{t} \text{ on } {\Gamma }_{t}.
$$

![01963e8b-d1ff-7262-ae73-4780aa1f6702_1_650_334_219_202_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_1_650_334_219_202_0.jpg)

The minimizer satisfies $\nabla \cdot \nabla {\mathbf{u}}_{b} = 0$ , i.e., a componentwise Laplace equation on the velocity. (From here on we diverge from Sato et al., who proceed instead to manipulate the Dirichlet energy into a form that yields a vector Poisson equation.) +

The Dirichlet energy alone is clearly insufficient, because it will prioritize smoothness at the cost of introducing divergence. Because we have assumed an incompressible flow model for our input (and desired output), the velocities in the blend region should not create or destroy material. A natural solution would be to simply apply a standard pressure projection as a post-process to convert the harmonic velocity field above to be incompressible. Unfortunately, this can cause the velocity field to deviate significantly from the harmonic input. Moreover, as we show in Section 5, pressure projection enforces only a free-slip (no-normal-flow) condition, which allows objectionable tangential velocity discontinuities to be introduced at the blend region's boundaries. 
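To see the insufficiency concretely: componentwise harmonic interpolation between two individually divergence-free fields generally yields a blend whose divergence is far from zero. The following toy experiment (our own illustrative sketch with invented fields and geometry, not the paper's solver or test scenario) relaxes each velocity component toward a harmonic state between an inner "pasted" rotation and an outer uniform flow, then measures the divergence of the result:

```python
import numpy as np

n = 33
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")

# Target field: uniform downward flow; source field: rigid rotation
# pasted into an inner square. Both inputs are divergence-free.
u = np.zeros((n, n))
v = np.full((n, n), -1.0)
inner = (np.abs(x) < 0.35) & (np.abs(y) < 0.35)
u[inner], v[inner] = -y[inner], x[inner]

# Hold the inner block and the outer boundary fixed (Dirichlet data)
# and relax each component toward a harmonic function in between.
fixed = inner.copy()
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
for field in (u, v):
    for _ in range(2000):  # Jacobi iterations, plenty for a toy grid
        avg = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                      np.roll(field, 1, 1) + np.roll(field, -1, 1))
        field[~fixed] = avg[~fixed]

# Centred-difference divergence on interior nodes: substantially nonzero
# in the blend region, even though both inputs are divergence-free.
h = 2.0 / (n - 1)
div = (u[2:, 1:-1] - u[:-2, 1:-1] + v[1:-1, 2:] - v[1:-1, :-2]) / (2 * h)
print("max |divergence| of harmonic blend:", abs(div).max())
```

Both input fields are incompressible, yet their componentwise harmonic blend is not, which is precisely what motivates constraining the divergence directly rather than hoping smoothness alone suffices.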
+ 

We instead simultaneously combine the divergence-free stipulation with harmonic interpolation through the following formulation:

$$
\mathop{\operatorname{argmin}}\limits_{{\mathbf{u}}_{b}}{\iiint }_{{\Omega }_{b}}{\begin{Vmatrix}\nabla {\mathbf{u}}_{b}\end{Vmatrix}}^{2} \tag{2}
$$

$$
\text{subject to } \nabla \cdot {\mathbf{u}}_{b} = 0 \text{ on } {\Omega }_{b},
$$

$$
{\mathbf{u}}_{b} = {\mathbf{u}}_{s} \text{ on } {\Gamma }_{s},
$$

$$
{\mathbf{u}}_{b} = {\mathbf{u}}_{t} \text{ on } {\Gamma }_{t}.
$$

This optimization problem provides the smoothest velocity field that interpolates the boundary data while preserving incompressibility. If we enforce the constraint with a Lagrange multiplier $p$ , the optimality conditions turn out to yield exactly the (constant viscosity) steady Stokes equations,

$$
\nabla \cdot \nabla {\mathbf{u}}_{b} - \nabla p = 0, \tag{3}
$$

$$
\nabla \cdot {\mathbf{u}}_{b} = 0, \tag{4}
$$

consistent with Helmholtz's minimum dissipation theorem [2]. We therefore refer to this construction as Stokes interpolation. +

As noted in Section 2, we are not the first to suggest using the Stokes equations as a fluid interpolant: Bhattacharya et al. [5] first proposed steady state Stokes flow interpolation. However, our derivation and discussion above provide additional justification and insight into the variational nature of this approach. More importantly, Bhattacharya et al. did not consider the fluid copy-paste problem that we address in the current work. 
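The step from (2) to (3)-(4) can be filled in with a short variational argument; the following is a sketch of the standard derivation (with a factor of two folded into the multiplier for convenience). Form the Lagrangian

$$
\mathcal{L}\left\lbrack {\mathbf{u}}_{b}, p \right\rbrack = {\iiint }_{{\Omega }_{b}} {\begin{Vmatrix}\nabla {\mathbf{u}}_{b}\end{Vmatrix}}^{2} - 2p \left( \nabla \cdot {\mathbf{u}}_{b} \right) \, dV.
$$

Taking a variation $\delta \mathbf{u}$ that vanishes on ${\Gamma }_{s} \cup {\Gamma }_{t}$ and integrating by parts (all boundary terms drop because $\delta \mathbf{u} = 0$ there),

$$
\delta \mathcal{L} = {\iiint }_{{\Omega }_{b}} 2 \left( - \nabla \cdot \nabla {\mathbf{u}}_{b} + \nabla p \right) \cdot \delta \mathbf{u} \, dV = 0 \quad \forall \, \delta \mathbf{u},
$$

which forces $\nabla \cdot \nabla {\mathbf{u}}_{b} - \nabla p = 0$ , i.e., (3); stationarity with respect to $p$ simply returns the incompressibility constraint (4).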
+ +A minor issue is that, for there to exist a valid solution, the boundary conditions must satisfy a compatibility condition; that is, the integrated flux across the two boundaries must be consistent with the condition of incompressibility on the blend region's interior: + +$$ +{\iiint }_{{\Omega }_{b}}\nabla \cdot {\mathbf{u}}^{n + 1}{dV} = 0 = {\iint }_{{\Gamma }_{s}}{\mathbf{u}}_{s}^{n + 1} \cdot \mathbf{n}{dA} + {\iint }_{{\Gamma }_{t}}{\mathbf{u}}_{t}^{n + 1} \cdot \mathbf{n}{dA} \tag{5} +$$ + +Fortunately, since the input vector fields both come from simulations that are themselves incompressible, the divergence theorem ensures that both the source copied patch and the target region to be pasted over have zero net flux across their respective boundaries - hence compatibility is guaranteed. + +We arrive at the following algorithm. For each timestep, extract the boundary velocities from the input source and target simulations. Perform a steady Stokes solve on ${\Omega }_{b}$ as we have described to produce ${\mathbf{u}}_{b}^{n + 1}$ . Finally, directly fill in the inner and outer ${\Omega }_{s}$ and ${\Omega }_{t}$ regions with velocity from the input data ${\mathbf{u}}_{s}^{n + 1}$ and ${\mathbf{u}}_{t}^{n + 1}$ , respectively. The resulting time-varying vector field is divergence-free and offers an attractively smooth blend between source and target flows. + +Note that, since the combined vector field differs significantly from both its inputs, the flow of any passive material (such as smoke density or tracer particles) must be recomputed from scratch by advection through the new field in order to yield a consistent visual result. This can usually be done efficiently and in parallel, since each (passive) particle's motion affects no other particles. + +## 4 IMPLEMENTATION + +While our concept is very general, our implementation assumes that all simulations are arranged on a standard staggered ("MAC") grid [14]. 
This provides a natural infrastructure on which to discretize the Stokes equations on the blend region, via centered finite differences. The boundary between the blend region and the surrounding source and target flow fields is assumed to lie on axis-aligned grid faces between cells (although this could potentially be generalized to irregular cut-cells if desired $\left\lbrack {3,{16}}\right\rbrack$ ). Where needed to ensure precise no-slip velocity conditions at the exact face midpoints on voxelized boundaries of the blend region, we make use of the usual ghost fluid method [12] for the Laplace operator in (3). To solve the Stokes linear system at each step, we use the Least Squares Conjugate Gradient solver provided by the Eigen library [13], with a tolerance of $5 \times {10}^{-5}$ . (Other options for solving indefinite systems, such as SYMQMR or MINRES, would also be appropriate [27].) +

## 5 RESULTS +

We now consider some illustrative scenarios to demonstrate the behaviour of our method. Most of our figures make use of passive marker particles with alternating colors in initially horizontal rows to better highlight the developing flow structure, but we strongly encourage the reader to review our supplemental video to assess the motion more fully. +

Our first scenario (Figure 1) consists of a static solid disk in a vertical wind-tunnel scenario, with inflow at the top and outflow at the bottom (particles leaving the bottom boundary re-enter at the top). We wish to paste the disk and its surroundings from the source simulation into an even simpler empty vertically translating wind-tunnel target simulation. This yields a smooth divergence-free combination of the two flows, where the flow outside of the blend region is completely undisturbed. 
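As an aside on the implementation, the constrained minimization (2) can be prototyped quite compactly. The sketch below is a deliberately simplified, dense-matrix stand-in for the solver described in Section 4: it uses a graph-style Dirichlet energy over neighbouring like faces of a tiny MAC grid, boundary data generated from an invented stream function (so compatibility is guaranteed), and NumPy's least-squares solver in place of Eigen's LSCG. It is illustrative only, not the paper's implementation:

```python
import numpy as np

nx = ny = 8                                   # tiny MAC grid
h = 1.0 / nx
nu, nv = (nx + 1) * ny, nx * (ny + 1)
N = nu + nv
uid = lambda i, j: i * ny + j                 # u-face (i, j), i=0..nx, j=0..ny-1
vid = lambda i, j: nu + i * (ny + 1) + j      # v-face (i, j), i=0..nx-1, j=0..ny

# Discrete Dirichlet-style energy: squared differences of neighbouring faces.
pairs = []
for i in range(nx + 1):
    for j in range(ny):
        if i < nx: pairs.append((uid(i, j), uid(i + 1, j)))
        if j < ny - 1: pairs.append((uid(i, j), uid(i, j + 1)))
for i in range(nx):
    for j in range(ny + 1):
        if i < nx - 1: pairs.append((vid(i, j), vid(i + 1, j)))
        if j < ny: pairs.append((vid(i, j), vid(i, j + 1)))
G = np.zeros((len(pairs), N))
for r, (a, b) in enumerate(pairs):
    G[r, a], G[r, b] = -1.0, 1.0

# Boundary data sampled from a stream function: exactly compatible.
psi = lambda X, Y: np.cos(np.pi * X) * np.sin(np.pi * Y)
ubc = lambda i, j: (psi(i * h, (j + 1) * h) - psi(i * h, j * h)) / h
vbc = lambda i, j: -(psi((i + 1) * h, j * h) - psi(i * h, j * h)) / h

# Constraints: zero divergence in every cell, plus pinned boundary faces.
C, d = [], []
for i in range(nx):
    for j in range(ny):
        row = np.zeros(N)
        row[uid(i + 1, j)], row[uid(i, j)] = 1.0, -1.0
        row[vid(i, j + 1)], row[vid(i, j)] = 1.0, -1.0
        C.append(row); d.append(0.0)
for j in range(ny):
    for i in (0, nx):
        row = np.zeros(N); row[uid(i, j)] = 1.0
        C.append(row); d.append(ubc(i, j))
for i in range(nx):
    for j in (0, ny):
        row = np.zeros(N); row[vid(i, j)] = 1.0
        C.append(row); d.append(vbc(i, j))
C, d = np.array(C), np.array(d)

# KKT system for min ||G w||^2 subject to C w = d. The constraints are
# rank-deficient by one (total divergence equals the net boundary flux),
# but consistent, so a least-squares solve recovers the unique minimizer.
m = len(d)
K = np.block([[G.T @ G, C.T], [C, np.zeros((m, m))]])
w = np.linalg.lstsq(K, np.concatenate([np.zeros(N), d]), rcond=None)[0][:N]

div = C[: nx * ny] @ w                        # per-cell divergence of the blend
print("max |divergence|:", abs(div).max())
```

In the paper's actual implementation the energy is a proper finite-difference Laplacian with ghost-fluid boundary handling and the indefinite system is solved iteratively; the saddle-point structure of the system, however, is the same.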
Our result necessarily differs from the source animation, since in the source animation the presence of the disk globally disturbed the flow; our Stokes interpolation approach must therefore deform the flow more strongly in the blend region to compensate, yet we still achieve a visually plausible flow (Figure 2). + +![01963e8b-d1ff-7262-ae73-4780aa1f6702_2_151_276_717_252_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_2_151_276_717_252_0.jpg) + +Figure 1: Basic Setup: Our simplest scenario involves copying the flow around a disk from its source simulation (left, with region to be copied surrounded in blue) into an obstacle-free target simulation (middle). The result of our method is a new smoothly combined flow (right). The blue lines denote the inner and outer borders of the blend region over which we apply our Stokes interpolation. (The same frame of animation is shown in all three images.) + +![01963e8b-d1ff-7262-ae73-4780aa1f6702_2_152_814_715_249_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_2_152_814_715_249_0.jpg) + +Figure 2: Merged Flow Over Time: A few frames of the edited animation result of our approach based on the scenario described in Figure 1. + +Next, we consider our method in comparison to two other obvious alternatives, as discussed in Section 3: componentwise harmonic interpolation, and post-projected harmonic interpolation. Pure harmonic interpolation seems effective at first glance, but unfortunately suffers from non-negligible divergence, as shown in Figure 3. + +![01963e8b-d1ff-7262-ae73-4780aa1f6702_2_164_1384_695_328_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_2_164_1384_695_328_0.jpg) + +Figure 3: Harmonic Interpolation: Harmonic interpolation of the velocity across the blend region yields a somewhat plausible flow (left), but suffers from large divergence (right). Red indicates positive divergence, blue indicates negative divergence. 
The divergence gradually induces greater clumping and spreading of the particles, as seen in the middle of the left image. + +A possible improvement is to post-process the harmonic result with a projection to a divergence-free state. Unfortunately, while this successfully removes divergence, the natural free-slip conditions of the pressure projection reintroduce tangential slip along the borders of the blend region leading to objectionable motion artifacts in the flow. In the wind-tunnel scenario the vertical component of velocity suffers from discontinuities at blend region borders, leading to visible grid-aligned shearing of the flow. + +![01963e8b-d1ff-7262-ae73-4780aa1f6702_2_934_253_707_377_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_2_934_253_707_377_0.jpg) + +Figure 4: Projected Harmonic Interpolation: Our Stokes interpolation approach (left) yields continuous velocity fields. However, under projected harmonic interpolation (right), undesirable free-slip conditions introduce tangential discontinuities in the flow velocity at blend region borders, seen here as positional discontinuities in the rows of colored particles at far left and right. + +To further stress-test our method, we consider some challenging scenarios analogous to those suggested by Sato et al. [28]. We combine flows in which the source and target differ in direction or speed. In Sato's approach, both larger speed and angle deviations lead to more severe failures of the divergence-free condition (we refer the reader to the secondary supplemental video accompanying that paper). Figure 5 shows the same test as we performed in our earlier examples, except that we have changed the ambient flow direction of the source simulation to have steadily increasing angles, including an example where the flow direction is completely reversed. 
While this leads to an increasingly unnatural look, the resulting flow field is still continuous, smooth on the blend region interior, and divergence-free, independent of this artistic decision. Similarly, Figure 6 performs a test in which the speed of the source (inner) simulation is slower or faster than the target (outer) simulation. Once again, more severe speed differences lead to more unusual motions in the blend region in order to compensate. For example, when the speed ratio between source and target is 3, more elaborate interior circulation of the flow in the blend region becomes necessary to satisfy the incompressibility condition. However, because the source and target are divergence-free and therefore provide compatible boundary conditions, the result is still correctly divergence-free. +

A further point to note about these stress tests is that the more severe cases induce strong vortices that cause gaps to open in the flow. However, this is not due to divergence; rather, typical small numerical errors in particle trajectories due to interpolation and advection cause the particles to spread out from these points. +

Lastly, we consider a few slightly more complex scenarios. Figure 7 shows our basic scenario again but using a rectangular obstacle instead of a disk. Figure 8 shows a scenario in which the user replaces a rectangular obstacle with a disk. Finally, in Figure 9, we paste a disk obstacle into a scene containing three rectangles, where the disk replaces the middle rectangle. Because of the additional obstacles, the flow structure is more complex. In this example, we tightened the blend region to fit more closely around the paste region. In all cases a plausible flow is constructed. 
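The robustness observed in these stress tests traces back to the compatibility condition (5): boundary data sampled from incompressible inputs always has zero net flux. A small numerical illustration of this fact (our sketch, assuming a MAC-style discretization and an invented stream function, not data from the paper's simulations):

```python
import numpy as np

def stream_to_mac(psi, h):
    """Velocities on a MAC grid from a stream function at cell corners;
    any such field is discretely divergence-free by construction."""
    u = (psi[:, 1:] - psi[:, :-1]) / h      # u on vertical faces
    v = -(psi[1:, :] - psi[:-1, :]) / h     # v on horizontal faces
    return u, v

n = 16
h = 1.0 / n
X, Y = np.meshgrid(np.linspace(0, 1, n + 1), np.linspace(0, 1, n + 1),
                   indexing="ij")
psi = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
u, v = stream_to_mac(psi, h)

# Per-cell divergence vanishes (up to rounding)...
div = (u[1:, :] - u[:-1, :]) / h + (v[:, 1:] - v[:, :-1]) / h

# ...so by the discrete divergence theorem the net flux through the outer
# boundary vanishes too: this is exactly the compatibility condition.
net_flux = h * (u[-1, :].sum() - u[0, :].sum()
                + v[:, -1].sum() - v[:, 0].sum())
print(abs(div).max(), abs(net_flux))
```

Because both the copied patch and the pasted-over region inherit this property from their incompressible parent simulations, the Stokes solve on the blend region always receives solvable boundary data.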
+ +Notably, because a disk and a rectangle lead to different downstream motions in their respective wakes, a close inspection of the motion in these regions of our results reveals slightly unnatural motions, as our interpolant diligently tries to transition between two different flow structures. Fortunately, such effects are fairly subtle unless one is specifically looking for them. Ultimately, Stokes interpolation provides the best available solution under the stated constraints (smoothness, incompressibility, interpolation of boundary values), and it is up to the user to apply their judgment regarding whether a proposed flow edit achieves the desired effect. + +Figure 5: Varying Angles: In these scenes, the outer flow is vertical while the pasted inner flow from the source simulation has flow direction with a relative angle of: ${0}^{ \circ }$ (top-left), ${45}^{ \circ }$ (top-right), ${90}^{ \circ }$ (bottom-left), and ${180}^{ \circ }$ (bottom-right). In all cases, the flow remains divergence-free. + +## 6 CONCLUSIONS AND FUTURE WORK + +We have presented an approach to the fluid copy-paste problem that guarantees smooth and divergence-free fields by solving a steady Stokes problem at each time step to fill in a blend region between the source and target flow regions. + +Our work suggests several directions to explore in future work. First, for simplicity we assumed axis-aligned rectangles for the copy-paste region, similar to basic region-selection in image editing, but it could be useful to extend our approach to more general (lasso-type) selection regions, either in a voxelized fashion or using irregular cut-cells $\left\lbrack {3,{16}}\right\rbrack$ for smoother shapes. This would add greater artistic flexibility, and may render the blend-region borders less apparent. 
+ 

The mathematics underlying our approach extends naturally to 3D, although providing a manageable user interface for selecting and placing time-dependent volumetric flow regions becomes more challenging. This would be interesting to explore. +

Another intriguing question is whether even better behavior at blend region borders could be achieved by replacing our Dirichlet energy with a higher-order energy. At present, the no-slip condition enforces matching of the velocity value at the boundaries, but not its gradient. Minimizing instead a squared Laplacian energy (see e.g., [31]), still subject to incompressibility, would lead to a bilaplacian operator on velocity. This is conceptually similar to replacing linear interpolation with cubic interpolation. While it would lead to a more challenging linear system to solve (in terms of conditioning), it may be able to offer a value- and gradient-matched divergence-free blend field. +

Finally, a challenging unanswered question in fluid animation more broadly is what makes a fluid motion perceptually "realistic" from a human perspective, and how much deviation from physical accuracy can safely be tolerated in visual applications. A metric of this kind could allow one to quantify more concretely whether a proposed flow edit is successful or harmful. +

![01963e8b-d1ff-7262-ae73-4780aa1f6702_3_163_147_1473_698_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_3_163_147_1473_698_0.jpg)

Figure 6: Varying Speeds: In these scenes, the outer flow has a fixed speed while the pasted inner flow from the source simulation has a speed ratio of: 1.0 (top-left), 0.75 (top-right), 1.5 (bottom-left), and 3.0 (bottom-right). In all cases, the flow remains divergence-free. +

![01963e8b-d1ff-7262-ae73-4780aa1f6702_3_927_1011_718_253_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_3_927_1011_718_253_0.jpg)

Figure 7: Pasting a rectangle into a flow. From left to right: source scene, target scene, result. 
+ +## REFERENCES + +[1] S. Avidan and A. Shamir. Seam carving for content-aware image resizing. In ACM SIGGRAPH 2007 papers, pp. 10-es. 2007. + +[2] G. K. Batchelor. An introduction to fluid dynamics. Cambridge university press, 2000. + +[3] C. Batty, F. Bertails, and R. Bridson. A fast variational framework for accurate solid-fluid coupling. ACM Transactions on Graphics (TOG), 26(3):100-es, 2007. + +[4] W. Baxter, Y. Liu, and M. C. Lin. A viscous paint model for interactive applications. Computer Animation and Virtual Worlds, 15(3-4):433- 441, 2004. + +[5] H. Bhattacharya, M. B. Nielsen, and R. Bridson. Steady state stokes flow interpolation for fluid control. In Eurographics (Short Papers), pp. 57-60. Citeseer, 2012. + +[6] M. Bojsen-Hansen and C. Wojtan. Generalized non-reflecting boundaries for fluid re-simulation. ACM Transactions on Graphics (TOG), 35(4):1-7, 2016. + +[7] T. Du, K. Wu, A. Spielberg, W. Matusik, B. Zhu, and E. Sifakis. Functional optimization of fluidic devices with differentiable stokes flow. ACM Transactions on Graphics (TOG), 39(6):1-15, 2020. + +[8] R. Fattal and D. Lischinski. Target-driven smoke animation. ACM Trans. Graph. (SIGGRAPH), 23(3):441-448, 2004. + +![01963e8b-d1ff-7262-ae73-4780aa1f6702_4_149_148_720_253_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_4_149_148_720_253_0.jpg) + +Figure 8: Replacing a rectangle with a disk. From left to right: source scene, target scene, result. + +![01963e8b-d1ff-7262-ae73-4780aa1f6702_4_154_517_713_252_0.jpg](images/01963e8b-d1ff-7262-ae73-4780aa1f6702_4_154_517_713_252_0.jpg) + +Figure 9: Replacing a rectangle with a disk in a flow with additional obstacles, using a narrow blend region. From left to right: source scene, target scene, result. + +[9] S. Flynn, P. Egbert, S. Holladay, and B. Morse. Fluid carving: intelligent resizing for fluid simulation data. ACM Transactions on Graphics (TOG), 38(6):1-14, 2019. + +[10] Z. Forootaninia and R. Narain. Frequency-domain smoke guiding. 
ACM Transactions on Graphics (TOG), 39(6):1-10, 2020. +

[11] N. Foster and D. Metaxas. Controlling fluid animation. In Computer Graphics International, p. 178, 1997. +

[12] F. Gibou, R. P. Fedkiw, L.-T. Cheng, and M. Kang. A second-order-accurate symmetric discretization of the Poisson equation on irregular domains. Journal of Computational Physics, 176(1):205-227, 2002. +

[13] G. Guennebaud, B. Jacob, et al. Eigen. URL: http://eigen.tuxfamily.org, 2010. +

[14] F. H. Harlow and J. E. Welch. Numerical calculation of time-dependent viscous incompressible flow of fluid with free surface. The Physics of Fluids, 8(12):2182-2189, 1965. +

[15] P. Joshi, M. Meyer, T. DeRose, B. Green, and T. Sanocki. Harmonic coordinates for character articulation. ACM Transactions on Graphics (TOG), 26(3):71-es, 2007. +

[16] E. Larionov, C. Batty, and R. Bridson. Variational Stokes: a unified pressure-viscosity solver for accurate viscous liquids. ACM Transactions on Graphics (TOG), 36(4):1-11, 2017. +

[17] P.-L. Manteaux, U. Vimont, C. Wojtan, D. Rohmer, and M.-P. Cani. Space-time sculpting of liquid animation. In Proceedings of the 9th International Conference on Motion in Games, pp. 61-71, 2016. +

[18] A. McNamara, A. Treuille, Z. Popović, and J. Stam. Fluid control using the adjoint method. ACM Transactions on Graphics, 23(3):449, 2004. +

[19] M. B. Nielsen and R. Bridson. Guide shapes for high resolution naturalistic liquid simulation. ACM Trans. Graph. (SIGGRAPH), 30(4):1, 2011. +

[20] M. B. Nielsen and B. B. Christensen. Improved variational guiding of smoke animations. In Computer Graphics Forum, vol. 29, pp. 705-712. Wiley Online Library, 2010. +

[21] M. B. Nielsen, B. B. Christensen, N. B. Zafar, D. Roble, and K. Museth. Guiding of smoke animations through variational coupling of simulations at different resolutions. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 217-226, 2009. +

[22] Z. Pan, J. Huang, Y. Tong, C. 
Zheng, and H. Bao. Interactive localized liquid motion editing. ACM Transactions on Graphics (TOG), 32(6):1-10, 2013. +

[23] P. Pérez, M. Gangnet, and A. Blake. Poisson image editing. ACM Trans. Graph. (SIGGRAPH), 22(3):313-318, 2003. +

[24] N. Rasmussen, D. Enright, D. Nguyen, S. Marino, N. Sumner, W. Geiger, S. Hoon, and R. Fedkiw. Directable photorealistic liquids. In Symposium on Computer Animation, pp. 193-202, 2004. +

[25] K. Raveendran, N. Thuerey, C. Wojtan, and G. Turk. Controlling liquids using meshes. In Proceedings of the 11th ACM SIGGRAPH/Eurographics conference on Computer Animation, pp. 255-264, 2012. +

[26] K. Raveendran, C. Wojtan, N. Thuerey, and G. Turk. Blending liquids. ACM Transactions on Graphics, 33(4):1-10, jul 2014. +

[27] A. Robinson-Mosher, R. E. English, and R. Fedkiw. Accurate tangential velocities for solid fluid coupling. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 227-236, 2009. +

[28] S. Sato, Y. Dobashi, and T. Nishita. Editing fluid animation using flow interpolation. ACM Trans. Graph., 37(5):1-12, sep 2018. +

[29] L. Shi and Y. Yu. Taming liquids for rapidly changing targets. In Symposium on Computer Animation, pp. 229-236, 2005. +

[30] A. Söderström, M. Karlsson, and K. Museth. A PML-based nonreflective boundary for free surface fluid animation. ACM Transactions on Graphics (TOG), 29(5):1-17, 2010. +

[31] O. Stein, E. Grinspun, M. Wardetzky, and A. Jacobson. Natural boundary conditions for smoothing in geometry processing. ACM Transactions on Graphics (TOG), 37(2):1-13, 2018. +

[32] A. Stomakhin and A. Selle. Fluxed animated boundary method. ACM Transactions on Graphics (TOG), 36(4):1-8, 2017. +

[33] N. Thuerey. Interpolations of smoke and liquid simulations. ACM Trans. Graph., 36(1):1-16, sep 2016. +

[34] N. Thuerey, R. Keiser, M. Pauly, and U. Rüde. Detail-preserving fluid control. In Symposium on Computer Animation, pp. 7-12, 2006. +

[35] A. Treuille, A. 
McNamara, Z. Popović, and J. Stam. Keyframe control of smoke simulations. ACM Transactions on Graphics, 22(3):716, 2003. +

[36] M. Wiebe and B. Houston. The Tar Monster: Creating a character with fluid simulation. In SIGGRAPH Sketches, 2004. +

[37] M. Wrenninge and D. Roble. Fluid simulation interaction techniques. In SIGGRAPH Sketches, p. 1. ACM Press, New York, New York, USA, 2003. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/C7nHFPmJbTX/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/C7nHFPmJbTX/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..df6f14c9c3318bfb29cf74ee6b1976036a68a0ec --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/C7nHFPmJbTX/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,165 @@ +§ DIVERGENCE-FREE COPY-PASTE FOR FLUID ANIMATION USING STOKES INTERPOLATION

Category: Research

§ ABSTRACT

Traditional grid-based fluid simulation is often difficult to control and costly to perform. Therefore, the ability to reuse previously computed simulation data is a tantalizing idea that would have significant benefits for artists and end-users of fluid animation tools. We introduce a remarkably simple yet effective copy-and-paste methodology for fluid animation that allows direct reuse of existing simulation data by smoothly combining two existing simulation data sets together. The method makes use of a steady Stokes solver to determine an appropriate transition velocity field across a blend region between an existing simulated outer target domain and a region of flow copied from a source simulation. 
While prior work suffers from non-divergence and associated compression artifacts, our approach always yields divergence-free velocity fields, yet also ensures an optimally smooth blend between the two input flows. + +§ 1 INTRODUCTION + +The ubiquitous copy-paste metaphor from text and image processing tools is popular because it is conceptually simple and significantly reduces the need for redundant effort on the part of the user. A copy-paste tool for fluid simulation could offer similar benefits while reducing the total computational effort expended to achieve a desired result through the reuse of existing simulation data. This paper proposes exactly such a scheme. + +The control of fluids has long been a subject of interest in computer animation: typical strategies that have been explored include space-time optimization (e.g., $\left\lbrack {{18},{35}}\right\rbrack$ ), space-time interpolation (e.g., $\left\lbrack {{26},{33}}\right\rbrack$ ), and approaches that involve the application of some combination of user-designed forces, constraints, or boundary conditions (e.g., $\left\lbrack {{19},{24},{32},{34},{36},{37}}\right\rbrack$ ). Because the last of these families is typically the least expensive and offers the most direct control, it has generally been the most effective and widely used in practice. Our method also falls into this category. + +Inspired by Poisson image editing [23], recent work by Sato et al. [28] hints at the potential power of a copy-paste metaphor for fluids. Unfortunately, their approach suffers from problematic nonzero divergence artifacts at the boundary of pasted regions, which depend heavily on the choice of input fields. We therefore introduce a new approach that provides natural blends between source and target regions yet is relatively simple to set up, requires solving only a standard Stokes problem over a narrow blend region at each time step, and always produces divergence-free vector fields.
+ +§ 2 RELATED WORK + +§ 2.1 CONTROLLING FLUID ANIMATION + +Artistic control of fluid flows has been a subject of interest from the earliest days of three-dimensional fluid animation research. Foster and Metaxas [11] proposed a variety of basic control mechanisms through imposition of initial or boundary values on quantities such as velocity, pressure and surface tension. A wide range of subsequent methods have been proposed to enable various forms of control, which we review below. + +One quite common approach is to apply forces or optimization approaches to encourage a simulation to hit particular target keyframes for the density or shape of smoke or liquid $\left\lbrack {8,{18},{22},{29},{34},{35}}\right\rbrack$ . Another strategy makes use of multiple scales or frequencies, using a precomputed or procedural flow to describe the low-resolution motion and allowing a new physical simulation to add in high-frequency details $\left\lbrack {{10},{19} - {21},{34}}\right\rbrack$ . + +Other approaches aim to work more directly on the fluid geometry, rather than the velocity field. For example, new editing metaphors have been proposed, such as space-time fluid sculpting [17] and fluid-carving [9]; the latter is conceptually similar to seam-carving from image/video editing [1]. Another direct geometric approach seeks to directly interpolate the global fluid shape and motion [26, 33]. These strategies generally require the overall simulation to already be relatively close to the desired target behavior. + +Approaches that rely on the direct application of velocity boundary conditions on the fluid flow are similar to ours in some respects $\left\lbrack {{24},{25},{32},{36}}\right\rbrack$ . Often these have been used to cause liquid to follow a target motion or character, with varying degrees of "looseness" allowed in order to retain a fluid-like effect. They have not been used to combine existing simulations.
+ +Another useful task in liquid animation is to insert a localized 3D dynamic liquid simulation, such as the region around a ship or swimming character, into a much larger surrounding procedural ocean or similar model. This has been achieved through the use of non-reflecting boundary conditions $\left\lbrack {6,{30}}\right\rbrack$ . These approaches focus on simulating the interior surface region of a liquid and smoothly damping out the surface flow to match the prescribed exterior model. This contrasts with our copy-and-paste problem, where both the interior and exterior are presimulated flows that must be combined together. + +The closest method to ours is of course that of Sato et al. [28], who first proposed the copy-paste fluid problem. Their work also begins from the Dirichlet energy; however, through an ad hoc substitution of the input field's curl, they arrive at a new energy that minimizes the squared divergence of the velocity field plus the difference between the curls of the output and input vector fields. This formulation penalizes divergence rather than constraining it to be zero, and this likely accounts for the presence of undesired divergence in their results. By contrast, our approach is always strictly divergence-free. + +§ 2.2 STOKES FLOW IN COMPUTER GRAPHICS + +Steady (time-independent) Stokes flow is an approximation that is appropriate when momentum is effectively negligible, as indicated by a low Reynolds number. In computer graphics this approximation has been used in the context of paint simulation [4] and for design of fluidic devices [7]. The unsteady (time-dependent) variant has also been used as a substep within a more general Navier-Stokes simulator [16] for Newtonian fluids. Closest to our work is that of Bhattacharya et al.
[5], who use the Stokes equations to fill a volume with smooth velocities, as an alternative to simple velocity extrapolation or potential flow approximations; they then use the generated field as a force to influence a liquid simulation. Their approach is shown to maintain rotational motion better than these existing alternatives. We expand on this idea to address the copy-paste problem. + +§ 3 METHOD + +Our method takes as input the per-timestep vector field data for two complete grid-based incompressible fluid simulations (denoted source and target), along with geometry information dividing the domain of the final animation into an inner source region ${\Omega }_{s}$ , an outer target region ${\Omega }_{t}$ , and a blending region ${\Omega }_{b}$ . In the final time-varying vector field to be assembled, the data in ${\Omega }_{s}$ and ${\Omega }_{t}$ are simply replayed from the inputs; the central task we must solve is to generate a "natural" vector field for the blend region ${\Omega }_{b}$ in between for all time steps. + +We would like the vector field we generate in the blend region to possess a few key characteristics. First, the velocities at the boundaries of the blend region (on either ${\Gamma }_{t} = {\Omega }_{t} \cap {\Omega }_{b} = \partial {\Omega }_{t}$ or ${\Gamma }_{s} = {\Omega }_{s} \cap {\Omega }_{b} = \partial {\Omega }_{s}$ ) should exactly match the velocities of the corresponding input field - this is essentially the familiar no-slip boundary condition often used for kinematic solids or prescribed inflows/outflows in Newtonian fluids. Second, the vector field should be relatively smooth, since our objective is essentially a special kind of velocity interpolant. With only these two stipulations, a very natural choice is harmonic interpolation [15]. As suggested by Sato et al.
[28], this can be expressed as minimizing the Dirichlet energy: + +$$ +\mathop{\operatorname{argmin}}\limits_{{\mathbf{u}}_{b}}{\iiint }_{{\Omega }_{b}}{\begin{Vmatrix}\nabla {\mathbf{u}}_{\mathbf{b}}\end{Vmatrix}}^{2} \tag{1} +$$ + +$$ +\text{ subject to }{\mathbf{u}}_{b} = {\mathbf{u}}_{s}\text{ on }{\Gamma }_{s}\text{ , } +$$ + +$$ +{\mathbf{u}}_{b} = {\mathbf{u}}_{t}\text{ on }{\Gamma }_{t}\text{ . } +$$ + +The minimizer satisfies $\nabla \cdot \nabla {\mathbf{u}}_{b} = 0$ , i.e. a componentwise Laplace equation on the velocity. (From here on we diverge from Sato et al. who proceed instead to manipulate the Dirichlet energy into a form that yields a vector Poisson equation.) + +The Dirichlet energy alone is clearly insufficient, because it will prioritize smoothness at the cost of introducing divergence. Because we have assumed an incompressible flow model for our input (and desired output), the velocities in the blend region should not create or destroy material. A natural solution would be to simply apply a standard pressure projection as a post-process to convert the harmonic velocity field above to be incompressible. Unfortunately, this can cause the velocity field to deviate significantly from the harmonic input. Moreover, as we show in Section 5, pressure projection enforces only a free-slip condition (no-normal-flow), which allows objectionable tangential velocity discontinuities at the blend region's boundaries to be introduced.
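To make the trade-off concrete, componentwise harmonic interpolation is easy to prototype. The sketch below is a toy illustration of our own (the grid size, boundary values, and Jacobi relaxation are arbitrary choices, not the paper's implementation): one velocity component is relaxed over a square blend region whose outer ring holds the target value and whose inner block holds the source value. The result interpolates the boundary data smoothly, but repeating this per component gives no control over divergence.

```python
def harmonic_blend(n=12, lo=4, hi=8, outer=1.0, inner=0.0, iters=2000):
    """Componentwise harmonic interpolation of one velocity component.

    Outer boundary cells hold the target value, the inner block holds the
    source value, and the blend cells in between are relaxed toward the
    solution of the Laplace equation by Jacobi iteration.
    """
    u = [[0.0] * n for _ in range(n)]
    fixed = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                u[i][j], fixed[i][j] = outer, True   # Gamma_t: target velocity
            elif lo <= i < hi and lo <= j < hi:
                u[i][j], fixed[i][j] = inner, True   # Gamma_s: source velocity
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if not fixed[i][j]:
                    new[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                                        + u[i][j + 1] + u[i][j - 1])
        u = new
    return u, fixed

u, fixed = harmonic_blend()
# Discrete maximum principle: blend values lie strictly between the
# source and target boundary values, giving a smooth transition.
blend = [u[i][j] for i in range(12) for j in range(12) if not fixed[i][j]]
assert all(0.0 < v < 1.0 for v in blend)
```

Running each velocity component through such a solve independently, however, says nothing about the combined field's divergence, which is precisely the problem identified above.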
+ +We instead simultaneously combine the divergence-free stipulation with harmonic interpolation through the following formulation: + +$$ +\mathop{\operatorname{argmin}}\limits_{{\mathbf{u}}_{b}}{\iiint }_{{\Omega }_{b}}{\begin{Vmatrix}\nabla {\mathbf{u}}_{\mathbf{b}}\end{Vmatrix}}^{2} \tag{2} +$$ + +$$ +\text{ subject to }\nabla \cdot {\mathbf{u}}_{b} = 0\text{ on }{\Omega }_{b} +$$ + +$$ +{\mathbf{u}}_{b} = {\mathbf{u}}_{s}\text{ on }{\Gamma }_{s}\text{ , } +$$ + +$$ +{\mathbf{u}}_{b} = {\mathbf{u}}_{t}\text{ on }{\Gamma }_{t}\text{ . } +$$ + +This optimization problem provides the smoothest velocity field that interpolates the boundary data while preserving incompressibility. If we enforce the constraint with a Lagrange multiplier $p$ , the optimality conditions turn out to yield exactly the (constant viscosity) steady Stokes equations, + +$$ +\nabla \cdot \nabla {\mathbf{u}}_{b} - \nabla p = 0, \tag{3} +$$ + +$$ +\nabla \cdot {\mathbf{u}}_{b} = 0, \tag{4} +$$ + +consistent with Helmholtz's minimum dissipation theorem [2]. We therefore refer to this construction as Stokes interpolation. + +As noted in Section 2, we are not the first to suggest using the Stokes equations as a fluid interpolant: Bhattacharya et al. [5] first proposed steady state Stokes flow interpolation. However, our derivation and discussion above provide additional justification and insight into the variational nature of this approach. More importantly, Bhattacharya et al. did not consider the fluid cut-and-paste problem that we address in the current work.
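The Lagrange multiplier structure behind (3) and (4) can be checked on a tiny discrete analogue (a toy system of our own construction, not the paper's discretization): minimizing a quadratic Dirichlet-like energy ${\textstyle\frac{1}{2}}{\mathbf{u}}^{T}A\mathbf{u} - {\mathbf{b}}^{T}\mathbf{u}$ subject to a single linear "divergence" row $D\mathbf{u} = 0$ yields a symmetric indefinite KKT system whose blocks mirror the discrete momentum and incompressibility equations, with the multiplier playing the role of the pressure $p$.

```python
def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    aug = [M[i][:] + [rhs[i]] for i in range(n)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(aug[r][k]))
        aug[k], aug[piv] = aug[piv], aug[k]
        for r in range(k + 1, n):
            f = aug[r][k] / aug[k][k]
            for c in range(k, n + 1):
                aug[r][c] -= f * aug[k][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (aug[i][n] - sum(aug[i][j] * x[j]
                                for j in range(i + 1, n))) / aug[i][i]
    return x

# Toy "Dirichlet" energy 0.5*u^T A u - b^T u with one divergence-like row D.
A = [[2.0, -1.0, 0.0, 0.0],
     [-1.0, 2.0, -1.0, 0.0],
     [0.0, -1.0, 2.0, -1.0],
     [0.0, 0.0, -1.0, 2.0]]
b = [1.0, 0.0, 0.0, 1.0]
D = [1.0, 1.0, 1.0, 1.0]

# KKT system [[A, D^T], [D, 0]] [u; p] = [b; 0]: a discrete analogue of
# steady Stokes (momentum balance plus an incompressibility constraint).
K = [A[i] + [D[i]] for i in range(4)] + [D + [0.0]]
sol = solve_dense(K, b + [0.0])
u, p = sol[:4], sol[4]

# The constraint holds to round-off, unlike a penalty formulation,
# which only drives the "divergence" toward zero.
assert abs(sum(di * ui for di, ui in zip(D, u))) < 1e-12
```

Enforcing the constraint exactly, rather than penalizing its violation, is the key difference from the formulation of Sato et al. discussed in Section 2.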
+ +A minor issue is that, for there to exist a valid solution, the boundary conditions must satisfy a compatibility condition; that is, the integrated flux across the two boundaries must be consistent with the condition of incompressibility on the blend region's interior: + +$$ +{\iiint }_{{\Omega }_{b}}\nabla \cdot {\mathbf{u}}_{b}^{n + 1}{dV} = 0 = {\iint }_{{\Gamma }_{s}}{\mathbf{u}}_{s}^{n + 1} \cdot \mathbf{n}{dA} + {\iint }_{{\Gamma }_{t}}{\mathbf{u}}_{t}^{n + 1} \cdot \mathbf{n}{dA} \tag{5} +$$ + +Fortunately, since the input vector fields both come from simulations that are themselves incompressible, the divergence theorem ensures that both the source copied patch and the target region to be pasted over have zero net flux across their respective boundaries - hence compatibility is guaranteed. + +We arrive at the following algorithm. For each timestep, extract the boundary velocities from the input source and target simulations. Perform a steady Stokes solve on ${\Omega }_{b}$ as we have described to produce ${\mathbf{u}}_{b}^{n + 1}$ . Finally, directly fill in the inner and outer ${\Omega }_{s}$ and ${\Omega }_{t}$ regions with velocity from the input data ${\mathbf{u}}_{s}^{n + 1}$ and ${\mathbf{u}}_{t}^{n + 1}$ , respectively. The resulting time-varying vector field is divergence-free and offers an attractively smooth blend between source and target flows. + +Note that, since the combined vector field differs significantly from both its inputs, the flow of any passive material (such as smoke density or tracer particles) must be recomputed from scratch by advection through the new field in order to yield a consistent visual result. This can usually be done efficiently and in parallel, since each (passive) particle's motion affects no other particles. + +§ 4 IMPLEMENTATION + +While our concept is very general, our implementation assumes that all simulations are arranged on a standard staggered ("MAC") grid [14].
This provides a natural infrastructure on which to discretize the Stokes equations on the blend region, via centered finite differences. The boundary between the blend region and the surrounding source and target flow fields is assumed to lie on axis-aligned grid faces between cells (although this could potentially be generalized to irregular cut-cells if desired $\left\lbrack {3,{16}}\right\rbrack$ ). Where needed to ensure precise no-slip velocity conditions at the exact face midpoints on voxelized boundaries of the blend region, we make use of the usual ghost fluid method [12] for the Laplace operator in (3). To solve the Stokes linear system at each step, we use the Least Squares Conjugate Gradient solver provided by the Eigen library [13], with a tolerance of $5 \times {10}^{-5}$ . (Other options for solving indefinite systems, such as SYMQMR or MINRES, would also be appropriate [27].) + +§ 5 RESULTS + +We now consider some illustrative scenarios to demonstrate the behaviour of our method. Most of our figures make use of passive marker particles with alternating colors in initially horizontal rows to better highlight the developing flow structure, but we strongly encourage the reader to review our supplemental video to assess the motion more fully. + +Our first scenario (Figure 1) consists of a static solid disk in a vertical wind-tunnel scenario, with inflow at the top and outflow at the bottom (particles leaving the bottom boundary re-enter at the top). We wish to paste the disk and its surroundings from the source simulation into an even simpler, empty, vertically translating wind-tunnel target simulation. This yields a smooth divergence-free combination of the two flows, where the flow outside of the blend region is completely undisturbed.
Our result necessarily differs from the source animation, since in the source animation the presence of the disk globally disturbed the flow; our Stokes interpolation approach must therefore deform the flow more strongly in the blend region to compensate, yet we still achieve a visually plausible flow (Figure 2). + +Figure 1: Basic Setup: Our simplest scenario involves copying the flow around a disk from its source simulation (left, with region to be copied surrounded in blue) into an obstacle-free target simulation (middle). The result of our method is a new smoothly combined flow (right). The blue lines denote the inner and outer borders of the blend region over which we apply our Stokes interpolation. (The same frame of animation is shown in all three images.) + +Figure 2: Merged Flow Over Time: A few frames of the edited animation result of our approach based on the scenario described in Figure 1. + +Next, we consider our method in comparison to two other obvious alternatives, as discussed in Section 3: componentwise harmonic interpolation, and post-projected harmonic interpolation. Pure harmonic interpolation seems effective at first glance, but unfortunately suffers from non-negligible divergence, as shown in Figure 3. + +Figure 3: Harmonic Interpolation: Harmonic interpolation of the velocity across the blend region yields a somewhat plausible flow (left), but suffers from large divergence (right). Red indicates positive divergence, blue indicates negative divergence. The divergence gradually induces greater clumping and spreading of the particles, as seen in the middle of the left image. + +A possible improvement is to post-process the harmonic result with a projection to a divergence-free state.
Unfortunately, while this successfully removes divergence, the natural free-slip conditions of the pressure projection reintroduce tangential slip along the borders of the blend region, leading to objectionable motion artifacts in the flow. In the wind-tunnel scenario the vertical component of velocity suffers from discontinuities at blend region borders, leading to visible grid-aligned shearing of the flow. + +Figure 4: Projected Harmonic Interpolation: Our Stokes interpolation approach (left) yields continuous velocity fields. However, under projected harmonic interpolation (right), undesirable free-slip conditions introduce tangential discontinuities in the flow velocity at blend region borders, seen here as positional discontinuities in the rows of colored particles at far left and right. + +To further stress-test our method, we consider some challenging scenarios analogous to those suggested by Sato et al. [28]. We combine flows in which the source and target differ in direction or speed. In Sato's approach, both larger speed and angle deviations lead to more severe failures of the divergence-free condition (we refer the reader to the secondary supplemental video accompanying that paper). Figure 5 shows the same test as we performed in our earlier examples, except that we have changed the ambient flow direction of the source simulation to have steadily increasing angles, including an example where the flow direction is completely reversed. While this leads to an increasingly unnatural look, the resulting flow field is still continuous, smooth on the blend region interior, and divergence-free, independent of this artistic decision. Similarly, Figure 6 performs a test in which the speed of the source (inner) simulation is slower or faster than the target (outer) simulation. Once again, more severe speed differences lead to more unusual motions in the blend region in order to compensate.
For example, when the speed ratio between source and target is 3, more elaborate interior circulation of the flow in the blend region becomes necessary to satisfy the incompressibility condition. However, because the source and target are divergence-free and therefore provide compatible boundary conditions, the result is still correctly divergence-free. + +A further point to note about these stress tests is that the more severe cases induce strong vortices that cause gaps to open in the flow. However, this is not due to divergence; rather, typical small numerical errors in particle trajectories due to interpolation and advection cause the particles to spread out from these points. + +Lastly, we consider a few slightly more complex scenarios. Figure 7 shows our basic scenario again but using a rectangular obstacle instead of a disk. Figure 8 shows a scenario in which the user replaces a rectangular obstacle with a disk. Finally, in Figure 9, we paste a disk obstacle into a scene containing three rectangles, where the disk replaces the middle rectangle. Because of the additional obstacles, the flow structure is more complex. In this example, we tightened the blend region to fit more closely around the paste region. In all cases a plausible flow is constructed. + +Notably, because a disk and a rectangle lead to different downstream motions in their respective wakes, a close inspection of the motion in these regions of our results reveals slightly unnatural motions, as our interpolant diligently tries to transition between two different flow structures. Fortunately, such effects are fairly subtle unless one is specifically looking for them. Ultimately, Stokes interpolation provides the best available solution under the stated constraints (smoothness, incompressibility, interpolation of boundary values), and it is up to the user to apply their judgment regarding whether a proposed flow edit achieves the desired effect.
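The compatibility of the boundary conditions invoked above can also be sanity-checked numerically. The sketch below is our own toy check (not part of the paper's system): it approximates the net outward flux of a sampled velocity field through an axis-aligned rectangle by midpoint quadrature. For divergence-free inputs such as a uniform stream or a rigid rotation, the net flux through both the inner and outer blend-region boundaries vanishes, so the boundary data satisfies the compatibility condition (5).

```python
def boundary_flux(u, x0, y0, x1, y1, h=0.5):
    """Net outward flux of a velocity field u(x, y) -> (ux, uy) through the
    axis-aligned rectangle [x0, x1] x [y0, y1], via midpoint quadrature."""
    flux = 0.0
    nx, ny = round((x1 - x0) / h), round((y1 - y0) / h)
    for i in range(nx):
        x = x0 + h * (i + 0.5)
        flux += (-u(x, y0)[1] + u(x, y1)[1]) * h  # bottom (n = -y), top (n = +y)
    for j in range(ny):
        y = y0 + h * (j + 0.5)
        flux += (-u(x0, y)[0] + u(x1, y)[0]) * h  # left (n = -x), right (n = +x)
    return flux

uniform = lambda x, y: (0.0, 1.0)   # uniform vertical stream (divergence-free)
rotation = lambda x, y: (-y, x)     # rigid rotation about the origin

for field in (uniform, rotation):
    # Outer boundary of a blend region and the inner (pasted) boundary:
    # each closed boundary has zero net flux, so the Stokes solve is feasible.
    outer = boundary_flux(field, -4.0, -4.0, 4.0, 4.0)
    inner = boundary_flux(field, -1.5, -1.5, 1.5, 1.5)
    assert abs(outer) < 1e-12 and abs(inner) < 1e-12
```

A boundary field that failed this check (e.g., one sampled from a compressible input) would make the constrained problem infeasible, which is why incompressible inputs are assumed throughout.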
+ +Figure 5: Varying Angles: In these scenes, the outer flow is vertical while the pasted inner flow from the source simulation has flow direction with a relative angle of: ${0}^{ \circ }$ (top-left), ${45}^{ \circ }$ (top-right), ${90}^{ \circ }$ (bottom-left), and ${180}^{ \circ }$ (bottom-right). In all cases, the flow remains divergence-free. + +§ 6 CONCLUSIONS AND FUTURE WORK + +We have presented an approach to the fluid copy-paste problem that guarantees smooth and divergence-free fields by solving a steady Stokes problem at each time step to fill in a blend region between the source and target flow regions. + +Our work suggests several directions to explore in future work. First, for simplicity we assumed axis-aligned rectangles for the copy-paste region, similar to basic region-selection in image editing, but it could be useful to extend our approach to more general (lasso-type) selection regions, either in a voxelized fashion or using irregular cut-cells $\left\lbrack {3,{16}}\right\rbrack$ for smoother shapes. This would add greater artistic flexibility, and may render the blend-region borders less apparent. + +The mathematics underlying our approach extends naturally to 3D, although providing a manageable user interface for selecting and placing time-dependent volumetric flow regions becomes more challenging. This would be interesting to explore. + +Another intriguing question is whether even better behavior at blend region borders could be achieved by replacing our Dirichlet energy with a higher order energy. At present, the no-slip condition enforces matching of the velocity value at the boundaries, but not its gradient. Minimizing instead a squared Laplacian energy (see e.g., [31]), still subject to incompressibility, would lead to a bilaplacian operator on velocity. This is conceptually similar to replacing linear interpolation with cubic interpolation. 
While it would lead to a more challenging linear system to solve (in terms of conditioning), it may be able to offer a value- and gradient-matched divergence-free blend field. + +Finally, a challenging unanswered question in fluid animation more broadly is what makes a fluid motion perceptually "realistic" from a human perspective, and how much deviation from physical accuracy can safely be tolerated in visual applications. A metric + +Figure 6: Varying Speeds: In these scenes, the outer flow has a fixed speed while the pasted inner flow from the source simulation has a speed ratio of: 1.0 (top-left), 0.75 (top-right), 1.5 (bottom-left), and 3.0 (bottom-right). In all cases, the flow remains divergence-free. + +Figure 7: Pasting a rectangle into a flow. From left to right: source scene, target scene, result. + +of this kind could allow one to quantify more concretely whether a proposed flow edit is successful or harmful. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/2RMSIdhlfH9/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/2RMSIdhlfH9/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..1ab20cbe4f4852037461790e8024d5b42cb49fc8 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/2RMSIdhlfH9/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,431 @@ + + +## Slacktivity: Scaling Slack for Large Organizations + +![01963ea9-2e22-7608-bc65-35365d454e0f_0_239_267_1313_707_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_0_239_267_1313_707_0.jpg) + +Figure 1: Slacktivity's galaxy view.
Each circle in the visualization is a Slack channel. (Note: division names have been changed to remove identifying information, and select channel names have been replaced with .) + +## Abstract + +Group chat programs, such as Slack, are a promising and increasingly popular tool for improving communication among collections of people. However, it is unclear how the current design of group chat applications scales to support large and distributed organizations. We present a case study of a company-wide Slack installation in a large organization (>10,000 employees) incorporating data from semi-structured interviews, exploratory use, and analysis of the data from the Slack workspace itself. Our case study reveals emergent behaviour, issues with exploring the content of such a large Slack workspace, and the inability to keep up with news from across the organization. To address these issues, we designed Slacktivity, a novel visualization system to augment the use of Slack in large organizations, and demonstrate how Slacktivity can be used to overcome many of the challenges found when using Slack at scale. + +Keywords: Group chat, visualization. + +## 1 INTRODUCTION + +As organizations become larger and more distributed, communication becomes increasingly difficult. Even when workers are co-located, technologies like email and instant messaging are frequently used as an alternative to direct face-to-face communication. Email is often thought of as asynchronous and useful for long-term searching, while instant messaging is useful for synchronous communication and quicker conversations. More recently, group chat systems (such as Slack [1] and Microsoft Teams [2]) have become popular, combining benefits of both email and instant messaging, while offering the potential for increased collaboration and coordination.
+ +When used over a period of time, a group chat workspace has the potential to be a useful "knowledge base" for a company, storing a history of communication about a given topic. However, the design of current group chat systems is optimized for near real-time communication, making it difficult to derive insights from this wealth of historical information [3]. This problem is exacerbated for large organizations using Slack where there may be thousands of channels and millions of messages. Additionally, with so many channels and messages, it is not feasible to keep up with all activity across all channels to get a sense of what is going on throughout the organization, and the task of figuring out which channels to join or post questions to becomes more difficult. + +In this paper we present a case study of the internal usage of Slack at BigCorp, ${}^{1}$ a ten-thousand-person software company which has been using a unified Slack workspace for the past 3+ years, with over 15,000 channels created and 65 million messages sent over that time. Our case study combines exploratory use of the Slack workspace, formal and informal interviews, and data analysis of the use of Slack by BigCorp. Using this data, we identified strategies employees use to cope with Slack at scale as well as the pain points of using Slack, such as the inability to find historical information and the need to keep up with the broader organization. + +Using the findings from our case study we present Slacktivity, a tool to address the limitations of Slack when being used within a large organization. Slacktivity gives an overview of the channels across the entire organization with a cluster view, and also allows for detailed exploration of the entire history of a channel. + +This paper makes two main contributions: a case study to better understand how group chat is used in large organizations, and a novel interactive visualization system to augment the use of Slack in a workplace.
+ +--- + +${}^{1}$ Name anonymised for submission. + +--- + +## 2 Background and Related Work + +### 2.1 Introduction to Group Chat + +Group chat works by combining the functionality of instant messaging and Internet Relay Chat (IRC) with richer communication features. The most basic form of communication in group chat is a message. Messages can contain text, images or other attachments and can reply to one another, forming a thread. However, threads in group chat are typically limited to one level of replies. In addition to replying to a message, a user can also react. Reactions are small emoji-like glyphs that are counted just below the message. When responding with a reaction there is no notification sent, making this a non-intrusive form of communication. Messages can be sent to three different locations: direct messages, private channels, and public channels. Direct messages can be sent to groups of up to 8 people. Channels on the other hand can hold any number of people and are identified by a channel name (e.g., #general). Channels can be either public, meaning anybody can join, or private, which require an invitation. Finally, all channels and direct messages are part of a workspace, the highest level of hierarchy in group chat. + +Our work explores the use of group chat in the workplace, and the use of visualization to improve its usage. This work focuses on, and refers to, Slack as the group chat system, but the concepts and ideas apply equally to other systems (such as Microsoft Teams [2]). + +### 2.2 Communication in the Workplace + +Technology plays a key role in supporting communication in the workplace, but its use is varied and complicated. When face-to-face interaction is needed but not possible, video conference software like Skype or Google Hangouts is used. However, most communication does not require such an expressive communication style and can instead be textual. Recently, instant messaging has been a popular communication tool in the workplace [4], [5].
A case study [5] of a large tech company from 2005 showed that ${38}\%$ of communication took place over instant messaging, nearly equal to the ${39}\%$ with email. Only ${23}\%$ of communication was face-to-face or using the phone. Email is also used extensively and many people view their email as more than a communication tool, as a "personal information store" [6]. + +Another aspect of communication in the workplace is social media. Traditional non-workplace social media platforms like Facebook and Twitter have been demonstrated to be useful to organizations through the creation of weak ties [7]. Furthermore, some social networks have been designed specifically for use in the workplace. WaterCooler [8] was designed for use at HP to aggregate various content from across the organization. Another more commercialised approach to social networking is the Yammer tool, which is aimed at bringing a social network more comparable to Facebook into the workplace. + +In recent years group chat has been used increasingly in the workplace, beginning with the inception of Slack in 2013, which had over 10 million active daily users in 2019 [9]. Despite the rapid adoption of group chat in the workplace, Zhang and Cranshaw [10] identify several issues associated with group chat. They report that employees struggle to find old information; chat history is overwhelming to newcomers; and employees fail to keep up with multiple channels. Unlike email, it is unclear if group chat is also used as a "personal information store". + +Communication tools have the opportunity to be a critical part of an organization's knowledge management strategy as critical discussions often occur over email, instant messaging, or group chat. Research has shown how an organization can use instant messaging [11], [12] and email [13] as a part of their knowledge management strategy, but it is unknown how group chat can fit in.
+ +### 2.3 Visualization of Conversation + +Conversation exists in many media, such as forums, email and instant messaging, and the academic literature has explored many different ways to visualize it either during the creation of the conversation, or after the conversation has occurred. Pioneering work began with the visualisation of forums, with a special focus on Usenet. Smith and Fiore [14] visualized Usenet threads by displaying the structure of the thread as a tree augmented with information about the users involved in the thread. Turner et al. [15] visualize Usenet hierarchically to make recommendations on how to cultivate and manage Usenet groups. Wikum [16] uses recursive summarisation and visualisation to overcome the scale of online discussions. Other techniques have explored visualizing conversations by taking advantage of threads, including Thread Arcs [17], ThemeRiver [18], tldr [19], iForum [20], and Newman's work [21]. + +Venolia and Neustaedter [22] visualized email by creating conversations from the sequence and reply relationships, building on the idea of threading. Conversation thumbnails [23] visualize each email in a conversation as a rectangle, displayed in the order they were received and representing the complexity of the conversation. + +Bergstrom and Karahalios [24] designed Conversation Clusters as a method of archiving instant message conversations in a manner that makes it easy to retrieve a desired conversation. Conversation Clusters visualize the groups of salient words by using colour. However, most work in instant messaging has looked at augmenting communication by changing the interface you interact with, such as giving people movable circles for their avatars [25], [26] or using a visualization to foster positive behavioural changes [27].
Our work differs from prior work visualising conversation in that we augment each mark with more information from the chat and also break the sequential, linear relationship with time.

### 2.4 Understanding Group Chat

Efforts to better understand group chat are relatively rare. Our work is most related to T-Cal [28], which visualizes Slack conversations using a calendar-based visualization and allows for in-depth exploration using its thread-pulse design. However, T-Cal was designed to address the needs of Slack workspaces for massively open online courses (MOOCs), which are only used for a set amount of time. Another approach was taken by Zhang and Cranshaw [10], who aimed to improve sensemaking of group chat by creating a Slack bot, Tilda, that assisted in collaborative tagging of conversations. Both our case study of Slack and Slacktivity build on and draw inspiration from T-Cal and Tilda; however, our focus is discovering and addressing the particular problems faced when deploying Slack at a large scale.

## 3 CASE STUDY: SLACK AT BIGCORP

To get a better understanding of how Slack is used "at scale", we studied Slack usage at BigCorp, a multinational design software company with over 10,000 employees distributed across dozens of locations around the world. BigCorp has officially adopted and encouraged the use of Slack as a platform for group chat. Further, BigCorp has consolidated Slack activity into a single unified Slack workspace, rather than allowing teams to maintain their own individual Slack workspaces.
### 3.1 Methodology

Our case study incorporates data from several sources: aggregate analysis of an export of the entire (non-private) history of the Slack workspace, extensive exploratory analysis, an interview with the Director in charge of Slack at BigCorp, informal discussions with dozens of employees about their Slack usage, and formal interviews with 5 employees (3 heavy users of Slack, 1 occasional user, and 1 infrequent user, referred to as P1-P5).

Our aggregate data analysis uses all Slack messages sent on all internally-public channels (that is, channels that all employees of BigCorp can see), including all data about threads and reactions. Every public channel is included, along with its metadata such as description and creation time. We analyse this data using only simple summary statistics and visualisation. We also combined the Slack user profiles with data from HR sources for properties like job title and organizational division.

Some of our results come from exploratory use of the Slack workspace. This consisted primarily of the first author using the workspace and exploring channels while taking notes on their contents. For some results, like the types of channels, an open coding scheme was used. Results stemming from exploratory use are not intended to be generalizable or complete; instead, they are meant to demonstrate some of the ways Slack can be used at scale.

The formal semi-structured interviews were each 1 hour long. The interviews sought to answer three key questions: how do employees use Slack during the workday; how do employees find information on Slack; and how do employees keep up to date with what is happening at BigCorp (not necessarily using Slack)? We analysed these interviews by transcribing them and identifying interesting and recurring themes.

#### 3.1.1 Data Anonymization

Given the private nature of communication tools, we took several measures to preserve employee privacy.
We only analysed "public" data that all employees at BigCorp have access to. We also did not analyse archived channels, because people may "archive" a channel in an attempt to "delete" it. Additionally, we have anonymized all employees in the paper and accompanying material by blurring profile images and replacing all names with generated pseudonyms. We carefully examined each message before allowing it to appear in the paper or accompanying video. Any time a channel name would be easily identifiable outside the company, we replace it with a vegetable or fruit name wrapped in angle brackets.

### 3.2 Origins of Slack at BigCorp

BigCorp has been running a company-wide Slack workspace since May 2016 (3.5 years at the time of data analysis). Before that date, there were over 50 fragmented Slack workspaces maintained by individual teams. Responding to this grass-roots demand for Slack, BigCorp decided to officially support Slack, consolidated all activity into a single shared Slack workspace, and encouraged its use as an "approved" communication mechanism.

#### 3.2.1 Public by Default

The vice president at BigCorp who "sponsored" (paid for) the initial consolidation of Slack activity under a single Slack workspace agreed to do so under the condition that it would be run with a "public by default" policy; that is, that all channels would be set to "public", so they could be viewed and joined by everyone in the company, not just those on a particular team. The hope was that this policy would encourage openness and collaboration across the company and break down historic organizational "silos" where communication between separate teams and divisions had been limited. (Note: "public" channels are still only visible to employees of BigCorp, not the general public.)

Though the policy dictates that all channels are open to everyone by default, channels still have "members" who are subscribed to them.
For example, a channel for a specific small development team might (and often does) only have members from that team. This can create the feeling that a channel is in fact private, even though it is public and anyone in the company could potentially find it. Our raw data analysis identified numerous cases of people using language inappropriate for a corporate environment, suggesting some users might not appreciate how "open" their communication on these Slack channels is. However, participant P3 was well aware of the visibility of their messages and stated: "I'll use [private messages] ... because there's just some things that don't need a public forum."

### 3.3 Analysis of Slack Usage Data

At the time of writing, the Slack workspace has a total of 12,084 members and 8,204 public channels. The number of public channels has nearly doubled over the past year (Figure 2). There is also a small number of private channels, limited to channels where confidential HR discussions occur. We do not analyse private channels, to respect employee privacy.

![01963ea9-2e22-7608-bc65-35365d454e0f_2_922_961_723_279_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_2_922_961_723_279_0.jpg)

Figure 2: Number of channels over time.

In total, 82 million messages have been sent in the Slack workspace, with 88% of those shared as private direct messages, 11% in public channels, and 1% in private channels. Users are able to use the "direct messaging" functionality of Slack to have private conversations between 2 and 8 people. The relatively high percentage of Slack activity occurring via direct messaging results from the combined effect of people using Slack as a one-to-one instant messaging tool and people creating "private DM groups" to essentially circumvent the mandate that all channels be public.
Despite the volume of private direct messages, the 11% of messages sent in public channels means over 9 million public Slack messages have been sent that are visible and searchable to all BigCorp workers, serving as a potentially rich source of company information. With such a large number of users and channels, usage patterns are considerably varied.

#### 3.3.1 Channels

In addition to the 8,204 active public channels, there are a further 6,891 archived public channels whose content (over 3.3 million messages) is still accessible through Slack search. Among the un-archived and technically "active" channels, the level of activity varies greatly: the least active 20% of channels generate less than one post per month, the 90th-percentile channel generates 6.8 messages per month, and the most active channels generate over 200 messages per month (Figure 3).

Channel membership counts are also widely distributed, with half of all channels having fewer than 13 members, while the 40 most popular channels have over 500 members each, including the three channels to which all employees are automatically subscribed (Figure 4).

![01963ea9-2e22-7608-bc65-35365d454e0f_3_156_331_698_296_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_3_156_331_698_296_0.jpg)

Figure 3: The number of messages posted per week, per channel. (Each dot represents one channel.)

![01963ea9-2e22-7608-bc65-35365d454e0f_3_151_727_715_294_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_3_151_727_715_294_0.jpg)

Figure 4: Channel membership distributions.

#### 3.3.2 Users

The 12,084 members of the Slack workspace represent nearly every worker type at BigCorp, ranging from the CEO and VPs to temporary contractors and outside collaborators (with limited access). Unsurprisingly, usage patterns vary among members.
There are 9,417 weekly active members (members who have read at least one public channel in the past week), and 7,973 members have posted at least one message in the past week. On the high-usage side, an average of 77 users post more than 100 messages in a week, with the most prolific members posting more than 400 messages in a week (Figure 5, horizontal axis).

![01963ea9-2e22-7608-bc65-35365d454e0f_3_150_1421_715_448_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_3_150_1421_715_448_0.jpg)

Figure 5: Plot of the number of channels subscribed to vs. the average number of messages posted each week. (Note: each dot represents a single user.)

We also see a wide range in how members subscribe to channels (Figure 5, vertical axis), with the median user subscribed to 16 channels. However, 168 users are subscribed to over 100 channels, and 7 users are subscribed to more than 200 channels. For comparison, when using the native Slack client on a 1920x1080 resolution display, only 17 channel names fit before scrolling is required.

#### 3.3.3 Reactions

A distinguishing feature of Slack compared to more "traditional" or formal means of communication in corporate environments is the use of reactions to posts. Reactions are a quick way to respond to a message in Slack and take the form of a small emoji-like glyph or animation (Figure 6). In the corpus of public messages, a total of 422,094 reactions have been left on 220,032 unique messages. Of all reactions, the :+1: "thumbs up" emoji is the most frequent, with over 166,000 uses (39% of all reactions), while the popular party parrot has been used nearly 22,000 times (5% of all reactions). Each of the 2,774 distinct reactions used is shown in Figure 6, with its area scaled proportionally to its relative usage rate.

### 3.4 Emergent Behaviour

During our exploration of the Slack workspace we discovered some interesting behaviours which had emerged, in part to facilitate using Slack at this scale.
#### 3.4.1 Naming Scheme

The volunteer Slack administration team has instituted a fairly strict naming scheme. Words are separated by a dash (-) and should be as short as possible. Non-business channels are prefixed with #fun-, and technology channels such as #tech-git are prefixed with #tech-. Most other channels are prefixed by their organizational unit, such as #research- or #hr-.

![01963ea9-2e22-7608-bc65-35365d454e0f_3_931_1053_710_361_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_3_931_1053_710_361_0.jpg)

Figure 6: Emoji-style reactions used on messages in the Slack workspace, scaled by area proportional to usage.

#### 3.4.2 Types of Channels

Channels naturally fell into a series of groupings, which we determined using an open coding scheme during our exploratory use of the Slack workspace.

Q&A channels are very structured channels for providing help. These channels involve a specific format for sending every message and strictly follow rules for creating threads.

News channels typically contain a collection of links and other small resources, and they often receive little engagement.

Management Team channels are composed of the direct reports of a manager and contain messages such as an employee declaring they are working from home.

Project Team channels are used to collect people on a specific project, who may come from different divisions and management chains.

Issue/Event channels are limited-lifetime channels spun up to collect communication about one specific issue. Their names correspond directly to the issue id in the bug management system. These channels see rapid, bursty communication and fall into disuse after the issue is resolved (typically in a matter of days).

#### 3.4.3 Reactions

Several norms formed around the way reactions were used within channels.
For instance, in Q&A channels, the organizers often use the "eyes" emoji to indicate that a request is being addressed (essentially assigning the task). After a request is complete, the message is marked with a checkmark (Figure 7).

#### 3.4.4 Events

One interesting and effective type of event is the ask-me-anything (AMA) session. In these events, upper management blocks off a portion of their time for employees to ask them any questions they would like. Employees engage strongly with these events and even read AMAs from divisions they are not associated with. Some of our interview participants (P2, P3) reported being very interested in hearing what upper management had to say. Other events, like product launches, also occur on Slack.

Figure 7: Asking a question in a help-specific channel with reactions.

#### 3.4.5 Email vs Slack

Although a large proportion of users engage regularly with Slack, there is still a population of people who continue to use email, and even employees who primarily use Slack still revert to email. Anecdotally, we have found that Slack is often adopted more quickly by newer/younger employees. P1 uses email "when I want to send something a bit more official". P4 preferred email almost exclusively because it is "easier to search for things". Interview participants also reported using email to communicate with people higher up in the management hierarchy. In fact, we noticed a trend in the data suggesting that upper-management employees tend to use Slack less often than the rank-and-file. Although there is a need for email, P3 prefers Slack because "email threads [become], you know, untenable, they would grow and they would fork." Both Slack and email have their own niches in which they are useful.
### 3.5 Primary Problems

Our case study identified many interesting types of Slack usage in BigCorp; however, it also identified many problems employees encounter in day-to-day use. Two problems were particularly common throughout the case study: finding historical information, and keeping up with all the activity happening on Slack. These issues align strongly with the findings of Zhang and Cranshaw [10], who looked at smaller Slack workspaces.

#### 3.5.1 Finding Historical Information

The first common theme throughout the case study was users having difficulty finding historical information in the Slack workspace (P1, P2, P3). Thus, even though the workspace contains lots of potentially useful information, it is not easily accessible. P1 specifically mentioned "I find the search tool in Slack a bit cumbersome, I usually avoid having to use it" and "I might search in Slack [to find information]..., but usually that is not very productive." It is fairly unsurprising that finding historical information is difficult given the millions of messages and thousands of channels. With such a huge quantity and span of messages, an interface that only shows one channel's messages, and only a small sample of messages at a time, will inevitably make finding historical information difficult.

#### 3.5.2 Keeping up with the Workspace

The second especially common problem was people trying to "keep up" with everything happening in the workspace. With so many channels and messages, it can understandably be overwhelming. P1 and P2 both subscribe to a large number of channels but then rarely check them, leading to hundreds of unread messages. We found that people liked to subscribe to many channels for topics they were interested in but could not practically read all of the messages posted there.
A related problem is trying to "catch up" with one's channels after a vacation, with some people opting to simply "mark all messages as read" rather than trying to read or skim what they missed.

## 4 SLACKTIVITY

Slack's focus is on providing a communication tool; however, our case study shows that users need more than just communication. Employees need to be able to access historical information and also keep up with their organization. We designed a system called Slacktivity to augment the use of Slack without hindering its use as a communication tool.

### 4.1 Design Goals

The design of Slacktivity was guided by a set of five design goals.

## D1: Maintain Consistent Interface Vocabulary with Slack

Our system should share the interface vocabulary already used in Slack. By reusing concepts users are familiar with, they can transfer their existing knowledge and learn our system faster and more naturally. We found this principle especially important when sorting information. We chose this goal so that users can build on what they already know from Slack rather than being confused by unfamiliar conventions.

## D2: Encourage Exploration

The linear, time-centric nature of Slack makes exploration difficult. Our system should support exploration by breaking out of the linear consumption of messages without breaking time continuity. Encouraging exploration also stands to improve finding historical information, as some information, such as entire topics or unknown query terms, is not directly searchable.

## D3: Enable and Support Emergent Behaviours

Emergent behaviour is difficult to predict, and as such it is impossible to design for it directly. However, our system should enable and support emergent behaviour by allowing for flexibility. By supporting emergent behaviour, we hope to enable events like AMAs in addition to other behaviour, like the use of reactions for specific purposes.
## D4: Enable Consuming Information at Varying Detail Levels

Slack's goal as a communication tool necessitates a focus on individual messages; however, users interact with Slack at a higher level when they are seeking information. As a result, our system should support consuming information at varying degrees of detail, including levels not explicitly part of Slack. Supporting varying levels of detail can also help users keep up with the workspace, because they do not have to look at individual messages.

## D5: Take Advantage of Organizational Data

Organizations have other sources of data, such as HR databases. Wherever possible, we aim to take advantage of that information to contextualize the information found within Slack within these other, existing sources of data.

### 4.2 Implementation

The system is implemented as a web application using React combined with D3.js. Because the dataset is so large, we serve aggregate data from a MongoDB database via a Node.js server.

### 4.3 Galaxy View

The galaxy view is the home page of the visualization and one of the two main screens (Figure 8). It consists of four sections: clusters, trending, stream, and search. The galaxy view is designed to support exploration of the Slack workspace, as well as keeping up with the rest of the organization. Additionally, it acts as a jumping-off point for accessing the messages view, the other main screen.

![01963ea9-2e22-7608-bc65-35365d454e0f_5_152_705_714_441_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_5_152_705_714_441_0.jpg)

Figure 8: Galaxy view of Slacktivity.

#### 4.3.1 Clusters (D4)

In the center of the screen is a series of hierarchical clusters (Figure 1). There are three concepts to keep in mind when describing the visualization: channels, prefix groups, and divisions. We describe the clustering from the bottom up, starting with the channels. Each circle is a channel.
The radius of the circle is proportional to the square root of the number of members in the channel. The colour of the circle is determined by the organization to which the employees in that channel belong. If there is no majority (<50% of workers from a single organization), the circle is coloured grey. The data regarding employee organization is obtained from an HR data source (D5), as Slack does not provide it. Additionally, the saturation of each circle is determined by the strength of the majority; for example, a pure dark blue circle consists almost entirely of employees from the blue division.

The next level of clustering, prefix groups, is formed by the channel name prefix; for example, the #spg- channels form one such group. Large prefix groups are identified by a label, but smaller ones are unlabelled to improve readability.

There is one final level of clustering: divisions. This level clusters the prefix groups by the majority colour of the channels in each group. For example, the #tech- channels are mostly grey, so that group of channels appears with other Misc channels. This last level of clustering is meant to represent the different divisions in the organization.

The visualization is generated using a circle-packing algorithm from D3.js with the hierarchy levels described here. It also supports zooming, allowing users to more carefully select a channel they might be interested in and to explore some of the smaller channels (D2).

The galaxy visualization reveals interesting features about the organization. For example, the green channels consist of support teams, which explains why green channels often appear inside other divisions: these channels bridge the gap between support and product teams.
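As a rough illustration of the colouring and grouping rules described above, the following Python sketch assigns a channel its colour and saturation from its members' divisions, and derives its prefix group from its name. The record shapes and function names are hypothetical; Slacktivity itself is implemented in JavaScript with D3.js.

```python
from collections import Counter

def channel_colour(member_divisions):
    """Colour a channel circle by its majority division.

    Returns (colour, saturation): the majority division and the share of
    members it holds, or ("grey", 0.0) when no single division reaches
    50% of the channel's members.
    """
    counts = Counter(member_divisions)
    division, n = counts.most_common(1)[0]
    share = n / len(member_divisions)
    if share < 0.5:           # no majority -> grey circle
        return "grey", 0.0
    return division, share    # saturation grows with the majority's size

def prefix_group(channel_name):
    """Second clustering level: group channels by their name prefix,
    e.g. '#tech-git' -> 'tech'."""
    return channel_name.lstrip("#").split("-")[0]
```

Under these rules, a channel whose members are 75% from the blue division is drawn as a fairly saturated blue circle, while an evenly mixed channel is drawn grey.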
Additionally, although most of the grey channels inside "Misc", such as the #tech- or #fun- channels, are expressly created for all employees, some groups of channels represent a product of BigCorp. These products are inter-division products that combine employees from across BigCorp, which is why they are classified as Misc.

#### 4.3.2 Thumbnails (D2, D4)

Upon hovering over a channel in the cluster visualization, a thumbnail appears displaying useful information (Figure 9): a line graph of the number of messages sent since the Slack workspace was created, the channel name, the number of members, the total number of messages, and the top reactions sent in the channel. A user can use these features to easily decide whether they would like to join a channel or explore it further. The thumbnail indicates how casual the channel is (e.g., if the top reactions contain a party parrot, the channel is likely to be casual), how active it is, and how many people a user would need to interact with. Thumbnails encourage exploration by allowing users to interactively investigate channels they might find interesting. Furthermore, they provide a level of detail that Slack does not, by showing the lifetime of a channel.

![01963ea9-2e22-7608-bc65-35365d454e0f_5_922_1013_661_314_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_5_922_1013_661_314_0.jpg)

Figure 9: Channel thumbnail, listing the channel name across the top, the current number of members, the total number of messages, a small trend line indicating messages sent over time, and the top 5 most-used reactions.

#### 4.3.3 Stream

On the right side of the screen is a stream of all messages as they come into the Slack workspace (Figure 8). This is a live stream of all messages; no messages are filtered. However, they are grouped into their respective channels, with previous messages kept for context.
As a message comes into the stream, it is also highlighted in the galaxy view with a sonar effect that slowly dissipates as the message grows older; some sonar effects can be seen in the blue division. The clustering design of the visualization provides an additional benefit: it can be used to see which parts of the organization are more active than others. By seeing which parts of the company are active, an employee can keep up with the news of the company.

#### 4.3.4 Trending

On the left side of the screen, a single trending channel is chosen to be displayed (Figure 8, left). We compute an importance score for each message and take the sum of the importance of all messages sent to a channel, weighted inversely by the number of seconds since each was sent; this means recent messages have a higher weighting. For each individual message we calculate five metrics: the length of the message in characters (capped at 80), the total number of replies the message has received, the total number of reactions, the number of unique reactions, and the number of attachments the message has. Each of these values is normalised by dividing by the largest observed value, so all numbers fall between 0 and 1. We then take the sum of these values:

importance $= \left|\text{replies}\right| + \min\left(\operatorname{len}\left(\text{msg}\right), 80\right) + \left|{\text{reactions}}_{\text{unique}}\right| + \left|\text{reactions}\right| + \left|\text{attachments}\right|$

By surfacing a trending channel, we give employees the ability to keep up with the company, including paying attention to AMAs as they happen.

#### 4.3.5 Search (D2)

The user can also search for a channel name by typing into the text box across the top of the screen (Figure 8). The search results contain the same visualization as the thumbnails used in the cluster visualization (Figure 9). Users can search to discover new channels they might not already be part of, as well as explore the Slack workspace as a whole.
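The trending computation described in Section 4.3.4 can be sketched as follows. This is a minimal illustration with assumed field names (`text`, `replies`, `ts`, and so on), not the actual Slacktivity code: each metric listed in the text is normalised by its corpus-wide maximum, summed into a per-message importance, and then weighted inversely by message age to score a channel.

```python
LENGTH_CAP = 80  # message length is capped at 80 characters

# Per-message engagement metrics, as named in the text.
METRICS = {
    "length":           lambda m: min(len(m["text"]), LENGTH_CAP),
    "replies":          lambda m: m["replies"],
    "reactions":        lambda m: m["reactions"],
    "unique_reactions": lambda m: m["unique_reactions"],
    "attachments":      lambda m: m["attachments"],
}

def importance(msg, maxima):
    """Sum of the engagement metrics, each divided by the largest value
    observed in the corpus so every term falls in [0, 1]."""
    return sum(f(msg) / maxima[name]
               for name, f in METRICS.items() if maxima[name] > 0)

def trending_score(channel_msgs, maxima, now):
    """Per-channel score: each message's importance weighted inversely
    by the seconds since it was sent, so recent messages dominate."""
    return sum(importance(m, maxima) / max(now - m["ts"], 1.0)
               for m in channel_msgs)
```

The channel with the highest `trending_score` would be the one surfaced on the left of the galaxy view; the 1-second floor on message age avoids division by zero for just-arrived messages.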
### 4.4 Messages View

While the galaxy view gives an overview of the Slack workspace, the messages view (Figure 10) is designed to support detailed exploration and finding historical information. The messages view can be accessed either by searching for a channel from the galaxy view or by clicking on a channel in the cluster visualization.

#### 4.4.1 Channel Overview

Across the top of the page is the channel overview (Figure 10A). This includes the channel name, topic, and description. Additionally, there is a deep link beside the channel name that goes directly to that channel in the Slack application (D1).

#### 4.4.2 All Message Visualization

The primary view of this visualisation is the all message visualisation (Figure 10C). It represents all of the messages visible in the current time slice. Each message is represented as a horizontal bar, where the colour represents the user who sent the message and the length of the bar is proportional to the length of the message. Only the 5 users who have sent the most messages across the history of the channel are coloured, to reduce visual clutter. Threaded replies appear as smaller boxes underneath the message they belong to. Each column of messages represents either a week or a month, depending on the period of time currently selected. The vertical position of each message is decided by simply stacking bars upwards until all of the messages have been stacked; the height of each column therefore represents how active the channel was during that time bucket. Hovering over a message displays a thumbnail showing the message and any threaded replies it has.

![01963ea9-2e22-7608-bc65-35365d454e0f_6_149_1641_721_375_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_6_149_1641_721_375_0.jpg)

Figure 10: Messages view for a single channel with sections for A) channel overview, B) filters, C) all message visualization, D) timeslice, and E) message pane.
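The column-and-stack layout of the all message visualization can be sketched as follows. This is a simplified illustration with assumed message fields (`id`, `ts`), ignoring threaded replies, bar lengths, and colours: messages are binned into weekly or monthly columns and stacked upward in chronological order.

```python
from collections import defaultdict
from datetime import datetime, timezone

def stack_layout(messages, bucket="month"):
    """Bin messages into weekly or monthly columns and stack them
    upward in chronological order.

    Returns {bucket_key: {message_id: row}}, where row 0 is the bottom
    of the column; a column's height therefore reflects how active the
    channel was in that time bucket.
    """
    columns = defaultdict(list)
    for msg in sorted(messages, key=lambda m: m["ts"]):
        t = datetime.fromtimestamp(msg["ts"], tz=timezone.utc)
        key = t.strftime("%Y-%m") if bucket == "month" else t.strftime("%Y-W%W")
        columns[key].append(msg["id"])
    return {key: {mid: row for row, mid in enumerate(ids)}
            for key, ids in columns.items()}
```

Rendering then only needs to map each (column, row) slot to screen coordinates, with bar width proportional to message length.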
#### 4.4.3 Message Pane (D1)

Along the right side is the message pane, which is designed to anchor the user back into the Slack workspace (Figure 10E). It is sorted chronologically, just like Slack, with the newest messages appearing at the bottom. Each message is deep-linked back into Slack. A light grey viewfinder in the channel visualization represents the current scroll position of the message pane. Clicking on a message in the visualization scrolls to that message in the message pane.

#### 4.4.4 Timeslice (D4)

The timeslice is chosen using the timeline across the bottom of the page (Figure 10D). The line graph displays how many messages were sent during each month; each point in the graph is a month, with the y-axis representing the total number of messages sent that month. There are two preset buttons for choosing "this month" and "this year". The timeslice also limits the number of messages displayed in the message pane.

#### 4.4.5 Filtering

All filtering is done via the filter bar just below the channel overview (Figure 10B and Figure 11). Filters include message sender, important messages, reaction, and exact text matching. When a message is filtered out, it is removed from the message pane and given reduced opacity in the visualization. Clicking a user profile image filters to only show messages from that user; this filter also acts as a legend.

![01963ea9-2e22-7608-bc65-35365d454e0f_6_928_966_710_122_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_6_928_966_710_122_0.jpg)

Figure 11: Filters for the messages view, allowing a user to explore a Slack channel by reducing the number of messages they need to read.

The slider filters messages using the same importance score used to detect trending: adjusting the slider controls the percentile of messages displayed.
This allows users to quickly scan and review a channel by adjusting the slider to condense the channel and read only the most relevant messages.

Users can also filter by the 5 most common reactions used in the channel. Clicking one of the reactions filters the view to display only messages with that reaction. Supporting reaction filters allows for emergent behaviour by providing flexible filters (D3). Filtering by exact text matching is done by typing into the search bar on the far right side of the filter bar.

![01963ea9-2e22-7608-bc65-35365d454e0f_6_938_1535_693_404_0.jpg](images/01963ea9-2e22-7608-bc65-35365d454e0f_6_938_1535_693_404_0.jpg)

Figure 12: The messages view displaying a person rather than a channel.

#### 4.4.6 User Messages View

Although we have presented the messages view as built for channels, it is also possible to invert the relationship so that the view displays a user's messages rather than a channel's messages (Figure 12). Often, when searching for information, a person is more important than content [8]. By allowing exploration of the messages sent by a user, we enable exploration of that user (D2, D4).

The user messages view can be entered by clicking on a username or profile picture from any message in Slacktivity, for example in the message stream of the galaxy view or the message pane of the messages view.

This view differs from the messages view of a channel only in that the colours in the all message visualization reflect the channel a message was sent in rather than the user who sent it. Additionally, the filters support filtering by channel rather than by message sender.

### 4.5 Example Tasks

Slacktivity can be used in several ways; we present three sample walkthroughs to illustrate some useful tasks.

#### 4.5.1 Task 1: A New Employee Joins AI Research

Often people transfer teams or join the company as new employees.
New and transferring employees have many questions they need to answer, such as understanding how their new team functions and discovering how their team interacts with the rest of the organization. The galaxy view can give the new employee a broad overview of the scale of the company they have joined, and how big of a niche their team occupies. They can also easily compare how their coworkers use Slack by watching for activity from the prefixes their team channel belongs to and contrasting that with other divisions and prefixes.

#### 4.5.2 Task 2: Exploring a Topic Across the Organization

Another employee might be interested in exploring work BigCorp has done with a particular machine learning technique. The employee might know there is a division that researches artificial intelligence, so they begin at the galaxy view by searching for channels that begin with #research. Upon seeing an active channel, they can navigate to the messages view for that channel. The channel has thousands of messages and they do not have time to scroll through them all, so they move the importance slider to show only the top 5% of messages in the channel. From here they scroll the messages pane and read the full text of the remaining messages. Finally, they notice that about half of the important messages were sent by a single person. From this exploration the employee learned about a new channel they could join, found a knowledgeable person in the organization to ask further questions, and learned specific details from reading important chat messages.

#### 4.5.3 Task 3: Keeping Up with the Organization

Rather than actively exploring Slacktivity, a user can keep a window or tab open in the background to the galaxy view. The user can then go back to the tab occasionally throughout their day and read only the trending channels. Through this, the user can discover if there is an ongoing AMA or important discussion they can either follow or take part in.
They might also see new channels they are interested in. This changes the paradigm of Slack from push communication to pull, allowing a user to keep up with the company by only spending the amount of time they would like. + +## 5 Discussion + +In this paper we described a case study of Slack use at BigCorp. Our case study identified several interesting behaviours and pain points with Slack use at scale. To address this, we designed Slacktivity to augment Slack at BigCorp. However, there are still interesting implications and unanswered questions regarding both our case study and Slacktivity, such as generalisability and evaluation. + +### 5.1 Increased Slack Use + +We hope that Slacktivity helps employees get value out of using Slack. However, this might have the indirect effect of increasing the amount of time employees spend on Slack. Although increased group chat use might yield many benefits for communicating in the workplace, encouraging its use could have drawbacks. In a study on instant messaging use, Cameron and Webster [29] reported that the use of instant messaging results in more interruptions throughout the day. It is unclear whether maximising the effectiveness of Slack would produce a net benefit when contrasted with increased use. + +### 5.2 Slacktivity's Effect on Privacy + +Slacktivity increases the number of ways an employee can view and discover a message, essentially making all messages a little bit less private. For example, the stream in the galaxy view has the added effect of showing employees how public their messages really are. Our case study revealed that some employees might not understand how private their messages are. Depending on the perspective, helping employees understand how public their messages are could be seen as either a positive or a negative. It is negative from an organizational perspective because users might be discouraged from using public channels. 
BigCorp explicitly wants employees to participate in public channels to encourage an environment of collaboration and openness. However, from an employee's perspective, having a realistic view of how private their messages are is important for maintaining trust in the organization.

### 5.3 Scaling Up

BigCorp is a large organization, with more than 10,000 employees. However, there exist even larger organizations with over 100,000 employees. It is unclear whether our system scales to such a large organization, or if an organization that large would use Slack in a similar fashion. For example, if the ratio between channels (8,204) and users (12,084) remains constant, one can expect a workspace with 100,000 employees to have 67,891 channels. It seems likely that so many channels would require another form of organization, perhaps another level of hierarchy to organize channels on top of divisions and prefixes.

We suspect it is unlikely that the median channel gets larger as an organization gets larger. Project and team channels are probably the same size whether an organization has 1,000 employees or 100,000 employees. However, some channels are necessarily company-wide. These channels already suffer because of the huge audience that messages to them address. Sometimes mistakes happen and somebody notifies the entire channel by erroneously using @channel. This problem might be exacerbated when even more employees are in a channel.

### 5.4 Scaling Down

On the opposite end of the spectrum, it is interesting to consider whether our case study and Slacktivity also generalise to small businesses with 100 employees, or even medium-sized businesses with 1,000 employees. The messages view probably has the same value, because it was designed to work with small channels as well as with massive channels. Most of our findings regarding how Slack is used probably scale to medium-sized organizations, as managing that many channels will require strict naming schemes and other forms of organization.
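As a sanity check, the channel-count extrapolation in Section 5.3 above follows directly from holding the channel-to-user ratio constant:

```python
# Reproducing the channel-count extrapolation: if the channel-to-user
# ratio stays constant, scale it up to a 100,000-employee workspace.
channels = 8_204
users = 12_084
projected_users = 100_000

projected_channels = round(channels / users * projected_users)
print(projected_channels)  # 67891, matching the figure quoted above
```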
### 5.5 Generalisability

We have presented a case study of the use of Slack in a large organization. The question stands: do our results generalise to other organizations? This is a difficult question to answer, because most organizations are unwilling to publicly disclose the detailed information described in our case study. However, we have some confidence that some aspects do generalise. 65 of the Fortune 100 companies use Slack, indicating that other companies do use Slack at a large scale. Other large organizations like IBM have channels "public by default" [30]. Beyond these more critical components of our case study, other aspects also appear elsewhere: issue-specific channels are used at companies like Intuit [31], and other companies host events using Slack [32], [33]. It stands to reason that many of the other problems and descriptions of Slack use at scale described in this paper generalise to other large organizations.

A limitation of our work is that we do not evaluate Slacktivity with a user study. However, our example walkthroughs make an argument as to how Slacktivity might be used to solve some of the problems in our case study. Furthermore, this paper introduced novel methods of visualising Slack at scale and a detailed case study of how Slack is used at scale today. Future iterations of this work will be deployed at BigCorp, allowing us to study the long-term impacts a deployment of Slacktivity might have.

## 6 CONCLUSION

In this paper we presented a case study of how Slack is used at BigCorp, a large organization with over 10,000 employees. Our case study highlighted interesting behaviour of Slack usage at a large corporation, and problems encountered by employees using Slack. Based on the case study we introduced a novel visualization technique, Slacktivity, designed to augment Slack in the workplace and enable its use at scale.

## REFERENCES

[1] "Where work happens | Slack."
https://slack.com/intl/en-ca/ (accessed Sep. 19, 2019).

[2] "Microsoft Teams - Group Chat software." https://products.office.com/en-us/microsoft-teams/group-chat-software (accessed Sep. 19, 2019).

[3] H. McCracken, "How Slack's search finally got good," Fast Company, Jul. 10, 2018. https://www.fastcompany.com/90180551/how-slacks-search-finally-got-good (accessed Sep. 20, 2019).

[4] J. D. Herbsleb, D. L. Atkins, D. G. Boyer, M. Handel, and T. A. Finholt, "Introducing Instant Messaging and Chat in the Workplace," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2002, pp. 171-178, doi: 10.1145/503376.503408.

[5] A. Quan-Haase, J. Cothrel, and B. Wellman, "Instant Messaging for Collaboration: a Case Study of a High-Tech Firm," J Comput Mediat Commun, vol. 10, no. 4, Jul. 2005, doi: 10.1111/j.1083-6101.2005.tb00276.x.

[6] S. Whittaker, V. Bellotti, and J. Gwizdka, "Email in Personal Information Management," Commun. ACM, vol. 49, no. 1, pp. 68-73, Jan. 2006, doi: 10.1145/1107458.1107494.

[7] M. M. Skeels and J. Grudin, "When social networks cross boundaries: a case study of workplace use of facebook and linkedin," in Proceedings of the ACM 2009 international conference on Supporting group work, Sanibel Island, Florida, USA, May 2009, pp. 95-104, doi: 10.1145/1531674.1531689.

[8] M. J. Brzozowski, "WaterCooler: exploring an organization through enterprise social media," in Proceedings of the ACM 2009 international conference on Supporting group work, Sanibel Island, Florida, USA, May 2009, pp. 219-228, doi: 10.1145/1531674.1531706.

[9] "With 10+ million daily active users, Slack is where more work happens every day, all over the world," Several People Are Typing, Jan. 29, 2019. https://slackhq.com/slack-has-10-million-daily-active-users (accessed Sep. 19, 2019).

[10] A. X. Zhang and J.
Cranshaw, "Making Sense of Group Chat Through Collaborative Tagging and Summarization," Proc. ACM Hum.-Comput. Interact., vol. 2, no. CSCW, p. 196:1-196:27, Nov. 2018, doi: 10.1145/3274465.

[11] A. D. Marwick, "Knowledge management technology," IBM Systems Journal, vol. 40, no. 4, pp. 814-830, 2001, doi: 10.1147/sj.404.0814.

[12] C.-Y. Wang, H.-Y. Yang, and S. T. Chou, "Using peer-to-peer technology for knowledge sharing in communities of practices," Decision Support Systems, vol. 45, no. 3, pp. 528-540, Jun. 2008, doi: 10.1016/j.dss.2007.06.012.

[13] P. Tyndale, "A taxonomy of knowledge management software tools: origins and applications," Evaluation and Program Planning, vol. 25, no. 2, pp. 183-190, May 2002, doi: 10.1016/S0149-7189(02)00012-5.

[14] M. A. Smith and A. T. Fiore, "Visualization Components for Persistent Conversations," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2001, pp. 136-143, doi: 10.1145/365024.365073.

[15] T. C. Turner, M. A. Smith, D. Fisher, and H. T. Welser, "Picturing Usenet: Mapping Computer-Mediated Collective Action," J Comput Mediat Commun, vol. 10, no. 4, Jul. 2005, doi: 10.1111/j.1083-6101.2005.tb00270.x.

[16] A. X. Zhang, L. Verou, and D. Karger, "Wikum: Bridging Discussion Forums and Wikis Using Recursive Summarization," in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, Oregon, USA, Feb. 2017, pp. 2082-2096, doi: 10.1145/2998181.2998235.

[17] B. Kerr, "Thread Arcs: an email thread visualization," in IEEE Symposium on Information Visualization 2003 (IEEE Cat. No.03TH8714), Oct. 2003, pp. 211-218, doi: 10.1109/INFVIS.2003.1249028.

[18] S. Havre, E. Hetzler, P. Whitney, and L. Nowell, "ThemeRiver: visualizing thematic changes in large document collections," IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 1, pp. 9-20, Jan. 2002, doi: 10.1109/2945.981848.

[19] S.
Narayan and C. Cheshire, "Not Too Long to Read: The tldr Interface for Exploring and Navigating Large-Scale Discussion Spaces," in 2010 43rd Hawaii International Conference on System Sciences, Jan. 2010, pp. 1-10, doi: 10.1109/HICSS.2010.288.

[20] S. Fu, J. Zhao, W. Cui, and H. Qu, "Visual Analysis of MOOC Forums with iForum," IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 201-210, Jan. 2017, doi: 10.1109/TVCG.2016.2598444.

[21] P. S. Newman, "Exploring Discussion Lists: Steps and Directions," in Proceedings of the 2nd ACM/IEEE-CS Joint Conference on Digital Libraries, New York, NY, USA, 2002, pp. 126-134, doi: 10.1145/544220.544245.

[22] G. D. Venolia and C. Neustaedter, "Understanding Sequence and Reply Relationships Within Email Conversations: A Mixed-model Visualization," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2003, pp. 361-368, doi: 10.1145/642611.642674.

[23] M. Wattenberg and D. Millen, "Conversation Thumbnails for Large-scale Discussions," in CHI '03 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2003, pp. 742-743, doi: 10.1145/765891.765963.

[24] T. Bergstrom and K. Karahalios, "Conversation Clusters: Grouping Conversation Topics Through Human-computer Dialog," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 2349-2352, doi: 10.1145/1518701.1519060.

[25] J. Donath and F. B. Viégas, "The Chat Circles Series: Explorations in Designing Abstract Graphical Communication Interfaces," in Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, New York, NY, USA, 2002, pp. 359-369, doi: 10.1145/778712.778764.

[26] F. B. Viégas and J. S. Donath, "Chat Circles," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1999, pp. 9-16, doi: 10.1145/302979.302981.
+ +[27] G. Leshed et al., "Visualizing Real-time Language-based Feedback on Teamwork Behavior in Computer-mediated Groups," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 537-546, doi: 10.1145/1518701.1518784. + +[28] S. Fu, J. Zhao, H. F. Cheng, H. Zhu, and J. Marlow, "T-Cal: Understanding Team Conversational Data with Calendar-based Visualization," in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2018, p. 500:1-500:13, doi: 10.1145/3173574.3174074. + +[29] A. F. Cameron and J. Webster, "Unintended consequences of emerging communication technologies: Instant Messaging in the workplace," Computers in Human Behavior, vol. 21, no. 1, pp. 85-103, Jan. 2005, doi: 10.1016/j.chb.2003.12.001. + +[30] "How the engineering team at IBM uses Slack throughout the development lifecycle," Several People Are Typing, May 22, 2017. https://slackhq.com/how-the-engineering-team-at-ibm-uses-slack-throughout-the-development-lifecycle (accessed Apr. 25, 2020). + +[31] Slack, "Intuit | Customer Stories," Slack. https://slack.com/intl/en-ca/customer-stories/intuit (accessed Apr. 25, 2020). + +[32] "From desert sun to virtual space: Taking our annual global sales event fully digital," Several People Are Typing, Mar. 16, 2020. https://slackhq.com/staging-digital-events-at-slack (accessed Apr. 25, 2020). + +[33] "Twitter goes remote and hosts global all-hands in Slack," Several People Are Typing, Mar. 18, 2020. https://slackhq.com/twitter-goes-remote-with-slack (accessed Apr. 25, 2020). 
§ SLACKTIVITY: SCALING SLACK FOR LARGE ORGANIZATIONS

Figure 1: Slacktivity's galaxy view. Each circle in the visualization is a Slack channel. (Note: division names have been changed to remove identifying information, and select channel names have been replaced with .)

§ ABSTRACT

Group chat programs, such as Slack, are a promising and increasingly popular tool for improving communication among collections of people. However, it is unclear how the current design of group chat applications scales to support large and distributed organizations. We present a case study of a company-wide Slack installation in a large organization (>10,000 employees) incorporating data from semi-structured interviews, exploratory use, and analysis of the data from the Slack workspace itself. Our case study reveals emergent behaviour, issues with exploring the content of such a large Slack workspace, and the inability to keep up with news from across the organization.
To address these issues, we designed Slacktivity, a novel visualization system to augment the use of Slack in large organizations, and demonstrate how Slacktivity can be used to overcome many of the challenges found when using Slack at scale.

Keywords: Group chat, visualization.

§ 1 INTRODUCTION

As organizations become larger and more distributed, communication becomes increasingly difficult. Even when workers are co-located, technologies like email and instant messaging are frequently used as an alternative to direct face-to-face communication. Email is often thought of as asynchronous and useful for long-term searching, while instant messaging is useful for synchronous communication and quicker conversations. More recently, group chat systems (such as Slack [1] and Microsoft Teams [2]) have become popular, combining benefits of both email and instant messaging, while offering the potential for increased collaboration and coordination.

When used over a period of time, a group chat workspace has the potential to be a useful "knowledge base" for a company, storing a history of communication about a given topic. However, the design of current group chat systems is optimized for near real-time communication, making it difficult to derive insights from this wealth of historical information [3]. This problem is exacerbated for large organizations using Slack, where there may be thousands of channels and millions of messages. Additionally, with so many channels and messages, it is not feasible to keep up with all activity across all channels to get a sense of what is going on throughout the organization, and the task of figuring out which channels to join or post questions to becomes more difficult.
In this paper we present a case study of the internal usage of Slack at BigCorp,^1 a ten-thousand-person software company which has been using a unified Slack workspace for the past 3+ years, with over 15,000 channels created and 65 million messages sent over that time. Our case study combines exploratory use of the Slack workspace, formal and informal interviews, and data analysis of the use of Slack by BigCorp. Using this data, we identified strategies employees use to cope with Slack at scale, as well as the pain points of using Slack, such as the inability to find historical information and the need to keep up with their organization better.

Using the findings from our case study, we present Slacktivity, a tool to address the limitations of Slack when used within a large organization. Slacktivity gives an overview of the channels across the entire organization with a cluster view, and also allows for detailed exploration of the entire history of a channel.

This paper makes two main contributions: a case study to better understand how group chat is used in large organizations, and a novel interactive visualization system to augment the use of Slack in a workplace.

^1 Name anonymised for submission.

§ 2 BACKGROUND AND RELATED WORK

§ 2.1 INTRODUCTION TO GROUP CHAT

Group chat combines the functionality of instant messaging and Internet Relay Chat (IRC) with rich communication. The most basic form of communication in group chat is a message. Messages can contain text, images, or other attachments, and can reply to one another, forming a thread. However, threads in group chat are typically limited to one level of replies. In addition to replying to a message, a user can also react. Reactions are small emoji-like glyphs that are counted just below the message. When responding with a reaction there is no notification sent, making this a non-intrusive form of communication.
Messages can be sent to three different locations: direct messages, private channels, and public channels. Direct messages can be sent to groups of up to 8 people. Channels, on the other hand, can hold any number of people and are identified by a channel name (e.g., #general). Channels can be either public, meaning anybody can join, or private, requiring an invitation. Finally, all channels and direct messages are part of a workspace, the highest level of hierarchy in group chat.

Our work explores the use of group chat in the workplace, and the use of visualization to improve its usage. This work focuses on, and refers to, Slack as the group-chat system, but the concepts and ideas apply equally to other systems (such as Microsoft Teams [2]).

§ 2.2 COMMUNICATION IN THE WORKPLACE

Technology plays a key role in supporting communication in the workplace, but its use is varied and complicated. When face-to-face interaction is needed but not possible, video conferencing software like Skype or Google Hangouts is used. However, most communication does not require such an expressive communication style and can instead be textual. Recently, instant messaging has been a popular communication tool in the workplace [4], [5]. A case study [5] of a large tech company from 2005 showed that 38% of communication took place over instant messaging, nearly equal to the 39% over email. Only 23% of communication was face-to-face or by phone. Email is also used extensively, and many people view their email as more than a communication tool: as a "personal information store" [6].

Another aspect of communication in the workplace is social media. Traditional non-workplace social media platforms like Facebook and Twitter have been demonstrated to be useful to organizations through the creation of weak ties [7]. Furthermore, some social networks have been designed specifically for use in the workplace.
WaterCooler [8] was designed for use at HP to aggregate various content from across the organization. Another, more commercialised approach to social networking is the Yammer tool, which aims to bring a social network more comparable to Facebook into the workplace.

In recent years group chat has been used increasingly in the workplace, beginning with the inception of Slack in 2013, which had over 10 million daily active users by 2019 [9]. Despite the rapid adoption of group chat in the workplace, Zhang and Cranshaw [10] identify several issues associated with group chat. They report that employees struggle to find old information; chat history is overwhelming to newcomers; and employees fail to keep up with multiple channels. Unlike email, it is unclear if group chat is also used as a "personal information store".

Communication tools have the opportunity to be a critical part of an organization's knowledge management strategy, as critical discussions often occur over email, instant messaging, or group chat. Research has shown how an organization can use instant messaging [11], [12] and email [13] as part of their knowledge management strategy, but it is unknown how group chat can fit in.

§ 2.3 VISUALIZATION OF CONVERSATION

Conversation exists in many mediums, such as forums, emails, and instant messaging, and the academic literature has explored many different ways to visualize it, either during the creation of the conversation or after the conversation has occurred. Pioneering work began with the visualisation of forums, with a special focus on Usenet. Smith and Fiore [14] visualized Usenet threads by displaying the structure of the thread as a tree augmented with information about the users involved in the thread. Turner et al. [15] visualize Usenet hierarchically to make recommendations on how to cultivate and manage Usenet groups.
Wikum [16] uses recursive summarisation and visualisation to overcome the scale of online discussions. Other techniques have explored visualizing conversations by taking advantage of threads, including Thread Arcs [17], ThemeRiver [18], tldr [19], iForum [20], and Newman's work [21].

Venolia and Neustaedter [22] visualized email by creating conversations from sequence and reply relationships, building on the idea of threading. Conversation thumbnails [23] visualize each email in a conversation as a rectangle, displayed in the order received and representing the complexity of the conversation.

Bergstrom and Karahalios [24] designed Conversation Clusters as a method of archiving instant message conversations in a manner that makes it easy to retrieve a desired conversation. Conversation Clusters visualize groups of salient words using colour. However, most work in instant messaging has looked at augmenting communication by changing the interface people interact with, such as giving people movable circles for their avatars [25], [26] or using a visualization to foster positive behavioural changes [27]. Our work differs from prior work visualising conversation in that we augment each mark with more information from the chat and also break the sequential linear relationship with time.

§ 2.4 UNDERSTANDING GROUP CHAT

Efforts to understand group chat better are relatively rare. Our work is most related to T-Cal [28], which visualizes Slack conversations using a calendar-based visualization and allows for in-depth exploration using its thread-pulse design. However, T-Cal was designed to address the needs of Slack workspaces for massive open online courses (MOOCs), which are only used for a set amount of time. Another approach was taken by Zhang and Cranshaw [10], who aimed to improve sensemaking of group chat by creating a Slack bot, Tilda, that assisted in collaborative tagging of conversations.
Both our case study of Slack and Slacktivity build on and draw inspiration from T-Cal and Tilda; however, our focus is on discovering and addressing the particular problems faced when deploying Slack at a large scale.

§ 3 CASE STUDY: SLACK AT BIGCORP

To get a better understanding of how Slack is used "at scale", we studied Slack usage at BigCorp - a multinational design software company with over 10,000 employees distributed across dozens of locations around the world. BigCorp has officially adopted and encouraged the use of Slack as a platform for group chat. Further, BigCorp has consolidated Slack activity into a single unified Slack workspace, rather than allowing teams to maintain their own individual Slack workspaces.

§ 3.1 METHODOLOGY

Our case study incorporates data from several sources: aggregate analysis from an export of the entire (non-private) history of the Slack workspace, extensive exploratory analysis, an interview with the Director in charge of Slack at BigCorp, informal discussions with dozens of employees about their Slack usage, and formal interviews with 5 employees (3 heavy users of Slack, 1 occasional user, and 1 infrequent user, referred to as P1-P5).

Our aggregate data analysis uses all Slack messages sent on all internally-public channels (that is, channels that all employees of BigCorp can see). This includes all of the data about threads and reactions. Every public channel is included, along with its metadata such as description and creation time. We analyse this data using only simple summary statistics and visualisation. We also combined the Slack user profiles with data from HR sources for properties like job title and organizational division.

We employed exploratory use of the Slack workspace for some of our results. This consisted primarily of the first author using the workspace and exploring channels while taking notes on their contents. For some results, like the types of channels, an open coding scheme was used.
Results stemming from exploratory use are not intended to be generalisable or complete; instead, they aim to demonstrate some of the ways Slack can be used at scale.

The formal semi-structured interviews were each 1 hour long. The interviews looked to answer three key questions: how do employees use Slack during the workday; how do employees find information on Slack; and how do employees keep up to date with what is happening at BigCorp (not necessarily using Slack). We analysed these interviews by transcribing them and identifying interesting and recurring themes.

§ 3.1.1 DATA ANONYMIZATION

Given the private nature of communication tools, we took several measures to preserve employee privacy. We only analysed "public" data that all employees at BigCorp have access to. We also did not analyse archived channels, because people may "archive" a channel in an attempt to "delete" it. Additionally, we have anonymized all employees in the paper and accompanying material by blurring profile images and replacing all names with generated pseudonyms. We carefully examined each message before allowing it to appear in the paper or accompanying video. Any time a channel name would be easily identifiable outside the company, we replace it with a vegetable or fruit name and wrap it in angle brackets (e.g., ).

§ 3.2 ORIGINS OF SLACK AT BIGCORP

BigCorp has been running a company-wide Slack workspace since May 2016 (3.5 years at the time of data analysis). Before that date, there were over 50 fragmented Slack workspaces maintained by individual teams. Responding to the grass-roots demand for Slack, BigCorp decided to officially support Slack, consolidated all activity into a single shared Slack workspace, and encouraged its use as an "approved" communication mechanism.
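The channel-name anonymisation described in Section 3.1.1 requires a stable mapping, so that a given channel always receives the same alias. A minimal sketch of one way to do this (the pseudonym pool and the hashing choice are our own illustration, not BigCorp's actual procedure):

```python
import hashlib

# Illustrative pseudonym pool; the paper only says "a vegetable or fruit name".
PSEUDONYMS = ["cucumber", "mango", "turnip", "papaya", "radish", "kiwi"]

def pseudonymise(channel_name):
    """Map an identifiable channel name to a stable <fruit/vegetable> alias."""
    # Hash the name so the same channel always maps to the same alias.
    digest = hashlib.sha256(channel_name.encode("utf-8")).hexdigest()
    alias = PSEUDONYMS[int(digest, 16) % len(PSEUDONYMS)]
    return f"<{alias}>"
```

With a small pool, two channels can collide on one alias; a real anonymisation pipeline would track assignments to keep aliases unique.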
§ 3.2.1 PUBLIC BY DEFAULT

The vice president at BigCorp who "sponsored" (paid for) the initial consolidation of Slack activity under a single Slack workspace agreed to do so under the condition that it would be run with a "public by default" policy; that is, that all channels would be set to "public", so they could be viewed and joined by everyone in the company, not just those in a particular team. The hope with this policy was that it would encourage openness and collaboration across the company and break down historic organizational "silos" where communication between separate teams and divisions had been limited. (Note: "public" channels are still only visible to employees of BigCorp, not the "general public".)

Though the policy dictates that all channels are open to everyone by default, channels do still have "members" who are subscribed to a channel. For example, a channel for a specific small development team might (and often does) only have members from that team. This can lead to the feeling that a channel is in fact private, even though it is public and anyone in the company could potentially find it. Our raw data analysis identified numerous cases of people using language inappropriate for a corporate environment, suggesting some users might not appreciate how "open" their communication on these Slack channels is. However, participant P3 was well aware of the privacy of their messages and stated: "I'll use [private messages] ... because there's just some things that don't need a public forum."

§ 3.3 ANALYSIS OF SLACK USAGE DATA

At the time of writing the Slack workspace has a total of 12,084 members and 8,204 public channels. The number of public channels has nearly doubled over the past year (Figure 2). There are also a small number of private channels, limited to channels where confidential HR discussions occur. We do not analyse private channels to respect employee privacy.
+ +Figure 2: Number of channels over time. + +In total, 82 million messages have been sent in the Slack workspace, with 88% of those shared as private direct messages, 11% in public channels, and 1% in private channels. Users are able to use the "direct messaging" functionality of Slack to have private conversations between 2 and 8 people. The relatively high percentage of Slack activity occurring via direct messaging is the combined effect of people using Slack as a one-to-one instant messaging tool and people creating "private DM groups" to essentially circumvent the mandate that all channels be public. + +Despite the volume of private direct messages, the 11% of messages sent in public channels means there have been over 9 million public Slack messages, visible and searchable to all BigCorp workers and serving as a potentially rich source of company information. With such a large number of users and channels, usage patterns are considerably varied. + +§ 3.3.1 CHANNELS + +In addition to the 8,204 active public channels, there are 6,891 archived public channels whose content (over 3.3 million messages) is still accessible through Slack search. The activity of the un-archived, technically "active" channels varies greatly: the least active 20% of channels generate less than one post per month, the 90th-percentile channel generates 6.8 messages per month, and the most active channels generate over 200 messages per month (Figure 3). + +Channel membership counts are also widely distributed, with half of all channels having fewer than 13 members, while the 40 most popular channels have over 500 members each, including the three channels to which all employees are automatically subscribed (Figure 4). + +Figure 3: The number of messages posted per week, per channel. 
(Each dot represents one channel.) + +Figure 4: Channel membership distributions. + +§ 3.3.2 USERS + +The 12,084 members of the Slack workspace represent nearly every worker type at BigCorp, ranging from the CEO and VPs to temporary contractors and outside collaborators (with limited access). Unsurprisingly, usage patterns vary among members. There are 9,417 weekly active members (who have read at least one public channel in the past week), and 7,973 members who have posted at least one message in the past week. On the high-usage side, an average of 77 users post more than 100 messages in a week, with the most prolific members posting more than 400 messages in a week (Figure 5, horizontal axis). + +Figure 5: Plot of the number of channels subscribed to vs. the average number of messages posted each week. (Note: each dot represents a single user.) + +We also see a wide range in how members subscribe to channels (Figure 5, vertical axis), with the median user subscribed to 16 channels. However, there are 168 users subscribed to over 100 channels, and 7 users subscribed to more than 200 channels. For comparison, when using the native Slack client on a 1920x1080 resolution display, only 17 channel names will fit before scrolling is required. + +§ 3.3.3 REACTIONS + +A distinguishing feature of Slack compared to more "traditional" or formal means of communication in corporate environments is the use of reactions to posts. Reactions are a quick way to respond to a message in Slack and take the form of a small emoji-like glyph or animation (Figure 6). In the corpus of public messages, a total of 422,094 reactions have been left on 220,032 unique messages. Of all reactions, the :+1: "thumbs up" emoji is the most frequent with over 166,000 uses (39% of all reactions), while the popular "party parrot" has been used nearly 22,000 times (5% of all reactions). 
Each of the 2,774 reactions used is shown in Figure 6, with their areas scaled proportionally to their relative usage rates. + +§ 3.4 EMERGENT BEHAVIOUR + +During our exploration of the Slack workspace we discovered some interesting behaviours which had emerged, in part, to facilitate using Slack at this scale. + +§ 3.4.1 NAMING SCHEME + +The volunteer Slack administration team has instituted a fairly strict naming scheme. Words are separated by a dash (-) and should be as short as possible. Non-business channels are prefixed with #fun-, and technology channels such as #tech-git are prefixed with #tech-. Most other channels are prefixed by their organizational unit, such as #research- or #hr-. + +Figure 6: Emoji-style reactions used on messages in the Slack workspace, scaled by area proportional to usage. + +§ 3.4.2 TYPES OF CHANNELS + +Channels naturally fell into a series of groupings. We determined the groupings by using an open coding scheme during our exploratory use of the Slack workspace. + +Q&A channels are very structured channels for providing help. These channels involve a specific format for sending every message and strictly follow rules for creating threads. + +News channels typically contain a collection of links and other small resources, and they often receive little engagement. + +Management Team channels are composed of the direct reports of a manager and contain messages such as an employee declaring they are working from home. + +Project Team channels are used to collect people on a specific project, who may come from different divisions and management chains. + +Issue/Event channels are limited-lifetime channels spun up to collect communication about one specific issue. Their name corresponds directly to the issue id in the bug management system. These channels have rapid, bursty communication and fall into disuse after the issue is resolved (typically in a matter of days). 
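The naming scheme lends itself to programmatic grouping, which the system later exploits for clustering. A rough sketch of the grouping rule follows; the helper name and the handling of un-prefixed channels are our own assumptions, not part of BigCorp's tooling:

```javascript
// Extract the prefix group from a channel name following the scheme
// described above: dash-separated words, with the first word acting
// as the prefix (e.g. "#fun-", "#tech-", "#research-").
function channelPrefix(name) {
  const match = name.match(/^#?([a-z0-9]+)-/);
  return match ? `#${match[1]}-` : null; // null: channel has no prefix
}
```

For example, `channelPrefix('#tech-git')` yields `'#tech-'`, while an un-prefixed name such as `'#general'` yields `null`.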
+ +§ 3.4.3 REACTIONS + +Several norms formed around the way reactions were used within channels. For instance, in Q&A channels, the organizers often use the "eyes" emoji to indicate that a request is being addressed (essentially assigning the task). After a request is complete, the message is marked with a checkmark. + +§ 3.4.4 EVENTS + +One interesting and effective type of event is the ask-me-anything (AMA) session. In these events, upper management blocks off a portion of their time for employees to ask them any questions they would like. Employees engage strongly with these events and even read AMAs from divisions they are not associated with. Some of our interview participants (P2, P3) reported being very interested in hearing what upper management had to say. Other events like product launches also occur on Slack. + +Figure 7: Asking a question in a help-specific channel with reactions. + +§ 3.4.5 EMAIL VS SLACK + +Although a large proportion of users engage regularly with Slack, there is still a population of people who continue to use email, and even the employees who primarily use Slack still revert to email at times. Anecdotally, we have found that Slack is often more quickly adopted by newer/younger employees. P1 uses email "when I want to send something a bit more official". P4 preferred email almost exclusively because it is "Easier to search for things". Interview participants also reported using email to communicate with people higher up in the management hierarchy. In fact, we noticed a trend in the data suggesting that upper-management level employees tend to use Slack less often than the rank-and-file. Although there is a need for email, P3 prefers Slack because "email threads [become], you know, untenable, they would grow and they would fork." 
Both Slack and email have their own niche in which they are useful. + +§ 3.5 PRIMARY PROBLEMS + +Our case study identified many interesting types of Slack usage at BigCorp; however, it also identified many problems employees encounter in day-to-day use. Two problems were particularly common throughout the case study: finding historical information, and keeping up with all the activity happening on Slack. These issues align strongly with Zhang et al.'s [10] findings, which examined smaller Slack workspaces. + +§ 3.5.1 FINDING HISTORICAL INFORMATION + +The first common theme throughout the case study was users having difficulty finding historical information in the Slack workspace (P1, P2, P3). Even though the workspace contains a great deal of potentially useful information, it is not easily accessible. P1 specifically mentioned "I find the search tool in Slack a bit cumbersome, I usually avoid having to use it" and "I might search in Slack [to find information]..., but usually that is not very productive." It is fairly unsurprising that finding historical information is difficult given the millions of messages and thousands of channels. Given the sheer quantity and timespan of messages, an interface that shows only one channel's messages, and only a small sample of messages at a time, will inevitably make finding historical information difficult. + +§ 3.5.2 KEEPING UP WITH THE WORKSPACE + +The second especially common problem was people trying to "keep up" with everything happening in the workspace. With so many channels and messages, it can understandably be overwhelming. P1 and P2 both subscribe to a large number of channels, but then rarely check them, leading to hundreds of unread messages. We found that people liked to subscribe to many channels for topics they were interested in but could not practically read all of the messages posted there. 
A related problem is trying to "catch up" with your channels after a vacation, with some people opting to simply "mark all messages as read" rather than trying to read or skim what they missed. + +§ 4 SLACKTIVITY + +Slack's focus is to provide a communication tool; however, our case study shows that users need more than just communication. Employees need to be able to access historical information and also keep up with their organization. We designed a system called Slacktivity to augment the use of Slack without hindering the ability to use Slack as a communication tool. + +§ 4.1 DESIGN GOALS + +The design of Slacktivity was guided by a set of five design goals. + +§ D1: MAINTAIN CONSISTENT INTERFACE VOCABULARY WITH SLACK + +Our system should share the interface vocabulary already used in Slack. By reusing concepts users are familiar with, they can transfer knowledge they already have and learn our system faster and more naturally, without confusion. We found this principle especially important when sorting information. + +§ D2: ENCOURAGE EXPLORATION + +The linear, time-centric nature of Slack makes exploration difficult. Our system should support exploration by breaking out of the linear consumption of messages without breaking time continuity. Encouraging exploration also stands to improve finding historical information, as some information, such as entire topics or unknown query terms, is not directly searchable. + +§ D3: ENABLE AND SUPPORT EMERGENT BEHAVIOURS + +Emergent behaviour is difficult to predict, and as such it is impossible to design for it directly. However, our system should enable and support emergent behaviour by allowing for flexibility. By supporting emergent behaviour, we hope to enable events like AMAs in addition to other behaviour like the use of reactions for specific purposes. 
+ +§ D4: ENABLE CONSUMING INFORMATION AT VARYING DETAIL LEVELS + +Slack's goal as a communication tool necessitates a focus on individual messages; however, users interact with Slack at a higher level when they are seeking information. As a result, our system should support consumption at varying degrees of detail, including levels not explicitly part of Slack. Supporting varying levels of detail can also assist users in keeping up with the workspace because they do not have to look at individual messages. + +§ D5: TAKE ADVANTAGE OF ORGANIZATIONAL DATA + +Organizations have other sources of data, such as HR databases. Wherever possible, we aim to take advantage of that information to contextualize the information found within Slack against these other, existing sources of data. + +§ 4.2 IMPLEMENTATION + +The system is implemented as a web application using React combined with D3.js. Because the dataset is so large, we serve aggregate data from a MongoDB database via a Node.js server. + +§ 4.3 GALAXY VIEW + +The galaxy view is the home page of the visualization and one of the two main screens (Figure 8). It consists of four sections: clusters, trending, stream, and search. The galaxy view is designed to support exploration of the Slack workspace, as well as keeping up with the rest of the organization. Additionally, it acts as a jumping-off point for accessing the messages view, the other main screen. + +Figure 8: Galaxy view of Slacktivity. + +§ 4.3.1 CLUSTERS (D4) + +In the center of the screen is a series of hierarchical clusters (Figure 1). There are three concepts to keep in mind: channels, prefix groups, and divisions. We will describe the clustering from the bottom up, starting at the channels. Each circle is a channel. The radius of the circle is proportional to the square root of the number of members in the channel. 
The colour of the circle is determined by the organization to which the majority of the channel's members belong. If there is no majority ( $< {50}\%$ of members from a single organization), the circle is coloured grey. The data on employee organization is obtained from an HR data source (D5), as Slack does not provide it. Additionally, the saturation of each circle reflects the strength of the majority: for example, a pure dark blue circle consists almost entirely of employees from the blue division. + +The next level of clustering, prefix groups, is formed by the prefix of the channel name; for example, the #spg- channels form one prefix group. Large prefix groups are identified by their label, but smaller prefixes are left unlabelled to improve readability. + +There is one final level of clustering, divisions, which groups the prefixes by the majority of the channel colours within each prefix. For example, the #tech- channels are mostly grey; therefore, that group of channels appears with other Misc channels. This last level of clustering is meant to represent the different divisions in the organization. + +The visualization is generated using a circle-packing algorithm from D3.js, with the hierarchy levels described here. It also supports zooming to allow users to more carefully select a channel they might be interested in and to explore some of the smaller channels (D2). + +The galaxy visualization reveals interesting features of the organization. For example, the green channels consist of support teams, which explains why green channels often appear inside other divisions: these channels bridge the gap between support and product teams. Additionally, although most of the grey channels inside "Misc", such as #tech- or #fun- channels, are expressly created for all employees, there are some groups of channels that represent a product of BigCorp, like #- . 
These are inter-division products that combine employees from across BigCorp, which is why they are classified as Misc. + +§ 4.3.2 THUMBNAILS (D2, D4) + +Upon hovering over a channel in the cluster visualization, a thumbnail appears displaying useful information (Figure 9): the channel name, number of members, total number of messages, a line graph of the number of messages sent since the Slack workspace was created, and the top reactions used in the channel. A user can use these features to easily determine whether they would like to join a channel or explore it further. The thumbnail indicates how casual the channel is (e.g., if the reactions include a party parrot, the channel is likely casual), how active it is, and how many people a user would need to interact with. Thumbnails encourage exploration by allowing users to interactively investigate channels they might find interesting. Furthermore, they provide a level of detail that Slack does not by showing the lifetime of a channel. + +Figure 9: Channel thumbnail, showing the channel name across the top, the current number of members, the total number of messages, a small trend line indicating messages sent over time, and the top 5 used reactions. + +§ 4.3.3 STREAM + +On the right side of the screen is a stream of all messages as they come into the Slack workspace (Figure 8). This is a livestream of all messages; no messages are filtered. However, they are grouped into their respective channels, with previous messages kept for context. As a message comes into the stream, it is also highlighted in the galaxy view with a sonar effect that slowly dissipates as the message grows older; some sonar effects can be seen in the blue division. The clustering design of the visualization offers an additional benefit: it can be used to see which parts of the organization are more active than others. 
By seeing which parts of the company are active, an employee can keep up with company news. + +§ 4.3.4 TRENDING + +On the left side of the screen, a single trending channel is chosen to be displayed (Figure 8, left). We compute an importance score for each message and take the sum of the importance of all messages sent to a channel, weighted inversely by the number of seconds since each message was sent; recent messages therefore carry higher weight. For each individual message we calculate five metrics: the length of the message in characters (capped at 80), the total number of replies the message has received, the total number of reactions, the number of unique reactions, and the number of attachments the message has. Each of these values is normalised by dividing it by the largest observed value, so that all numbers fall between 0 and 1. We then take the sum of these normalised values: + +importance $= \left| \text{replies} \right| + \min\left( \text{len}\left(\text{msg}\right), 80 \right) + \left| \text{reactions}_{\text{unique}} \right| + \left| \text{reactions} \right| + \left| \text{attachments} \right|$, where each term is normalised to $\left\lbrack 0,1\right\rbrack$ as described above. + +By surfacing a trending channel, we give employees the ability to keep up with the company, including paying attention to AMAs as they happen. + +§ 4.3.5 SEARCH (D2) + +The user can also search for a channel name by typing into the text box across the top of the screen (Figure 8). The search results use the same visualization as the thumbnails in the cluster visualization (Figure 9). Users can search to discover new channels they are not already part of, as well as to explore the Slack workspace as a whole. + +§ 4.4 MESSAGES VIEW + +While the galaxy view gives an overview of the Slack workspace, the messages view (Figure 10) is designed to support detailed exploration and finding historical information. The messages view can be accessed either by searching for a channel from the galaxy view or by clicking on a channel in the cluster visualization. 
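To make the trending computation of § 4.3.4 concrete, here is a minimal sketch. The field names (replies, text, uniqueReactions, reactions, attachments, ts) and the maxima bookkeeping are our own assumptions; the actual Slacktivity implementation may differ:

```javascript
// Normalize a value by the largest value observed, so each term is in [0, 1].
function normalize(value, max) {
  return max > 0 ? value / max : 0;
}

// Per-message importance: the sum of five normalized metrics
// (replies, capped message length, unique reactions, reactions, attachments).
function importance(msg, maxima) {
  return (
    normalize(msg.replies, maxima.replies) +
    normalize(Math.min(msg.text.length, 80), 80) +
    normalize(msg.uniqueReactions, maxima.uniqueReactions) +
    normalize(msg.reactions, maxima.reactions) +
    normalize(msg.attachments, maxima.attachments)
  );
}

// Channel trending score: importance summed over messages, each weighted
// inversely by its age in seconds so that recent messages count for more.
function trendingScore(messages, maxima, nowSeconds) {
  return messages.reduce(
    (sum, m) => sum + importance(m, maxima) / Math.max(1, nowSeconds - m.ts),
    0
  );
}
```

With this weighting, two channels with identical message content rank differently depending on recency, which is what lets the trending panel surface an ongoing AMA.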
+ +§ 4.4.1 CHANNEL OVERVIEW + +Across the top of the page is the channel overview (Figure 10A). This includes the channel name, topic, and description. Additionally, there is a deep link beside the channel name that goes directly to that channel in the Slack application (D1). + +§ 4.4.2 ALL MESSAGE VISUALIZATION + +The primary view of this visualisation is the all message visualisation (Figure 10C). It represents all of the messages visible in the current time slice. Each message is represented as a horizontal bar, where the colour represents the user who sent the message and the length of the bar is proportional to the length of the message. Only the 5 users who have sent the most messages across the history of the channel are coloured, to reduce visual clutter. Threaded replies appear as smaller boxes underneath the message they belong to. Each column of messages represents either a week or a month, depending on the period of time currently selected. The vertical position of each message is determined by simply stacking bars upwards until all of the messages have been placed. The height of each set of messages therefore represents how active the channel was for that time bucket. Hovering over a message displays a thumbnail showing the message and any threaded replies it has. + +Figure 10: Messages view for a single channel with sections for A) channel overview, B) filters, C) all message visualization, D) timeslice, and E) message pane. + +§ 4.4.3 MESSAGE PANE (D1) + +Along the right side is the messages pane, which is designed to anchor the user back into the Slack workspace (Figure 10E). It is sorted chronologically, just like Slack, with the newest messages appearing at the bottom. Each message is deep linked back into Slack. There is a light grey viewfinder in the channel visualization to represent the current scroll position of the message pane. 
Clicking on a message in the visualization will scroll to that message in the message pane. + +§ 4.4.4 TIMESLICE (D4) + +The timeslice is chosen using the timeline across the bottom of the page (Figure 10D). The line graph displays how many messages were sent each month: each point in the graph is a month, with the y-axis representing the total number of messages sent that month. There are two preset buttons for choosing "this month" as well as "this year". The timeslice also limits the number of messages displayed in the message pane. + +§ 4.4.5 FILTERING + +All filtering is done by the filter bar just below the channel overview (Figure 10B and Figure 11). Filters include message sender, important messages, reaction, and exact text matching by message. When a message is filtered out, it is removed from the messages pane and also given reduced opacity in the visualization. Clicking a user profile image filters to only show messages from that user; this filter also acts as a legend. + +Figure 11: Filters for the messages view, allowing a user to explore a Slack channel by reducing the number of messages they need to read. + +The slider filters messages using the same importance score used to detect trending: adjusting the slider controls the percentile of the messages displayed. This allows users to quickly scan and review a channel by condensing it and reading only the most relevant messages. + +Users can also filter by the 5 most common reactions used in the channel. Clicking one of the reactions filters the view to display only messages with that reaction. Supporting reaction filters allows for emergent behaviour by giving flexible filters (D3). Filtering by exact text matching is done by typing into the search bar on the far right side of the filters. + +Figure 12: The messages view displaying a person rather than a channel. 
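One plausible implementation of the importance slider described in § 4.4.5, as a sketch under our own assumptions (each message is assumed to carry a precomputed importance score; the function name is hypothetical):

```javascript
// Keep only messages whose importance is at or above the chosen percentile.
// percentile = 0 shows everything; percentile = 95 shows roughly the top 5%.
function filterByImportancePercentile(messages, percentile) {
  if (messages.length === 0) return [];
  // Sort ascending by importance, then pick the score at the percentile index
  // as the cutoff threshold.
  const sorted = [...messages].sort((a, b) => a.importance - b.importance);
  const idx = Math.min(
    Math.floor((percentile / 100) * sorted.length),
    sorted.length - 1
  );
  const threshold = sorted[idx].importance;
  return messages.filter(m => m.importance >= threshold);
}
```

In the actual interface, filtered-out messages are dimmed rather than discarded, so the same threshold could drive an opacity change instead of removal.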
+ +§ 4.4.6 USER MESSAGES VIEW + +Although we have presented the messages view as built for channels, it is also possible to invert the relationship so that the view displays a user's messages rather than a channel's messages (Figure 12). Oftentimes when searching for information, a person is more important than content [8]. By allowing exploration of the messages sent by a user, we enable exploration of the user themselves (D2, D4). + +The user messages view can be entered by clicking on a username or profile picture from any message in Slacktivity, for example the message stream in the galaxy view or the messages pane in the messages view. + +This view differs from the messages view of a channel only in that the colours in the all message visualization reflect the channel a message was sent in rather than the user who sent it. Additionally, the filters support filtering by channel rather than by message sender. + +§ 4.5 EXAMPLE TASKS + +Slacktivity can be used in several ways; we present three sample walkthroughs to illustrate some useful tasks. + +§ 4.5.1 TASK 1: A NEW EMPLOYEE JOINS AI RESEARCH + +Often people transfer teams or join the company as new employees. New and transferring employees have many questions they need answered, such as understanding how their new team functions and discovering how their team interacts with the rest of the organization. The galaxy view can give the new employee a broad overview of the scale of the company they have joined, and how big a niche their team occupies. They can also easily compare how their coworkers use Slack by watching for activity from the prefixes their team channel belongs to and contrasting that with other divisions and prefixes. + +§ 4.5.2 TASK 2: EXPLORING A TOPIC ACROSS THE ORGANIZATION + +Another employee might be interested in exploring work BigCorp has done with a particular machine learning technique. 
The employee might know there is a research division working on artificial intelligence, so they begin at the galaxy view by searching for channels that begin with #research. Upon seeing an active channel, they can navigate to the messages view for that channel. The channel has thousands of messages and they do not have time to scroll through them all, so they move the importance slider to show only the top 5% of messages in the channel. From here they scroll the messages pane and read the full text of the remaining messages. Finally, they notice that about half of the important messages were sent by a single person. From this exploration the employee learned about a new channel they could join, found a knowledgeable person in the organization to ask further questions, and learned specific details from reading important chat messages. + +§ 4.5.3 TASK 3: KEEPING UP WITH THE ORGANIZATION + +Rather than actively exploring Slacktivity, a user can keep a window or tab open in the background to the galaxy view. The user can then return to the tab occasionally throughout their day and read only the trending channel. Through this, the user can discover whether there is an ongoing AMA or an important discussion they can follow or take part in. They might also see new channels they are interested in. This changes the paradigm of Slack from push communication to pull, allowing a user to keep up with the company while spending only as much time as they would like. + +§ 5 DISCUSSION + +In this paper we described a case study of Slack use at BigCorp. Our case study identified several interesting behaviours and pain points with Slack use at scale. To address these, we designed Slacktivity to augment Slack at BigCorp. However, there are still interesting implications and unanswered questions regarding both our case study and Slacktivity, such as generalisability and evaluation. 
+ +§ 5.1 INCREASED SLACK USE + +We hope that Slacktivity helps employees get value out of using Slack. However, this might have the indirect effect of increasing the amount of time employees spend on Slack. Although increased group chat use might yield many benefits for communicating in the workplace, encouraging its use could have drawbacks. In a study on instant messaging use, Cameron and Webster [29] reported that the use of instant messaging results in more interruptions throughout the day. It is unclear whether maximising the effectiveness of Slack would produce a net benefit when contrasted with increased use. + +§ 5.2 SLACKTIVITY'S EFFECT ON PRIVACY + +Slacktivity increases the number of ways an employee can view and discover a message, essentially making all messages a little less private. For example, the stream in the galaxy view has the added effect of showing employees how public their messages really are. Our case study revealed that some employees might not understand how private their messages are. Depending on the perspective, helping employees understand how public their messages are could be seen as either a positive or a negative. It is negative from an organizational perspective because users might be discouraged from using public channels; BigCorp explicitly wants employees to participate in public channels to encourage an environment of collaboration and openness. However, from an employee's perspective, having a realistic view of how private their messages are is important for maintaining trust in the organization. + +§ 5.3 SCALING UP + +BigCorp is a large organization with more than 10,000 employees. However, there exist even larger organizations with over 100,000 employees. It is unclear whether our system scales to such a large organization, or whether an organization that large would use Slack in a similar fashion. 
For example, if the ratio between channels (8,204) and users (12,084) remains constant, then one can expect a workspace with 100,000 employees to have 67,891 channels. It seems likely that so many channels would require another form of organization, perhaps by adding another level of hierarchy on top of divisions and prefixes. + +We suspect it is unlikely that the median channel gets larger as an organization gets larger. Project and team channels are probably the same size whether an organization has 1,000 employees or 100,000 employees. However, some channels are necessarily company-wide. These channels already suffer because of the huge population their messages address. Sometimes mistakes happen and somebody notifies the entire channel by erroneously using @channel. This problem might be exacerbated when even more employees are in a channel. + +§ 5.4 SCALING DOWN + +On the opposite end of the spectrum, it is interesting to consider whether our case study and Slacktivity also generalise to small businesses with 100 employees or even medium businesses with 1,000 employees. The messages view probably retains the same value because it was designed to work with small channels as well as with massive channels. Most of our findings regarding how Slack is used probably scale to medium-sized organizations, as managing that many channels will require strict naming schemes and other forms of organization. + +§ 5.5 GENERALISABILITY + +We have presented a case study of the use of Slack in a large organization. The question stands: do our results generalise to other organizations? This is a difficult question to answer because most organizations are unwilling to publicly disclose the detailed information described in our case study. However, we have some confidence that some aspects do generalise. 65 of the Fortune 100 companies use Slack, indicating that other companies do use Slack at a large scale. Other large organizations like IBM have channels "public by default" [30]. 
Not only do some of the more critical components of our case study generalise, but other parts do as well: issue-specific channels are used at companies like Intuit [31], and other companies host events using Slack [32], [33]. It stands to reason that many of the other problems and descriptions of Slack use at scale described in this paper generalise to other large organizations. + +A limitation of our work is that we do not evaluate Slacktivity with a user study. However, our example walkthroughs make an argument as to how Slacktivity might be used to solve some of the problems in our case study. Furthermore, this paper introduced novel methods of visualising Slack at scale and a detailed case study of how Slack is used at scale today. Future iterations of this work will be deployed at BigCorp, allowing us to study the long-term impacts a deployment of Slacktivity might have. + +§ 6 CONCLUSION + +In this paper we presented a case study of how Slack is used at BigCorp, a large organization with over 10,000 employees. Our case study highlighted interesting behaviour of Slack usage at a large corporation, and problems encountered by employees using Slack. Based on the case study we introduced a novel visualization technique, Slacktivity, designed to augment Slack in the workplace and enable its use at scale. 
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/3baVPWLmUiO/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/3baVPWLmUiO/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..43c41590774d925d11d3760268b3e892754db042
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/3baVPWLmUiO/Initial_manuscript_md/Initial_manuscript.md

# A stricter constraint produces outstanding matching: Learning reliable image matching with improved networks

Anonymous Authors

Author's Affiliation

## Abstract

Image matching is widely used in many applications, such as vision-based localization and 3D reconstruction. Compared with traditional local features (e.g., SIFT) and outlier elimination methods (e.g., RANSAC), learning-based image matching methods such as HardNet and OANet show promising performance under challenging environments and on large-scale benchmarks. However, existing learning-based methods suffer from noise in the training data, and existing loss functions, e.g., the hinge loss, do not work well in image matching networks. In this paper, we propose an end-to-end image matching method that requires less training data while achieving more accurate and robust performance. First, a novel data cleaning strategy is proposed to remove noise from the training dataset. Second, we strengthen the matching constraints by proposing a novel quadratic hinge triplet (QHT) loss function to improve the network. Finally, we apply a stricter OANet for sample judgement to produce more outstanding matching.
The proposed method achieves state-of-the-art performance on the large-scale and challenging Phototourism dataset, and ranked 1st in the CVPR 2020 Image Matching Challenge Workshop Track 1 under the reconstructed pose accuracy metric.

Keywords: Image matching, HardNet, OANet, SIFT, large scale, challenging environments, pose accuracy.

## 1 INTRODUCTION

Image matching is a foundational task for several high-level 3D computer vision tasks, such as 3D reconstruction, structure-from-motion, and camera pose estimation. Its goal is to recover the correspondence between pixels belonging to the same physical region in two images that share a common field of view [1]. Nowadays, with the widespread adoption of deep learning across computer vision and the need for long-term, large-scale tasks, the shortcomings of traditional keypoint-based image matching methods have gradually become apparent. The major difficulty is that local features are usually evaluated through descriptor accuracy on small datasets, which only reflects matching performance and is not suitable for integrating with and evaluating downstream tasks.

Deep learning based solutions for image matching promise to overcome the disadvantages of traditional keypoint-based methods: many works [2][3] have demonstrated their ability to integrate multi-stage tasks, and they can be optimized and evaluated with different metrics on large datasets. Although they avoid complex hand-crafted design and offer a convenient pipeline for further tasks, these methods still struggle under challenging conditions such as illumination variation, viewpoint changes, and repeated textures, which are common in outdoor datasets where scene scales and conditions are highly variable. This leads to poor accuracy and robustness.
![01963ea7-7708-7c21-9dbd-ad9768a998ef_0_926_475_717_500_0.jpg](images/01963ea7-7708-7c21-9dbd-ad9768a998ef_0_926_475_717_500_0.jpg)

Figure 1: Our matching method's correspondences under extreme illumination variation and viewpoint changes.

To solve this problem, further end-to-end solutions and modified descriptors have been proposed, with the premise that the descriptor or network can learn more reliable features across image pairs. A novel feature representation based on log-polar sampling [4] was proposed to achieve scale invariance. Other works [2-3] jointly learn feature detection and description to improve image matching performance.

Motivated by these observations, in this work we propose a three-stage pipeline, consisting of feature extraction, feature matching and outlier pre-filtering steps, to compute corresponding pixels across image pairs, as shown in Figure 1. For each step, we add constraints that drive the algorithm toward better matching results. Unlike previous methods, our proposed method only requires a lightweight model and does not need to train multiple branches separately.

We show that our method outperforms previous algorithms and achieves state-of-the-art performance on the Phototourism benchmark, which features large-scale environments and challenging conditions. We provide detailed insight into how an improved data processing strategy, the HardNet [8] loss function, and a modified OANet [9] combined with a guided matching algorithm [10] could help the

---

* email address

---

![01963ea7-7708-7c21-9dbd-ad9768a998ef_1_187_160_1435_333_0.jpg](images/01963ea7-7708-7c21-9dbd-ad9768a998ef_1_187_160_1435_333_0.jpg)

Figure 2: Image matching based pose estimation pipeline, with some popular traditional and learning-based methods. Our method is based on this pipeline with improvements to the chosen methods (indicated by a green box with a red tick).
pipeline to achieve accurate and robust matching results, and we evaluate the method on the pose estimation task.

The main contributions include:

- We construct a new patch dataset, similar to the UBC Phototour dataset [11], from the given Phototourism images and sparse model, and pre-train our model on it;

- We propose a novel quadratic hinge triplet (QHT) loss function for the feature descriptor network HardNet, and an improved OANet combined with a guided matching strategy to compute reliable feature matches;

- Through extensive experiments and an ablation study of each module, we show that our algorithm achieves state-of-the-art performance in reconstructed poses, ranking first on both the stereo and multi-view matching tracks of the public Phototourism Image Matching Challenge 2020 [7].

## 2 RELATED WORK

Image matching plays an important role in many high-level computer vision tasks [14-15]; its goal is to find corresponding pixels of the same physical region across image pairs under geometric constraints [1]. Feature extraction, feature matching and outlier pre-filtering are the most vital components of the image matching pipeline, and both traditional and deep learning based methods have been developed for them in recent years. Figure 2 shows the main parts of the image matching based pose estimation pipeline, including some common methods and the target algorithms we choose to improve.

Feature Extraction. End-to-end feature extraction and matching methods are classified into detect-then-describe (e.g., SuperPoint [16]), detect-and-describe (e.g., R2D2 [3], D2-Net [2]), and describe-then-detect (e.g., DELF [19]) strategies, according to the order in which detection and description are executed; each suits different needs.
However, they either perform poorly under challenging conditions and lack keypoint repeatability, or show low efficiency in matching and storage. Traditionally, detectors and descriptors are applied separately in the pipeline: SIFT [5] (and RootSIFT [26]) and SURF [21] are the most popular detectors, and various descriptors follow; LogPolar [4] shows better relative pose error than ContextDesc [24], while SOSNet [25] and HardNet [8] outperform GIFT [23] on the public validation set.

Outlier pre-filtering. A large number of incorrect matches remain among the putative correspondences, which introduces noise into the subsequent pose estimation. Traditional outlier removal methods include the ratio test [5], GMS [6], and guided matching [10], among others.

One deep learning based approach judges whether a match is an outlier via the pose relationship regressed by a convolutional network, but such networks are usually difficult to train to convergence. Another approach converts the pose into per-match correctness labels through the epipolar constraint, turning the regression problem into a classification problem of deciding whether each match is an inlier or an outlier; the standard binary cross-entropy loss can then be used to train the model.

However, the input matching pairs are unordered, so the network must be permutation-invariant (insensitive) to reorderings of the matches, which rules out plain convolutional or fully connected networks. CNe [12] draws on the idea of PointNet [13] and proposes a multi-layer weight-sharing perceptron to solve this permutation problem.
Each input matching pair is processed by the same perceptron, but this also means each pair is processed independently: information does not circulate, the predicted per-pair judgements cannot be integrated, and the network cannot learn useful information from the image pair as a whole. Context normalization [17] is an instance normalization method widely used in image-to-image translation [29] and GANs [27]; it normalizes the processing results of matching pairs to exchange and circulate information. However, this crude use of mean and variance ignores the complexity of global features and cannot capture the correlations between parts. OANet [9] proposed DiffPool and DiffUnpool layers to promote information flow and communication between internal neurons.

To estimate a more accurate pose for given image pairs, or a better 3D reconstruction, through the matching process, we consider improving the pipeline in the following respects:

- Obtain stable and accurate keypoints. Keypoints should be repeatable: a certain point in the first image should also appear stably in the second image taken under a different viewpoint or illumination change.

- Provide reliable and distinguishable descriptors. Under different environmental conditions, the same keypoint should have similar descriptors, so that the same keypoint in different images can be paired using nearest neighbor matching.

- Hold powerful outlier pre-filtering ability. Filtering out wrong matches to retain most of the correct matching pairs helps to obtain an accurate final pose estimate.

## 3 METHOD

In this paper, we aim to learn accurate and reliable image matching under large-scale, challenging conditions.
In contrast with previous works, which either use end-to-end solutions of insufficient accuracy at the feature extraction stage or traditional solvers throughout the image matching pipeline, we combine a traditional detector with an improved HardNet and OANet, pretrained on a constructed dataset. In particular, we propose a dataset construction method to optimize hyperparameters before training on the large-scale dataset, and a new loss function for the description stage based on the HardNet network; in the outlier pre-filtering stage, we combine guided matching with the proposed stricter OANet to learn more accurate matches.

![01963ea7-7708-7c21-9dbd-ad9768a998ef_2_176_160_1466_289_0.jpg](images/01963ea7-7708-7c21-9dbd-ad9768a998ef_2_176_160_1466_289_0.jpg)

Figure 3: The whole architecture with the proposed methods. We first use the SIFT feature detector, then apply HardNet to map each 32x32 patch input to a 128-dimensional descriptor; after nearest neighbor matching and outlier pre-filtering, we obtain the final matches, which are further used to compute the pose estimate.

The whole architecture of the proposed method is shown in Figure 3. First, a 128-dimensional descriptor is extracted by the basic HardNet from each input ${32} \times {32}$ patch. Then, after the matching stage, the proposed stricter OANet converts the correspondences into $N$ binary classifications (inlier / outlier), in a manner that is permutation-invariant and feasible with convolutional layers. Finally, the pre-filtered matches are used to compute pose estimates in the stereo and multi-view tasks.

### 3.1 Dataset Construction

#### 3.1.1 UBC Dataset Construction for Pretraining

To quickly train our lightweight method and search for hyperparameters to carry over to the pretrained model, we use the UBC Phototour dataset for pretraining; it consists of patch data and is therefore well suited to HardNet training.
#### 3.1.2 Phototourism Dataset Construction for Training

After pretraining on the constructed UBC Phototour dataset for fast parameter selection, we construct, for the training period, a clean dataset with low noise and little redundancy from the Phototourism training scenes. Removing redundant data greatly shortens the training period, while reducing label noise helps the gradient-descent optimization of the loss function and improves network performance.

To reduce label noise, we discard low-confidence data: the 25% of images with the fewest 3D points, as well as 3D points whose tracks are shorter than 5. To remove redundancy, we subsample the 3D points that have been tracked more than 5 times, keeping only 5 tracked observations per point. The sampling is repeated 10 times, the NCC value is calculated for each sampling (a lower NCC indicates lower similarity between the two images and thus a larger difference between the matches), and the result with the lowest NCC is retained. In addition, we apply data augmentation with random flips and random rotations in both the pretraining and training processes.

### 3.2 Feature Extraction with Improved HardNet

Feature extraction includes the keypoint detection and feature description processes. The SIFT detector [5] is first used to extract the position and scale of keypoints in the input image. We adopt the OpenCV implementation of SIFT with a low detection threshold to generate up to 8000 points with a fixed orientation ${}^{1}$. A single pixel cannot describe the physical information of a keypoint, so to describe the keypoint and its surroundings, a patch is cropped around the keypoint at its scale and resized to ${32} \times {32}$ as the input from which the HardNet network extracts the patch's descriptor.
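The patch-cropping step described above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the magnification factor `mag` relating the SIFT scale to the crop size and the nearest-neighbour resampling are illustrative choices, not the exact settings of the OpenCV-based pipeline.

```python
import numpy as np

def extract_patch(image, x, y, scale, mag=12, out_size=32):
    """Crop a square patch around a keypoint and resize it to out_size.

    image: 2-D grayscale array; (x, y): keypoint position; scale: SIFT
    detection scale. The crop side grows with the scale (mag is an
    assumed magnification factor), so the patch covers the keypoint's
    support region; resampling is nearest-neighbour for brevity.
    """
    half = max(1, int(round(scale * mag / 2)))
    # Clamp the crop window to the image borders.
    x0, x1 = max(0, int(x) - half), min(image.shape[1], int(x) + half)
    y0, y1 = max(0, int(y) - half), min(image.shape[0], int(y) + half)
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbour resample to out_size x out_size.
    rows = np.linspace(0, crop.shape[0] - 1, out_size).round().astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, out_size).round().astype(int)
    return crop[np.ix_(rows, cols)]
```

Each such ${32} \times {32}$ patch is then fed to the descriptor network.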
After the relatively simple 7-layer HardNet, each input ${32} \times {32}$ patch is described by a 128-dimensional feature vector. We retain the network structure and improve its loss function to train the network stably and efficiently on the constructed dataset.

HardNet optimizes a triplet loss with hard-sample mining so that the distance between descriptors of the same class is small while the distance between descriptors of different classes is large.

To further improve the effectiveness and stability of model learning, we apply a quadratic hinge triplet (QHT) loss built on the triplet loss and inspired by the first-order similarity loss in SOSNet [25]: we square the triplet term, while sharing the same "hardest-within-batch" negative mining strategy as HardNet. The description loss function ${\mathcal{L}}_{\text{des }}$ is expressed as:

$$
{\mathcal{L}}_{\text{des }} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\max {\left( 0,1 + {d}_{i}^{\text{pos }} - {d}_{i}^{\text{neg }}\right) }^{2} \tag{1}
$$

$$
{d}_{i}^{pos} = d\left( {{a}_{i},{pos}_{i}}\right) \tag{2}
$$

$$
{d}_{i}^{neg} = \mathop{\min }\limits_{{\forall j \neq i}}\left( {d\left( {{a}_{i},{neg}_{j}}\right) }\right) \tag{3}
$$

where ${d}_{i}^{\text{pos }}$ is the L2 distance between the anchor descriptor and its positive descriptor, and ${d}_{i}^{\text{neg }}$ is the minimum of all distances between the anchor descriptor and the negative descriptors. A positive sample is a patch from a different image corresponding to the same 3D point as the anchor in the real world; conversely, a negative sample is a patch obtained from a different 3D point.
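Equations (1)-(3) can be sketched in NumPy as follows. This is an illustrative sketch rather than the training code: in practice the loss operates on L2-normalised descriptors inside a PyTorch graph, and, as in HardNet, the negatives are the non-matching positives within the batch.

```python
import numpy as np

def qht_loss(anchors, positives, margin=1.0):
    """Quadratic hinge triplet (QHT) loss of Eqs. (1)-(3).

    anchors, positives: (N, D) descriptor arrays where row i of
    `positives` matches row i of `anchors`. Negatives are mined as the
    hardest (closest) non-matching positive within the batch.
    """
    # Pairwise L2 distances between anchors and positives, d(a_i, p_j).
    d = np.linalg.norm(anchors[:, None, :] - positives[None, :, :], axis=2)
    d_pos = np.diag(d)                      # d_i^pos, Eq. (2)
    # Hardest negative per anchor: min over j != i, Eq. (3).
    d_neg = (d + np.eye(len(d)) * 1e6).min(axis=1)
    # Squared hinge over the triplet margin, Eq. (1).
    return float(np.mean(np.maximum(0.0, margin + d_pos - d_neg) ** 2))
```

When every anchor is closest to its own positive by more than the margin, the loss is exactly zero; squaring the hinge makes hard triplets dominate the objective.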
The quadratic triplet loss weights the gradients with respect to the network parameters by the magnitude of the loss. Compared with the linear hinge in HardNet, the gradient of the QHT loss is more sensitive to the size of the margin violation: triplets with a small violation $1 + {d}_{i}^{\text{pos }} - {d}_{i}^{\text{neg }}$ contribute a small gradient, while hard triplets contribute a large one, so model learning is more effective and stable.

In addition, the improved model is more sensitive to noise in the data: wrong positive and negative sample labels would degrade model performance. This risk is avoided through the data denoising work proposed above.

---

${}^{1}$ https://github.com/vcg-uvic/image-matching-benchmark-baselines

---

### 3.3 Stricter OANet with Dynamic Guided Matching

OANet builds on CNe by using DiffPool and DiffUnpool layers to promote the information circulation and communication of internal neurons. The two networks' models can be abstracted as in Figure 4: $N$ matching pairs of dimension $D$ are processed by the perceptrons of the CNe network to obtain output results of the same dimension. The OANet network reduces the input to an $M \times D$ matrix through the DiffPool layer, then raises the dimension back to an $N \times D$ output through DiffUnpool, merging information in between. DiffPool maps the $N$ input matches to $M$ clusters by learning soft-assignment weights to aggregate information, and DiffUnpool then reorganizes the information back to $N$ dimensions. The network is insensitive to unordered permutations of the input matching pairs.

![01963ea7-7708-7c21-9dbd-ad9768a998ef_3_187_627_652_184_0.jpg](images/01963ea7-7708-7c21-9dbd-ad9768a998ef_3_187_627_652_184_0.jpg)

Figure 4: Abstracted models of the CNe and OANet networks.
We convert pose accuracy into matching accuracy for learning, so the accuracy of the match labels affects the learning ability of the model. We made the following improvements to OANet so that positive samples are judged more rigorously and the match labels are more accurate, which effectively filters outliers and improves matching and pose estimation performance. We tightened the threshold of the geometric error constraint from 1e-4 to 1e-6. In addition, we added a point-to-point constraint on top of OANet's point-to-line constraint: a point is judged as a negative sample when its projection distance is greater than 10 pixels.

Generally, too few matches may lead to inaccurate pose estimation. On top of OANet, for image pairs with fewer than 100 matches, we also propose dynamic guided matching (DGM) to increase the number of matches. Unlike traditional guided matching [10], a dynamic threshold is applied to the Sampson distance constraint according to the number of matches of an image pair; we argue that a smaller number of matches requires a greater threshold. The dynamic threshold ${th}$ is set by the formula:

$$
{th} = t{h}_{\text{init }} - \frac{n}{15} \tag{4}
$$

where $t{h}_{\text{init }}$ is 6 and $n$ is the number of original matches of the image pair. For image pairs with more than 100 matches, we directly apply DEGENSAC [22] to obtain the inliers for submission.

### 3.4 Implementation Details

We train the proposed model, integrating the improved SIFT, HardNet and OANet, on the Phototourism dataset [7], with HardNet pretrained on the constructed UBC patch dataset [11]. The whole model is implemented in PyTorch [18] on an NVIDIA Titan X GPU. All models were trained from scratch and no pre-trained models were used.
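The stricter labelling rule and the dynamic threshold of Eq. (4) from Sec. 3.3 can be sketched as follows; `label_correspondence` and its argument names are hypothetical, standing in for the tightened epipolar constraint (1e-6) and the added 10-pixel point-to-point constraint.

```python
def dynamic_threshold(n_matches, th_init=6.0):
    """Dynamic Sampson-distance threshold of Eq. (4): image pairs with
    fewer original matches get a looser threshold, so guided matching
    can recover more candidate correspondences."""
    return th_init - n_matches / 15.0

def label_correspondence(epipolar_error, reproj_dist_px):
    """Stricter positive-sample test (hypothetical helper): a match is
    labelled positive only if it passes both the tightened epipolar
    constraint (1e-6 instead of 1e-4) and the added point-to-point
    constraint (projection distance of at most 10 pixels)."""
    return epipolar_error < 1e-6 and reproj_dist_px <= 10.0
```

For example, a pair with 30 original matches would be filtered at a Sampson threshold of 6 - 30/15 = 4.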
For the stereo task, the co-visibility threshold is restricted to 0.1, while for the multi-view task, the minimum number of 3D points is set to 100 and no more than 25 images are evaluated at a time.

In the description stage, the input patch is cropped to ${32} \times {32}$ with random flips and random rotations. Optimization is done by an SGD solver [28] with a learning rate of 10 and a weight decay of 0.0001. The learning rate is linearly decayed to zero within 15 epochs.

In the outlier pre-filtering stage, for training, we employ a dynamic learning rate schedule in which the learning rate increases linearly from zero to its maximum during the warm-up period (the first 10,000 steps) and decays step by step after that. Typically, the maximum learning rate is $1\mathrm{e}{-5}$ and the decay rate is $1\mathrm{e}{-8}$. Besides, we train the fundamental-matrix estimation model with the geo_loss_margin parameter set to 0.1. The side information of the ratio test [5] with a threshold of 0.8 and a mutual check is also added to the model input. At inference time, the output of the model undergoes DEGENSAC to further eliminate unreliable matches. The DEGENSAC in dynamic guided matching shares the same configuration: 100,000 iterations, Sampson error type, an inlier threshold of 0.5, a confidence of 0.999999, and degeneracy check enabled.

## 4 EXPERIMENTS

In this section, we evaluate the proposed method on the public Phototourism benchmark using the extracted feature results; the final pose estimation results are computed online. We first evaluate the performance of several separate modules, then qualitatively and quantitatively analyze the effectiveness of our method.

### 4.1 Experimental Settings

#### 4.1.1 Datasets

The UBC Phototour dataset [11] contains corresponding patches from 3D reconstructions of Liberty, Notre Dame and Yosemite.
Each scene consists of several bitmap images at a resolution of ${1024} \times {1024}$ pixels; each image contains patches arranged in a ${16} \times {16}$ array, with each patch sampled as ${64} \times {64}$ grayscale. In addition, the matching information (the 3D point index of each patch) and the keypoint information (reference image index, orientation, scale, and position) are supplied separately in files.

The Phototourism dataset [7] was collected from multiple sensors at 26 popular tourism landmarks. The 3D reconstruction ground truth is obtained with Structure from Motion (SfM) using COLMAP, and includes poses, co-visibility estimates and depth maps. The dataset is divided into a training set (16 scenes), a validation set (3 scenes from the training set) and a test set (10 scenes). As a large-scale benchmark dataset, its scenes include various challenging conditions, e.g., occlusions, changing viewpoints, different capture times, and repeated textures.

#### 4.1.2 Tasks and Evaluation metrics

We implement and compare our method with different approaches on two downstream tasks: stereo, and multi-view reconstruction with SfM; both evaluate intermediate metrics for further comparison. The two tasks evaluate different subsets randomly subsampled from the Phototourism dataset: the stereo task evaluates image pairs, while the multi-view task evaluates $5 \sim {25}$ images with SfM reconstruction. The stereo task uses random sample consensus (RANSAC) [20] to estimate the matches between two images that satisfy motion consistency, thereby decomposing the pose into rotation $R$ and translation $t$. The multi-view task recovers the pose $R$ and $t$ of each image through 3D reconstruction. By calculating the cosine distance between the estimated and ground-truth pose vectors, the performance of image matching is measured by the size of the computed angle.
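The angular pose error described above can be computed as in the following sketch: the rotation error is the angle of the relative rotation between the estimated and ground-truth rotations, and the translation error is the angle between the translation directions (two-view translation is only defined up to scale).

```python
import numpy as np

def rotation_angle_deg(R_est, R_gt):
    """Rotation error: the angle (degrees) of the relative rotation
    R_est @ R_gt.T, using trace(R) = 1 + 2*cos(theta)."""
    cos = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def translation_angle_deg(t_est, t_gt):
    """Translation error: the angle (degrees) between the two
    translation directions, since the scale is not recoverable."""
    cos = np.dot(t_est, t_gt) / (np.linalg.norm(t_est) * np.linalg.norm(t_gt))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

The clipping guards against values marginally outside [-1, 1] due to floating-point rounding before `arccos`.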
Given an image pair with co-visibility constraints in the stereo task, or the 3D reconstruction from multi-view images, we compute the following metrics in the different experiments; the final results are compared with the mAA metric.

- Mean average accuracy (%mAA) is computed by integrating the area under the accuracy curve up to a maximum threshold on the angular difference between the ground-truth and estimated pose vectors.

- Keypoint repeatability (%Rep.) measures the ratio of possible matches to the minimum number of keypoints in the co-visible view.

- Descriptor matching score (%M.S.) is the average ratio of correct matches to the minimum number of keypoints in the co-visible view.

- Mean absolute trajectory error (%mATE) measures the average deviation from the ground-truth trajectory per frame.

- False positive rate (%FPR) is the ratio between the number of negative events wrongly categorized as positive and the total number of actual negative events.

### 4.2 Qualitative and Quantitative Results

The dataset construction statistics for the Phototourism dataset are given in Table 1; the proposed dataset construction method greatly reduces the dataset size, which speeds up the training process and helps improve model performance. The training loss and FPR95 metric trends on the UBC Phototour dataset are shown in Figure 5 and Figure 6, which indicate the stability of the loss and characterize the data. A comparison of matching visualization performance on the stereo task is shown in Figure 7, where our proposed method outperforms the traditional RootSIFT based method with more accurate matches. Figure 8 shows visualizations on the Phototourism dataset for the stereo and multi-view tasks.
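The mAA metric listed above can be sketched as a discrete integration of the accuracy curve; the 1° binning up to the 10° threshold is an assumption for illustration and may differ from the benchmark's exact implementation.

```python
import numpy as np

def mean_average_accuracy(pose_errors_deg, max_threshold=10.0, step=1.0):
    """mAA: average, over angular thresholds step, 2*step, ...,
    max_threshold, of the fraction of pose errors below each threshold
    (a discrete approximation of the area under the accuracy curve)."""
    errs = np.asarray(pose_errors_deg, dtype=float)
    thresholds = np.arange(step, max_threshold + step, step)
    accs = [(errs < t).mean() for t in thresholds]
    return float(np.mean(accs))
```

A pose error well under every threshold contributes 1 to every bin, so a method whose errors are all below 1° scores a perfect mAA of 1.0.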
To validate the performance of the loss on different scenes, Table 2 lists the correspondence evaluation on the UBC Phototour dataset, which indicates the effects and improvements of our proposed QHT loss compared to the HT (hinge triplet) loss. Table 3 reports our final submitted results on Phototourism Challenge Track 1, where we rank 1st on both tasks.

Table 1: Dataset construction information for two selected scenes and the average over the Phototourism dataset
| Scene | Type | Images | 3D Points | Patches |
| --- | --- | --- | --- | --- |
| Buckingham palace | original | 1676 | 2460354 | 352977 |
| Buckingham palace | sampled | 1257 | 728143 | 64070 |
| Grand_place Brussels | original | 1080 | 2095503 | 206171 |
| Grand_place Brussels | sampled | 810 | 781083 | 90540 |
| All scenes (average) | original | 1183.9 | 165124.8 | 3567694.5 |
| All scenes (average) | sampled | 887.6 | 63433.6 | 317168 |
Table 2: Patch correspondence evaluation performance with the FPR95 metric. We compare different loss functions on the UBC Phototour dataset with HardNet as the description network (* means the network is trained by our implementation). The best results are in bold.
| Train (Test) | Liberty (NotreDame) | Liberty (Yosemite) | NotreDame (Liberty) | NotreDame (Yosemite) |
| --- | --- | --- | --- | --- |
| HardNet [8] | 0.62 | 2.14 | 1.47 | 2.67 |
| HardNet-HT [8] | 0.53 | 1.96 | 1.49 | 1.84 |
| HardNet-HT* | 0.50 | 1.96 | 1.48 | 1.61 |
| HardNet-QHT* | **0.45** | **1.83** | **1.228** | **1.52** |
![01963ea7-7708-7c21-9dbd-ad9768a998ef_4_927_152_701_417_0.jpg](images/01963ea7-7708-7c21-9dbd-ad9768a998ef_4_927_152_701_417_0.jpg)

Figure 5: The training loss in different scenes of the UBC Phototour dataset; the differences between the losses are small, indicating that the improved loss is stable during model training.

![01963ea7-7708-7c21-9dbd-ad9768a998ef_4_924_718_684_407_0.jpg](images/01963ea7-7708-7c21-9dbd-ad9768a998ef_4_924_718_684_407_0.jpg)

Figure 6: FPR95 metric trend, using different scene combinations of the UBC Phototour dataset as training and validation sets. The results show that, on the same validation set, FPR95 is higher when training on the Yosemite scene, indicating that it contains false or difficult matches.

![01963ea7-7708-7c21-9dbd-ad9768a998ef_4_999_1308_556_606_0.jpg](images/01963ea7-7708-7c21-9dbd-ad9768a998ef_4_999_1308_556_606_0.jpg)

Figure 7: Visualization of correspondences on the stereo task for the traditional RootSIFT based method and our proposed method (green represents a correct match, yellow a match error within 5 pixels, red a wrong match).

Table 3: Phototourism challenge. Mean average accuracy in pose estimation with an error threshold of ${10}^{ \circ }$. The results are submitted for online evaluation. (We keep only the highest-ranked result among the submissions from each team.)
| Method | Stereo mAA@10° | Stereo rank | Multi-view mAA@10° | Multi-view rank | Avg. rank |
| --- | --- | --- | --- | --- | --- |
| Ours | 0.61081 | 1 | 0.78288 | 2 | 1 |
| Luca et al. | 0.58300 | 12 | 0.77056 | 3 | 2 |
| Jiahui et al. | 0.57826 | 17 | 0.77056 | 5 | 3 |
| Ximin et al. | 0.5887 | 5 | 0.75127 | 14 | 4 |
### 4.3 Ablation Study

To fully understand each module in our proposed method, we evaluate the different modules on the Phototourism validation set. In Table 4, our method is compared with several versions to see how data construction and the improved HardNet help improve the descriptor and thereby the performance on both the multi-view and stereo tasks. We evaluate four variants of the feature descriptor while keeping the other parts the same: 1) RootSIFT; 2) the original pretrained HardNet; 3) our proposed improved HardNet; 4) our proposed improved HardNet with the data construction technique. The comparison of the different feature descriptors indicates that model performance increases markedly by adding the loss constraints on HardNet, with an $8\%$ improvement in the average mAA over both tasks; with data construction, the improved HardNet yields the best performance, obtaining an average mAA of 0.7894.

Table 4: Ablation study of the proposed HardNet and data construction method on the Phototourism validation set
| Method | mAA@10° (Stereo) | mAA@10° (Multi-view) | mAA@10° (Avg.) |
| --- | --- | --- | --- |
| RootSIFT | 0.6698 | 0.7258 | 0.6978 |
| Pretrained HardNet | 0.7317 | 0.7924 | 0.7621 |
| Improved HardNet | 0.7285 | 0.8159 | 0.7722 |
| Improved HardNet+CleanData | 0.7537 | 0.8252 | 0.7894 |
To evaluate whether the proposed modified OANet helps improve the performance of outlier pre-filtering, Table 5 lists the performance comparison of four different outlier pre-filtering configurations: 1) RootSIFT as the feature descriptor with the ratio test and cross check as the outlier filtering; 2) the pretrained HardNet with the ratio test and cross check; 3) our proposed improved HardNet with the ratio test and cross check; 4) our proposed improved HardNet with the modified OANet. The comparison of the different outlier removals indicates that the modified OANet outperforms the ratio test with cross check by $4\%$ in average mAA.

Table 5: Ablation study of the proposed OANet method on the Phototourism validation set.
| Method | mAA@10° (Stereo) | mAA@10° (Multi-view) | mAA@10° (Avg.) |
| --- | --- | --- | --- |
| RootSIFT+RT+CC | 0.6698 | 0.7258 | 0.6978 |
| Pretrained HardNet+RT+CC | 0.7317 | 0.7924 | 0.7621 |
| Improved HardNet+RT+CC | 0.7537 | 0.8252 | 0.7894 |
| Improved HardNet+OANet | 0.7918 | 0.8658 | 0.8288 |
## 5 CONCLUSION

This paper presents a novel image matching approach that integrates a data construction method to remove noise and enable more efficient training, an improved HardNet with a QHT loss function that makes gradient descent more sensitive to hard samples, and a stricter OANet combined with a guided matching algorithm to filter outliers more precisely. With the QHT loss, which strengthens the distance constraints, and the improved OANet, which judges positive samples more strictly, these stricter constraints produce more outstanding matching. The proposed method is crucial for obtaining high-quality correspondences on a large-scale challenging dataset with illumination variations, viewpoint changes and repeated textures. Our experiments show that the method achieves state-of-the-art performance on the public Phototourism Image Matching Challenge.

![01963ea7-7708-7c21-9dbd-ad9768a998ef_5_926_502_730_655_0.jpg](images/01963ea7-7708-7c21-9dbd-ad9768a998ef_5_926_502_730_655_0.jpg)

Figure 8: Visualization of correspondences on the stereo task and reconstructed points in the multi-view task (green represents a correct match, yellow a match error within 5 pixels, red a wrong match, blue the keypoints with correct 3D points).

## REFERENCES

[1] X. L. Dai, J. Lu. An object-based approach to automated image matching[C]//IEEE 1999 International Geoscience and Remote Sensing Symposium (IGARSS'99). IEEE, 1999, 2: 1189-1191.

[2] M. Dusmanu, I. Rocco, T. Pajdla, et al. D2-Net: A trainable CNN for joint description and detection of local features[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 8092-8101.

[3] J. Revaud, P. Weinzaepfel, C. De Souza, et al. R2D2: repeatable and reliable detector and descriptor[J]. arXiv preprint arXiv:1906.06195, 2019.

[4] P. Ebel, A. Mishchuk, K. M. Yi, et al.
Beyond cartesian representations for local descriptors[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 253-262. + +[5] D. G. Lowe. Distinctive image features from scale-invariant keypoints[J]. International journal of computer vision, 2004, 60(2): 91-110. + +[6] J. W. Bian, W. Y. Lin, Y. Matsushita, et al. GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 4181-4190. + +[7] Phototourism Challenge, CVPR 2020 Image Matching Workshop. https://www.cs.ubc.ca/research/image-matching-challenge/. + +[8] A. Mishchuk, D. Mishkin, F. Radenovic, et al. Working hard to know your neighbor's margins: Local descriptor learning loss[J]. arXiv preprint arXiv:1705.10872, 2017. + +[9] J. Zhang, D. Sun, Z. Luo, et al. Learning two-view correspondences and geometry using order-aware network[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 5845-5854. + +[10] A. M. Andrew. Multiple view geometry in computer vision[J]. Kybernetes, 2001. + +[11] M. Brown, D. G. Lowe. Automatic panoramic image stitching using invariant features[J]. International journal of computer vision, 2007, 74(1): 59-73. + +[12] K. M. Yi, E. Trulls, Y. Ono, et al. Learning to find good correspondences[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 2666-2674. + +[13] C. R. Qi, H. Su, K. Mo, et al. PointNet: Deep learning on point sets for 3D classification and segmentation[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 652-660. + +[14] M. H. Mughal, M. J. Khokhar, M. Shahzad. Assisting UAV Localization Via Deep Contextual Image Matching[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 2445-2457. + +[15] T. Song, L. B. Cao, M. F. Zhao, et al. 
Image tracking and matching algorithm of semi-dense optical flow method[J]. International Journal of Wireless and Mobile Computing, 2021, 20(1): 93-98. + +[16] D. DeTone, T. Malisiewicz, A. Rabinovich. SuperPoint: Self-supervised interest point detection and description[C]//Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2018: 224-236. + +[17] A. Ortiz, C. Robinson, D. Morris, et al. Local context normalization: Revisiting local normalization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 11276-11285. + +[18] A. Paszke, S. Gross, S. Chintala, et al. Automatic differentiation in PyTorch[J]. 2017. + +[19] H. Noh, A. Araujo, J. Sim, et al. Large-scale image retrieval with attentive deep local features[C]//Proceedings of the IEEE international conference on computer vision. 2017: 3456-3465. + +[20] M. A. Fischler, R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6): 381-395. + +[21] H. Bay, T. Tuytelaars, L. Van Gool. SURF: Speeded up robust features[C]//European conference on computer vision. Springer, Berlin, Heidelberg, 2006: 404-417. + +[22] O. Chum, T. Werner, J. Matas. Two-view geometry estimation unaffected by a dominant plane[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). IEEE, 2005, 1: 772-779. + +[23] Y. Liu, Z. Shen, Z. Lin, et al. GIFT: Learning transformation-invariant dense visual descriptors via group CNNs[J]. arXiv preprint arXiv:1911.05932, 2019. + +[24] Z. Luo, T. Shen, L. Zhou, et al. ContextDesc: Local descriptor augmentation with cross-modality context[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 2527-2536. + +[25] Y. Tian, X. Yu, B. Fan, et al. 
SOSNet: Second order similarity regularization for local descriptor learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 11016-11025. + +[26] R. Arandjelović, A. Zisserman. Three things everyone should know to improve object retrieval[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012: 2911-2918. + +[27] Y. Wang, Y. C. Chen, X. Zhang, et al. Attentive normalization for conditional image generation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 5094-5103. + +[28] L. Bottou. Large-scale machine learning with stochastic gradient descent[M]//Proceedings of COMPSTAT'2010. Physica-Verlag HD, 2010: 177-186. + +[29] X. Huang, S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 1501-1510. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/3baVPWLmUiO/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/3baVPWLmUiO/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..8bd8ee0d2cfa916b15f237bfefaf6683666ee6e9 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/3baVPWLmUiO/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,319 @@ +§ A STRICTER CONSTRAINT PRODUCES OUTSTANDING MATCHING: LEARNING RELIABLE IMAGE MATCHING WITH IMPROVED NETWORKS + +Anonymous Authors + +Author's Affiliation + +§ ABSTRACT + +Image matching is widely used in many applications, such as visual-based localization and 3D 
reconstruction. Compared with traditional local features (e.g., SIFT) and outlier elimination methods (e.g., RANSAC), learning-based image matching methods such as HardNet and OANet show promising performance under challenging environments and on large-scale benchmarks. However, existing learning-based methods suffer from noise in the training data, and existing loss functions, e.g., the hinge loss, do not work well in image matching networks. In this paper, we propose an end-to-end image matching method that requires less training data while achieving more accurate and robust performance. First, a novel data cleaning strategy is proposed to remove noise from the training dataset. Second, we strengthen the matching constraints by proposing a novel quadratic hinge triplet (QHT) loss function to improve the network. Finally, we apply a stricter OANet for sample judgement to produce more outstanding matching. The proposed method shows state-of-the-art performance on the large-scale and challenging Phototourism dataset, and took 1st place in the CVPR 2020 Image Matching Challenge Workshop Track 1 under the metric of reconstructed pose accuracy. + +Keywords: Image matching, HardNet, OANet, SIFT, large scale, challenging environments, pose accuracy. + +§ 1 INTRODUCTION + +Image matching is a foundational task for several high-level 3D computer vision tasks, such as 3D reconstruction, structure-from-motion, and camera pose estimation. Its goal is to find the corresponding pixels of the same physical region in two images that share a common visible area [1]. 
Nowadays, with the widespread adoption of deep learning across computer vision and the need for long-term, large-scale tasks, the shortcomings of traditional keypoint-based image matching have become apparent. The major difficulty is that local features are usually evaluated through descriptor accuracy on small datasets, which only illustrates matching performance and is not suitable for integrating and evaluating downstream tasks. + +Deep learning based solutions for image matching promise to overcome the disadvantages of traditional keypoint-based methods: many works [2][3] have demonstrated the ability to integrate multi-stage tasks and to optimize and evaluate different metrics on large datasets. Although they avoid complex hand-crafted design and offer a convenient pipeline for further tasks, these methods still struggle under challenging conditions, such as illumination variation, viewpoint changes and repeated textures, which are common in outdoor datasets where scene scales and conditions are highly variable. This leads to poor performance in both accuracy and robustness. + +Figure 1: Our matching method's correspondences under extreme illumination variation and viewpoint changes. + +To solve this problem, further end-to-end solutions and modified descriptors have been proposed, on the premise that the descriptor or network can learn more reliable features across image pairs. A novel log-polar feature sampling method [4] has been proposed to achieve scale invariance. Other works [2-3] jointly learn feature detection and description to improve image matching performance. 
+ +Inspired by the above, in this work we propose a three-stage pipeline, consisting of feature extraction, feature matching and outlier pre-filtering steps, to compute corresponding pixels from image pairs, as shown in Figure 2. For each step, we add constraints that push the algorithm toward better matching results. Unlike previous methods, our approach only requires a light-weight model and does not need to train multiple branches separately. + +We show that our method outperforms previous algorithms and achieves state-of-the-art performance on the Phototourism benchmark, with its large-scale environments and challenging conditions. We provide detailed insight into how the improved data processing strategy, the HardNet [8] loss function, and the modified OANet [9] combined with a guided match algorithm [10] help the pipeline achieve accurate and robust matching results, and we evaluate the method on the pose estimation task. + +Figure 2: Image matching based pose estimation pipeline, with some popular traditional and learning-based methods (detectors such as SIFT and SURF; descriptors such as HardNet, SOSNet and GIFT; outlier pre-filtering such as ratio-test, GMS, guided match, CNe and OANet; end-to-end methods such as SuperPoint, D2-Net and R2D2). Our method is based on this pipeline with improvements on the chosen methods (indicated by a green box with a red tick). 
+ +The main contributions include: + + * We construct a new patch dataset based on the given Phototourism images and sparse models, similar to the UBC Phototour dataset [11], and pre-train our model on that dataset; + + * We propose a novel quadratic hinge triplet (QHT) loss function for the HardNet feature descriptor network, and an improved OANet combined with a guided matching strategy to compute reliable feature matches; + + * Through extensive experiments and an ablation study of each module, we show that our algorithm achieves state-of-the-art performance in reconstructed poses, ranking first on both the stereo and multi-view tracks of the public Phototourism Image Matching challenge 2020 [7]. + +§ 2 RELATED WORK + +Image matching plays an important role in many high-level computer vision tasks [14-15]; its goal is to find, under geometric constraints, the corresponding pixels of the same physical region imaged in a pair of images [1]. Feature extraction, feature matching and outlier pre-filtering are the most vital components of the image matching pipeline, and both traditional and deep learning based methods for them have been developed in recent years. Figure 2 shows the main parts of the image matching based pose estimation pipeline, including some common methods and the algorithms we chose to further improve. + +Feature Extraction. End-to-end feature extraction and matching methods are classified into detect-then-describe (e.g. SuperPoint [16]), detect-and-describe (e.g. R2D2 [3], D2-Net [2]), and describe-then-detect (e.g. DELF [19]) strategies according to the order in which detection and description are executed, each suited to different needs. However, they either perform poorly under challenging conditions and lack repeatability in their keypoints, or show low efficiency in matching and storage. 
Traditionally, detectors and descriptors are applied separately in the pipeline. SIFT [5] (and RootSIFT [26]) and SURF [21] are the most popular detectors, followed by various descriptors: LogPolar [4] shows better relative pose error than ContextDesc [24], while SOSNet [25] and HardNet [8] outperform GIFT [23] on the public validation set. + +Outlier pre-filtering. A large number of incorrect matches among the matching pairs will bring noise into the subsequent pose estimation. Traditional outlier removal methods include the ratio-test [5], GMS [6], and guided-match [10] methods, among others. + +One deep learning based approach judges whether a match is an outlier through the pose relationship regressed by a convolutional network, but such network training is usually difficult to converge. Another way is to convert the pose into a per-match correctness label through the epipolar constraint, turning the regression problem into a classification problem that determines whether each match is an inlier or an outlier; the standard binary cross-entropy loss can then be used to learn the model. + +However, the input matching pairs are unordered, so the network needs to be permutation-invariant (insensitive to reorderings of the matches), which rules out plain convolutional or fully connected networks. CNe [12] draws on the idea of PointNet [13] and proposes a multi-layer weight-sharing perceptron to handle this permutation problem. Each input matching pair is processed by the same perceptron, but this also means each pair is processed independently: there is no information flow, the predicted per-match judgments cannot be integrated, and the network cannot learn from the image pair as a whole. Context normalization [17] is an instance normalization method widely used in image style transfer [29] and GANs [27]. 
It normalizes the intermediate features of the matching pairs so that information can be exchanged and circulated. But this crude use of the mean and variance ignores the complexity of global features and cannot capture correlations between parts. OANet [9] proposed DiffPool and DiffUnpool layers to promote information flow and communication between internal neurons. + +To estimate a more accurate pose for given image pairs, or a better 3D reconstruction, through the matching process, we consider improving the pipeline in the following respects: + + * Obtain stable and accurate keypoints. Keypoints should be detected stably and invariantly; e.g., a point detected in the first image should also be detected in the second image taken under a different viewpoint or illumination. + + * Provide reliable and distinguishable descriptors. Under different environmental conditions, the same keypoint should have a similar descriptor, so that the same keypoint in different images can be paired using nearest neighbor matching. + + * Provide powerful outlier pre-filtering. Filtering out wrong matches to retain most of the correct matching pairs helps obtain an accurate final pose estimate. + +§ 3 METHOD + +In this paper, we aim to learn accurate and reliable image matching under large-scale, challenging conditions. In contrast with previous works, which either use end-to-end solutions of insufficient accuracy at the feature extraction stage or rely on traditional solvers throughout the image matching pipeline, we combine a traditional detector with an improved HardNet and OANet, pretrained on a constructed dataset. In particular, we propose a dataset construction method to optimize hyperparameters before training on the large-scale dataset, we propose a new loss function for the HardNet-based description stage, and in the outlier pre-filtering stage we combine guided matching with the proposed stricter OANet to learn more accurate matches. 
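As background for the matching stage, nearest-neighbor matching with Lowe's ratio test [5] (the same test used with a threshold of 0.8 in Section 3.4) can be sketched in plain Python; this is a minimal stdlib-only illustration, not the pipeline's actual implementation:

```python
import math

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test: keep a match
    only when the best candidate is clearly closer than the second best.
    Assumes desc2 contains at least two descriptors."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    matches = []
    for i, d1 in enumerate(desc1):
        # Rank candidate descriptors in the second image by distance.
        ranked = sorted(range(len(desc2)), key=lambda j: dist(d1, desc2[j]))
        best, second = ranked[0], ranked[1]
        if dist(d1, desc2[best]) < ratio * dist(d1, desc2[second]):
            matches.append((i, best))
    return matches
```

An unambiguous nearest neighbor passes the test, while two near-equidistant candidates are rejected, which is exactly the side information fed to the pre-filtering network later.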
+ +Figure 3: The whole architecture of the proposed method: SIFT detection (position and scale), crop and resize to a ${32} \times {32}$ patch, the improved HardNet producing a 128-dimensional descriptor, nearest neighbor matching, and outlier pre-filtering with the improved OANet. The final matches are further used to compute the pose estimation. + +The whole architecture of the proposed method is demonstrated in Figure 3. First, 128-dimensional features are extracted by a basic HardNet from the input ${32} \times {32}$ patches. Then, after the matching stage, the proposed stricter OANet casts the correspondences as N binary classifications (inlier / outlier), which is permutation-invariant and feasible with convolutional layers. Finally, the pre-filtered matches are used to compute pose estimates in the stereo and multi-view tasks. + +§ 3.1 DATASET CONSTRUCTION + +§ 3.1.1 UBC DATASET CONSTRUCTION FOR PRETRAINING + +In order to quickly train our lightweight method and search for hyperparameters to carry over to the pretrained model, we use the UBC Phototour dataset for pretraining; it consists of patch data and is well suited to HardNet training. + +§ 3.1.2 PHOTOTOURISM DATASET CONSTRUCTION FOR TRAINING + +After pretraining on the constructed UBC Phototour dataset for fast parameter selection, we construct a clean dataset with low noise and little redundancy from the Phototourism training scenes for the training period. Removing redundant data greatly shortens training, while reducing label noise helps the gradient descent optimization of the loss function and improves network performance. 
+ +To reduce label noise in the dataset, we discard low-confidence data: the 25% of images with the fewest 3D points, and the 3D points whose tracks are shorter than 5. To remove redundant data, we subsample the 3D points that have been tracked more than 5 times, keeping only 5 observations per 3D point. The sampling is repeated 10 times, and the NCC value is calculated for each sampling (a lower NCC indicates lower similarity between the two images and thus a larger matching difference); the result with the lowest NCC is retained. In addition, we apply data augmentation with random flips and random rotations in both the pretraining and training processes. + +§ 3.2 FEATURE EXTRACTION WITH IMPROVED HARDNET + +Feature extraction includes the keypoint detection and feature description processes. The SIFT detector [5] is first used to extract the position and scale of each keypoint in the input image. We adopt the OpenCV implementation of SIFT with a low detection threshold to generate up to 8000 points with a fixed orientation ${}^{1}$ . A single pixel cannot describe the physical information of a keypoint, so to capture the keypoint and its surroundings, a region of the keypoint's scale, called a patch, is cropped and resized to ${32} \times {32}$ as the input from which the HardNet network extracts the patch's descriptor. + +After the relatively simple 7-layer HardNet, each input ${32} \times {32}$ patch is described by a 128-dimensional feature vector. We retain the network structure, and improve its loss function so that the network trains stably and efficiently on the constructed dataset. 
+ +HardNet is optimized with a triplet loss embedding hard-sample mining, so that the distance between descriptors within a class is small and the distance between descriptors of different classes is large. + +To further improve the effectiveness and stability of model learning, we apply a quadratic hinge triplet (QHT) loss based on the triplet loss, inspired by the first-order similarity loss in SOSNet [25]: the hinge term of the triplet loss is squared, while the same mining strategy as in HardNet is used to find the "hardest-within-batch" negatives. The description loss function ${\mathcal{L}}_{\text{ des }}$ is expressed as: + +$$ +{\mathcal{L}}_{\text{ des }} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\max {\left( 0,1 + {d}_{i}^{\text{ pos }} - {d}_{i}^{\text{ neg }}\right) }^{2} \tag{1} +$$ + +$$ +{d}_{i}^{pos} = d\left( {{a}_{i},{pos}_{i}}\right) \tag{2} +$$ + +$$ +{d}_{i}^{neg} = \mathop{\min }\limits_{{\forall j \neq i}}\left( {d\left( {{a}_{i},{neg}_{j}}\right) }\right) \tag{3} +$$ + +where ${d}_{i}^{\text{ pos }}$ is the L2 distance between the anchor descriptor and the positive descriptor, and ${d}_{i}^{\text{ neg }}$ is the minimum over all distances between the anchor descriptor and the negative descriptors. A positive sample is a patch from a different image corresponding to the same 3D point as the anchor; conversely, a negative sample is a patch obtained from a different 3D point. + +The quadratic triplet loss weights the gradients with respect to the network parameters by the magnitude of the loss. Compared with HardNet's hinge loss, the gradient of the QHT loss is more sensitive to changes in the loss: a larger margin violation ${d}_{i}^{\text{ pos }} - {d}_{i}^{\text{ neg }}$ yields a larger gradient, so hard triplets are emphasized and model learning is more effective and stable. + +In addition, the improved model is more sensitive to noise in the data. 
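For concreteness, Eqs. (1)-(3) can be sketched in plain Python; this is a minimal illustration of the batch-hard mining and the squared hinge only (the actual training operates on PyTorch tensors, and HardNet's mining also scans swapped anchor/positive roles, which is omitted here):

```python
import math

def l2(u, v):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def qht_loss(anchors, positives):
    """Quadratic hinge triplet (QHT) loss of Eq. (1).

    anchors[i] and positives[i] are descriptors of patches of the same
    3D point; every other positive acts as a negative for index i.
    """
    n = len(anchors)
    total = 0.0
    for i in range(n):
        d_pos = l2(anchors[i], positives[i])              # Eq. (2)
        d_neg = min(l2(anchors[i], positives[j])          # Eq. (3): hardest
                    for j in range(n) if j != i)          # in-batch negative
        total += max(0.0, 1.0 + d_pos - d_neg) ** 2       # squared hinge
    return total / n
```

Because the hinge is squared, a triplet that violates the margin by twice as much contributes four times the loss, which is the gradient-weighting effect described above.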
Wrong positive and negative sample labels would degrade the model's performance; this risk is avoided by the data denoising described above. + +${}^{1}$ https://github.com/vcg-uvic/image-matching-benchmark-baselines + +§ 3.3 STRICTER OANET WITH DYNAMIC GUIDED MATCHING + +OANet adds DiffPool and DiffUnpool layers on top of CNe to promote the information circulation and communication of internal neurons. The two networks' models can be abstracted as in Figure 4: N matching pairs of dimension D are processed by the shared perceptron of the CNe network to obtain output of the same dimensions. The OANet network reduces the input to an M × D matrix through the DiffPool layer, then restores it to an N × D output through DiffUnpool, merging information in between. DiffPool maps the N input matches to M clusters by learning soft assignment weights that aggregate information, and DiffUnpool reorganizes the information back to N dimensions. The network is insensitive to unordered perturbations of the input matching pairs. + +Figure 4: Abstracted models of CNe and OANet: CNe maps the $N \times D$ input through shared perceptrons, while OANet pools it to $M \times D$ via DiffPool and restores it to $N \times D$ via DiffUnpool. + +We convert pose accuracy into matching accuracy for learning, so the accuracy of the matches affects the learning ability of the model. We made the following improvements to OANet to make the judgment of positive samples more rigorous and the matching labels more accurate, which effectively filters outliers and improves matching and pose estimation performance. 
We tightened the threshold of the geometric error constraint from 1e-4 to 1e-6. In addition, we added a point-to-point constraint on top of OANet's point-to-line constraint: a point is judged to be a negative sample when its projection distance is greater than 10 pixels. + +Generally, too few matches may lead to inaccurate pose estimation. Therefore, in addition to OANet, for image pairs with fewer than 100 matches we propose dynamic guided matching (DGM) to increase the number of matches. Unlike traditional guided matching [10], a dynamic threshold is applied to the Sampson distance constraint according to the number of matches of an image pair; we argue that a smaller number of matches requires a larger threshold. The dynamic threshold ${th}$ is set by the formula: + +$$ +{th} = t{h}_{\text{ init }} - \frac{n}{15} \tag{4} +$$ + +where $t{h}_{\text{ init }}$ is 6 and $n$ is the number of original matches of an image pair. For image pairs with more than 100 matches, we directly apply DEGENSAC [22] to obtain the inliers for submission. + +§ 3.4 IMPLEMENTATION DETAILS + +We train the proposed model, integrating the improved SIFT, HardNet and OANet, on the Phototourism dataset [7], with HardNet pretrained on the constructed UBC Patch dataset [11]. The whole model is implemented in PyTorch [18] on an NVIDIA Titan X GPU. All models were trained from scratch; no pre-trained models were used. For the stereo task, the co-visibility threshold is restricted to 0.1, while for the multi-view task the minimum number of 3D points is set to 100 and no more than 25 images are evaluated at a time. + +In the description stage, the input patch is cropped to ${32} \times {32}$ with random flips and random rotations. Optimization is done by an SGD solver [28] with a learning rate of 10 and weight decay of 0.0001. The learning rate was linearly decayed to zero within 15 epochs. 
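The dynamic guided matching rule of Section 3.3 can be sketched as follows. This is a hedged, stdlib-only illustration: `sampson_distance` is the standard first-order epipolar error, and the 100-match cutoff and the Eq. (4) constants are taken from the text, while the helper names are our own:

```python
def dynamic_threshold(n_matches, th_init=6.0):
    # Eq. (4): fewer initial matches -> looser Sampson threshold,
    # so guided matching can recover correspondences for weak pairs.
    return th_init - n_matches / 15.0

def sampson_distance(F, x1, x2):
    """First-order epipolar error of a correspondence (x1, x2),
    given as homogeneous 3-vectors, under the fundamental matrix F."""
    Fx1 = [sum(F[i][k] * x1[k] for k in range(3)) for i in range(3)]
    Ftx2 = [sum(F[k][i] * x2[k] for k in range(3)) for i in range(3)]
    num = sum(x2[i] * Fx1[i] for i in range(3)) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den

def guided_match_keep(F, candidates, n_matches):
    # DGM is only applied when an image pair has fewer than 100 matches;
    # otherwise the pipeline goes straight to DEGENSAC.
    if n_matches >= 100:
        return []
    th = dynamic_threshold(n_matches)
    return [(x1, x2) for x1, x2 in candidates
            if sampson_distance(F, x1, x2) < th]
```

With 90 initial matches the threshold already shrinks to zero, so in practice DGM mainly helps very weakly matched pairs.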
+ +In the outlier pre-filtering stage, for training we employ a dynamic learning rate schedule: the learning rate increases linearly from zero to its maximum during the warm-up period (the first 10,000 steps), and decays step-by-step after that. Typically, the maximum learning rate was 1e-5 and the decay rate 1e-8. Besides, we trained the fundamental matrix estimation model with the parameter geo_loss_margin set to 0.1. The side information of the ratio test [5] with a threshold of 0.8, plus a mutual check, was also added to the model input. At inference time, the output of the model undergoes DEGENSAC to further eliminate unreliable matches. The DEGENSAC in dynamic guided matching shares the same configuration: 100,000 iterations, Sampson error type, an inlier threshold of 0.5, a confidence of 0.999999, and degeneracy checking enabled. + +§ 4 EXPERIMENTS + +In this section, we evaluate the proposed method on the public Phototourism benchmark using the feature results; the final pose estimation results are computed online. We first evaluate several separate modules, then qualitatively and quantitatively analyze the effectiveness of our method. + +§ 4.1 EXPERIMENTAL SETTINGS + +§ 4.1.1 DATASETS + +The UBC Phototour dataset [11] contains corresponding patches from 3D reconstructions of Liberty, Notre Dame and Yosemite. Each tour consists of several bitmap images at a resolution of ${1024} \times {1024}$ pixels; each image contains patches in a ${16} \times {16}$ array, with each patch sampled as ${64} \times {64}$ grayscale. In addition, the matching information (the sampled 3D point index of each patch) and keypoint information (reference image index, orientation, scale, and position) are supplied separately in the file. + +The Phototourism dataset [7] was collected from multiple sensors at 26 popular tourism landmarks. 
The 3D reconstruction ground truth is obtained with Structure from Motion (SfM) using Colmap, which provides poses, co-visibility estimates and depth maps. The dataset is divided into a training set (16 scenes), a validation set (3 scenes of the training set) and a test set (10 scenes). As a large-scale benchmark, the dataset includes various challenging conditions, e.g., occlusions, changing viewpoints, different capture times, repeated textures, etc. + +§ 4.1.2 TASKS AND EVALUATION METRICS + +We implement and compare our method against different approaches on two downstream tasks, stereo and multi-view reconstruction with SfM, both of which also evaluate intermediate metrics for further comparison. The two tasks evaluate different subsets sampled at random from the Phototourism dataset: the stereo task evaluates image pairs, while the multi-view task evaluates $5 \sim {25}$ images with SfM reconstruction. The stereo task uses random sample consensus (RANSAC) [20] to estimate the matches between two images that satisfy motion consistency, thereby decomposing the pose into a rotation R and translation t. The multi-view task recovers the pose R and t of each image through 3D reconstruction. The performance of image matching is measured by the angle between the estimated pose vector and the ground-truth pose vector, computed from their cosine distance. + +Given an image pair with co-visibility constraints in the stereo task, or the 3D reconstruction from multi-view images, we may compute the following metrics in different experiments; the final results are compared with the mAA metric. + + * Mean average accuracy (%mAA) is computed by integrating the area under the accuracy curve up to a maximum threshold on the angular error between the ground-truth and estimated pose vectors. + + * Keypoint repeatability (%Rep.) 
measures the ratio of possible matches to the minimum number of keypoints in the co-visible view. + + * Descriptor matching score (%M.S.) is the average ratio of correct matches to the minimum number of keypoints in the co-visible view. + + * Mean absolute trajectory error (%mATE) measures the average deviation from the ground truth trajectory per frame. + + * False positive rate (%FPR) is the ratio between the number of negative events wrongly categorized as positive and the total number of actual negative events. + +§ 4.2 QUALITATIVE AND QUANTITATIVE RESULTS + +The data construction statistics for the Phototourism dataset are given in Table 1; the proposed dataset construction method greatly reduces the dataset size, which speeds up training and helps improve model performance. The training loss and FPR95 metric trends on the UBC Phototour dataset are shown in Figure 5 and Figure 6, which indicate the stability of the loss and illuminate the data characteristics. The comparison of matching visualizations on the stereo task is shown in Figure 7, where our proposed method outperforms the traditional RootSIFT based method with more accurate matches. Figure 8 shows visualizations on the Phototourism dataset for the stereo and multi-view tasks. + +To validate the loss across different scenes, Table 2 lists the correspondence evaluation on the UBC Phototour dataset, which shows the improvement of our proposed QHT loss over the HT (hinge triplet) loss. Table 3 reports our submitted final results on Phototourism challenge Track 1, where we rank 1st on both tasks. 
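The mAA numbers reported below can be understood through a simple sketch of the metric defined in Section 4.1.2; we assume 1° bins up to the cutoff for illustration, while the official evaluation's exact binning of the angular pose error may differ:

```python
def mAA(errors_deg, max_threshold_deg=10):
    """Mean average accuracy: the area under the accuracy-vs-threshold
    curve, approximated here by averaging the fraction of poses whose
    angular error falls within each 1-degree threshold."""
    n = len(errors_deg)
    accs = [sum(e <= t for e in errors_deg) / n
            for t in range(1, max_threshold_deg + 1)]
    return sum(accs) / len(accs)
```

With this definition, a method whose angular errors all fall below 1° scores 1.0, and errors beyond the 10° cutoff contribute nothing.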
+ +Table 1: Dataset construction information for two selected scenes and the average over the Phototourism dataset

Scenes | Type | Images | 3D Points | Patches
Buckingham palace | original | 1676 | 246035 | 4352977
Buckingham palace | sampled | 1257 | 72814 | 364070
Grand_place Brussels | original | 1080 | 209550 | 3206171
Grand_place Brussels | sampled | 810 | 78108 | 390540
All scenes (average) | original | 1183.9 | 165124.8 | 3567694.5
All scenes (average) | sampled | 887.6 | 63433.6 | 317168

Table 2: Patch correspondence evaluation with the FPR95 metric. We compare different loss functions on the UBC Phototour dataset with HardNet as the description network (* means the network is trained by our implementation). The best results are in bold.

Train | Liberty | Liberty | NotreDame | NotreDame
Test | NotreDame | Yosemite | Liberty | Yosemite
HardNet [8] | 0.62 | 2.14 | 1.47 | 2.67
HardNet-HT [8] | 0.53 | 1.96 | 1.49 | 1.84
HardNet-HT* | 0.50 | 1.96 | 1.48 | 1.61
HardNet-QHT* | 0.45 | 1.83 | 1.228 | 1.52

Figure 5: Training loss in different scenes of the UBC Phototour dataset; the differences between the losses are small, indicating that the improved loss is stable during model training.

Figure 6: FPR95 metric trend, using different scene combinations of the UBC Phototour dataset as the training and validation sets. On the same validation set, FPR95 when training on the Yosemite scene is higher, indicating false or difficult matches. 
(a) RootSIFT method (b) our proposed method

Figure 7: Visualization of correspondence on the stereo task for the traditional RootSIFT-based method and our proposed method (green represents a correct match, yellow encodes a match error within 5 pixels, red shows a wrong match)

Table 3: Phototourism challenge. Mean average accuracy in pose estimation with an error threshold of 10°. The results were submitted to the online evaluation. (We keep only the highest-ranked result among the submissions from each team.)

| Method | stereo mAA@10° | stereo rank | multi-view mAA@10° | multi-view rank | avg. rank |
| --- | --- | --- | --- | --- | --- |
| Ours | 0.61081 | 1 | 0.78288 | 2 | 1 |
| Luca et al. | 0.58300 | 12 | 0.77056 | 3 | 2 |
| Jiahui et al. | 0.57826 | 17 | 0.77056 | 5 | 3 |
| Ximin et al. | 0.5887 | 5 | 0.75127 | 14 | 4 |

§ 4.3 ABLATION STUDY

To fully understand each module in our proposed method, we evaluate the individual modules on the Phototourism validation dataset. In Table 4, our method is compared against several variants to see how the data construction and the improved HardNet help to improve the descriptor and, in turn, the performance on both the multi-view and stereo tasks. We evaluate four variants of the feature descriptor while keeping the other parts of the pipeline fixed: 1) RootSIFT; 2) the original pretrained HardNet; 3) our proposed improved HardNet; 4) our proposed improved HardNet with the data construction technique. The comparison of the different feature descriptors indicates that model performance increases substantially when adding the loss constraints to HardNet: it shows an 8% improvement in average mAA over both tasks; with data construction, the improved HardNet yields the best performance, obtaining an average mAA of 0.7894.
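The mAA@10° numbers above can be read as follows. A sketch, assuming the usual Image Matching Challenge definition of averaging pose accuracy over integer error thresholds from 1° to 10° (the exact binning is an assumption):

```python
def mean_average_accuracy(pose_errors_deg, max_threshold_deg=10):
    """mAA@10deg sketch: for each integer threshold t = 1..10 degrees,
    compute the fraction of image pairs whose pose error is below t,
    then average these accuracies over all thresholds."""
    n = len(pose_errors_deg)
    accs = [sum(e < t for e in pose_errors_deg) / n
            for t in range(1, max_threshold_deg + 1)]
    return sum(accs) / len(accs)
```

Because every threshold up to 10° contributes equally, very accurate poses are rewarded more than poses that barely clear the 10° limit.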
Table 4: Ablation study of the proposed HardNet and the data construction method on the Phototourism validation set

| Method | Stereo mAA@10° | Multi-view mAA@10° | Avg |
| --- | --- | --- | --- |
| RootSIFT | 0.6698 | 0.7258 | 0.6978 |
| Pretrained HardNet | 0.7317 | 0.7924 | 0.7621 |
| Improved HardNet | 0.7285 | 0.8159 | 0.7722 |
| Improved HardNet+CleanData | 0.7537 | 0.8252 | 0.7894 |

To evaluate whether the proposed modified OANet helps improve outlier pre-filtering, Table 5 compares four different outlier pre-filtering methods: 1) RootSIFT as the feature descriptor with ratio test and cross-check as the outlier filtering; 2) pretrained HardNet with ratio test and cross-check; 3) the proposed improved HardNet with ratio test and cross-check; 4) the proposed improved HardNet with the modified OANet. The comparison of the different outlier-removal methods indicates that the modified OANet outperforms the ratio test with cross-check by 4% in average mAA.

Table 5: Ablation study of the proposed OANet method on the Phototourism validation set.

| Method | Stereo mAA@10° | Multi-view mAA@10° | Avg |
| --- | --- | --- | --- |
| RootSIFT+RT+CC | 0.6698 | 0.7258 | 0.6978 |
| Pretrained HardNet+RT+CC | 0.7317 | 0.7924 | 0.7621 |
| Improved HardNet+RT+CC | 0.7537 | 0.8252 | 0.7894 |
| Improved HardNet+OANet | 0.7918 | 0.8658 | 0.8288 |

§ 5 CONCLUSION

This paper presents a novel image matching approach that integrates a data construction method to remove noise and ensure more efficient training, an improved HardNet with the QHT loss function to make the network's gradient descent more sensitive to hard samples, and a stricter OANet combined with a guided-matching algorithm to reduce the number of outliers passing the pre-filtering. The QHT loss strengthens the distance constraints, and the improved OANet applies a stricter judgment on positive samples; together, these stricter constraints produce more outstanding matching.
The proposed method is crucial for obtaining high-quality correspondences on a large-scale challenging dataset with illumination variations, viewpoint changes, and repeated textures. Our experiments show that this method achieves state-of-the-art performance on the public Phototourism Image Matching challenge.

Figure 8: Visualization of correspondence on the stereo task and of reconstruction points in the multi-view task. (green represents a correct match, yellow encodes a match error within 5 pixels, red shows a wrong match, blue shows the keypoints with correct 3D points)

# Service-based Analysis and Abstraction for Content Moderation of Digital Images

Online Submission ID: 2

## Abstract

This paper presents a service-based approach towards content moderation of digital visual media while browsing web pages. It enables the automatic analysis and classification of possibly offensive content, such as images of violence, nudity, or surgery, and applies common image abstraction techniques at different levels of abstraction to lower their affective impact.
The system is implemented using a microservice architecture that is accessible via a browser extension, which can be installed in most modern web browsers. It can be used to facilitate content moderation of digital visual media such as digital images or to enable parental control for child protection.

Index Terms: Computer systems organization-Client-server architectures; Computing methodologies-Image processing; Information systems-Content analysis and feature selection; Information systems-Browsers; Human-centered computing-Web-based interaction; Human-centered computing-Graphical user interfaces;

## 1 INTRODUCTION

This work's main objective is to support and facilitate human-driven moderation of digital visual media such as digital images, which are a dominant category in the domain of user-generated content, i.e., content that is acquired and uploaded to content providers. Content moderation usually requires humans who view content and decide whether it is appropriate, e.g., regarding violence, racism, nudity, or privacy; these aspects may have serious impacts in various directions, including legal and psychological issues.

To cope with negative impacts on the viewers' psychology and to alleviate legal issues, we propose a combination of content analysis and classification together with suitable image abstraction techniques that first detect inappropriate content and subsequently disguise and obfuscate content depictions or specific portions thereof (Fig. 1).

### 1.1 Problem Statement and Objectives

From a technical perspective, the implementation of such an approach needs to be independent of operating systems and processing hardware. Thus, we decided to use a service-based approach to detect and classify visual media content and to perform the respective abstraction techniques depending on the detection results. Deep-learning approaches are used that allow for defining what "offensive" means.
Such functionality can be integrated into web browsers using a browser extension based on a World Wide Web Consortium (W3C) draft standard. This way, the content moderation functionality can be integrated into professional IT solutions or used by means of end-user apps.

Facilitate Content Moderation (Objective-1): Today, content moderation is becoming ever more crucial for digital content providers (e.g., Facebook or YouTube) to fulfill their responsibilities and to implement ethical content handling [7]. Moderation comprises manual examination for the detection and classification of critical or inappropriate content. For some types of visual media content, detection and classification can already be performed semi-automatically using machine-learning approaches. However, since automated moderation is often limited (see, e.g., [13]), the final moderation decisions are often made by humans, who are required to consume the unfiltered content.

![01963e90-f00b-71b9-824d-ac443f7b16cc_0_920_525_737_1239_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_0_920_525_737_1239_0.jpg)

Figure 1: Application example for the combination of service-based analysis and image abstraction used for content moderation functionality provided by a browser extension (a).
It enables the classification of digital input images (left) displayed on web sites according to different categories (rows) and their respective disguising using adjustable image abstraction techniques (right), such as pixelation (c), cartoon stylization (e), or blur (g).

![01963e90-f00b-71b9-824d-ac443f7b16cc_1_152_165_713_251_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_1_152_165_713_251_0.jpg)

Figure 2: Example of using image content analysis in combination with image abstraction techniques to disguise possibly inappropriate content for subsequent manual moderation: (a) input image, (b) results of image segmentation, (c) completely abstracted image, (d) partial abstraction applied to the child only.

Recent studies indicate that workers concerned with these tasks are often subject to severe mental-health damage due to traumatic experiences or monotonous duties [10, 11, 29]. Interestingly, non-photorealistic rendering of these stimuli could potentially reduce their negative impact [1, 2, 12]. Therefore, these negative consequences could be mitigated by reducing the affective responses that arise from consuming the unfiltered content, using a combination of automatic analysis and abstraction techniques as follows (Fig. 2): (i) visual media content is analyzed, e.g., to detect, classify, and possibly perform a semantic segmentation; (ii) abstraction techniques are used to partially or completely disguise possibly offensive content prior to (iii) the interactive visual examination.

Service-oriented Architectures (Objective-2): To implement an approach for Objective-1 and to make it available to a wide range of applications and users, we set out to provide a prototypical micro-app (e.g., a browser extension) based on a microservice infrastructure. For this, two separate microservices, for analysis and abstraction respectively, are orchestrated by a content moderator service.
This enables a use-case-specific exchange, replacement, or extension of specific functionality or complete services without risking the overall functionality. By using a Hyper Text Markup Language (HTML) User Interface (UI) integrated into a W3C browser extension, the abstracted content can be interactively interpolated/blended with the original.

In current state-of-the-art systems, analysis and abstraction of images and videos are mostly performed using on-device computation. Thus, these systems' processing capabilities are limited by the device's hardware (Central Processing Unit (CPU) and Graphics Processing Unit (GPU)) and software (Operating System (OS)). Being subject to high heterogeneity (device ecosystem), this has major drawbacks concerning the applicability, maintainability, and provisioning of content-moderation applications. In particular, the software development process and the integration into 3rd-party applications are aggravated by (i) different operating systems (e.g., Windows, Linux, macOS, iOS, Android), (ii) heterogeneous hardware configurations of varying efficiency and Application Programming Interfaces (APIs) (e.g., OpenGL, Vulkan, Metal, DirectX), as well as (iii) different display sizes and formats. Further, on-device processing often does not scale with respect to increasing input complexity (e.g., the number of images or the increasing spatial resolution of camera sensors), which especially poses problems for mobile devices (e.g., battery drain or overheating).
### 1.2 Approach and Contributions

Concerning the aforementioned drawbacks of on-device processing, the proposed combination of standardized technology for micro-apps together with service-oriented architectures and infrastructures offers a variety of advantages. In particular, (i) implementation and testing of specific analysis and abstraction techniques need to be performed only once (controlling the system's software and hardware through virtualization), and (ii) functionality is offered to a wide range of web-based applications using standardized protocols, which can be easily integrated into 3rd-party applications and extended rapidly. This way, (iii) the proposed service-based approach can automatically scale service instances with respect to the input data complexity and the computing power required. Thus, software up-to-dateness and exchangeability can be easily achieved. Further, the software development process of web-based thin clients is less complex compared to rich clients. Together with the upcoming 5G telecommunication standard featuring (among others) high data rates, reduced latency, and energy savings, the presented approach seems feasible in stationary as well as mobile contexts. Finally, the intellectual property of the service providers can be effectively protected by not shipping the respective software components to customers, thus impeding reverse engineering.

## 2 BACKGROUND AND RELATED WORK

### 2.1 Visual Content Analysis using Neural Networks

Image analysis can be performed according to different tasks. Image classification, e.g., using the ResNet Convolutional Neural Network (CNN) architecture [9], determines how likely an image belongs to one or more specific categories. With object recognition, the goal is to identify objects displayed in images together with their respective bounding boxes.
For object recognition, CNN architectures such as YOLOv3 [23] and the Single Shot Detector (SSD) [17] can be applied. Another task is image segmentation, where objects and regions of certain semantics are identified on a per-pixel basis. This can be performed with the R-CNN approach of Girshick et al. [8]. The different kinds of CNNs need to be trained with datasets consisting of images labeled with categories, objects, or image regions. Public datasets are available to train and benchmark the performance of different CNN architectures. Popular examples are the Pascal VOC [4] and ADE20K [38] datasets. They contain labeled image data for image classification, object recognition, and even semantic segmentation. For the task of content moderation, the objects and categories are quite general; mostly everyday objects are contained. Another dataset is the Google Open Images dataset [14]. It contains about 9 million images labeled with object information. The object categories are hierarchically organized and span over 6000 categories. Another approach, which spans even more object categories, is YOLO9000 [22], a variation of the YOLOv3 [23] architecture that was trained on a dataset with more than 9000 different objects. However, this dataset is not publicly available.

Further, there are approaches for image analysis that are more directed towards content moderation. These are mostly offered in the form of RESTful APIs that allow for sending images and receiving analysis results. The Google Cloud Vision API [26] assigns scores to images depending on how likely they represent the categories adult, violence, medical, and spoof. The API of Sightengine [28] analyzes images for the occurrence of categories such as nudity, weapons, alcohol, drugs, scams, and other offensive content. Valossa [18] reviews cloud-based vendors supporting the classification of unsafe content and describes the difficulties of defining what inappropriate content actually is.
They conclude that content analysis models must be able to understand the context of objects and depicted situations in order to decide whether images contain unsafe content. They offer an evaluation dataset with images in 16 different content categories and benchmark several online RESTful APIs on it. However, these approaches neither offer public datasets nor specify which machine learning models they use exactly. In contrast, Yahoo [19] offers a trained CNN model that can be used free of charge. The Yahoo Open Not Safe For Work (NSFW) model is basically a ResNet [9] that was fine-tuned on a dataset of NSFW images depicting nudity and other offensive content. For a given image, it determines a score indicating how likely the image contains unsafe content.

![01963e90-f00b-71b9-824d-ac443f7b16cc_2_170_172_1461_612_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_2_170_172_1461_612_0.jpg)

Figure 3: Conceptual overview of the microservice architecture of the presented approach. The individual service components (blue) are communicating via RESTful APIs and are used by a browser extension (orange) that integrates into standard web technologies (gray).

### 2.2 Service-based Image Processing

Several software architectural patterns are feasible for implementing service-based image processing. One prominent style of building a web-based processing system for any kind of data is the service-oriented architecture [31]. It enables server developers to set up various processing endpoints, each providing a specific functionality and covering a different use case. These endpoints are accessible as a single entity to the client, i.e., the implementation is hidden from the requesting clients, but can be realized through an arbitrary number of self-contained services.

Since web services are usually designed to maximize their reusability, their functionality should be simple and atomic. Therefore, the composition of services is critical for fulfilling more complex use cases [15].
The two most prominent patterns for implementing such composition are choreography and orchestration. The choreography pattern describes decentralized collaboration directly between modules without a central component. The orchestration pattern describes collaboration through a central module, which requests the different web services and passes intermediate results between them.

In the field of image analysis, Wursch et al. [36] present a web-based tool that enables users to perform various image analysis methods, such as text-line extraction, binarization, and layout analysis. It is implemented using a number of Representational State Transfer (REST) web services. Application examples include multiple web-based applications for different use cases. Further, the viability of implementing image-processing web services using REST has been demonstrated by Winkler et al. [34], including the ease of combining endpoints. Another example of service-based image processing is Leadtools (https://www.leadtools.com), which provides a fixed set of approx. 200 image-processing functions with a fixed configuration set via a web API.

In this work, a similar approach using REST is chosen, however with a different focus in terms of service granularity. The advantages of using microservices are (i) increased scalability of the components, (ii) easy deployment and maintainability, as well as (iii) the possibility to introduce various technologies into one system [32]. For our work, we extend a microservice platform for cloud-based visual analysis and processing that was first presented by Richter et al. [25]. Building on that, Wegen et al. [33] present an approach for performing service-based image processing using software rendering to balance the cost-performance relation.

In the field of geodata, the Open Geospatial Consortium (OGC) has set standards for a complete server-client ecosystem.
As part of this specification, different web services for geodata are introduced [20]. Each web service is defined through specific input and output data and the ability to describe its own functionality. In contrast, in the domain of general image processing, there is no such standardization yet. However, it is possible to transfer concepts from the OGC standard, such as unified data models. These data models are implemented using a platform-independent effect format. In the future, it would be possible to transfer even more concepts set by the OGC to the general image-processing domain, such as the standardized self-description of services.

## 3 Method

With respect to Objective-2, we choose to implement our approach using microservices, which are described as follows.

### 3.1 System Overview

Fig. 3 shows a conceptual overview of the components as well as their data and control flow. The system comprises the following components that communicate via RESTful APIs:

Moderation Browser-Extension: A client device running a web browser that hosts (i) the moderation browser-extension and (ii) an arbitrary website with visual media content. The website's visual media content is hosted by a content provider and referenced by a Uniform Resource Locator (URL). The browser extension accesses the URLs via a content script and uses them to asynchronously query the RESTful API of the CMS.

Content Moderation Service (CMS): The CMS orchestrates the interplay between instances of the analysis and abstraction services, which encapsulate the respective techniques. Upon request, it downloads the image from the given URL and forwards its content to an analysis service instance by querying the analysis RESTful API. Depending on the analysis response, it uses the configuration data to map analysis results to specific parameter values that are used to query the image abstraction service.
The response is subsequently forwarded to the browser extension, which replaces a placeholder with the abstracted content.

Content Analysis Service (CAS): The CAS identifies whether an image contains offensive content and has to be filtered using image abstraction techniques. It receives image data from the CMS and performs image analysis with different image recognition models, including multiple image classification and object recognition CNNs. It then returns the results of the different analysis models in a unified and structured way.

Image Abstraction Service (IAS): The IAS provides an interface for applying various image abstraction techniques (e.g., blur, pixelation, or more specific operations such as cartoon stylization) with presets of different strengths to images that were identified by the CAS as possibly containing offensive content.

### 3.2 Browser-Extension for Moderation Client

The browser extension traverses the Document Object Model (DOM) tree and utilizes a MutationObserver object to detect changes in the respective image and picture tags; a DOM MutationObserver is provided by the web browser and is intended to watch for changes being made to the DOM tree. As soon as an image is detected, it is initially blurred using Cascading Style Sheets (CSS) image filters. This prevents users from seeing possibly disturbing image content while the CMS's RESTful API is queried and the image's URL is transmitted to it. The response received from the CMS contains information on whether or not an image contains disturbing content and therefore needs to be disguised. In case an image is categorized as offensive, the response also includes a processed version of the image.

![01963e90-f00b-71b9-824d-ac443f7b16cc_3_505_807_364_283_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_3_505_807_364_283_0.jpg)

Figure 4: Filtered image with an overlay shown on mouse-over.
After receiving the response, the local CSS blur filter is removed. If the image is found to contain suggestive content, the original image is replaced by the processed one; otherwise, the original image is displayed. Finally, an overlay is added for every image (Fig. 4) that provides buttons for (i) allowing users to report misclassified images and (ii) toggling between the original and the processed image. If a user does not agree with the determined content classification, they can propose a more suitable one. A modal (Fig. 5) appears where the user can indicate that another category is more suitable, that the image is disturbing in a different way, or that it should not be filtered.

### 3.3 Content Moderation Service

The CMS moderates communication and interactions between the browser extension, the analysis service, and the image abstraction service. Clients use it to initiate the analysis process, which consists of the following steps. First, the image is downloaded using the URL specified in the analysis request. Then, the image analyzer is queried to detect whether the image contains offensive content. Subsequently, the CMS maps the image analysis results to an image abstraction technique and forwards the image to the abstraction service for processing. Finally, it notifies the client whether the image contains offensive content and, if it does, attaches the processed image to the response. If a user decides to send feedback via the feedback modal (e.g., the chosen scenario is unsuitable), a request to a feedback route is sent and the feedback is stored for further use.

![01963e90-f00b-71b9-824d-ac443f7b16cc_3_996_151_579_376_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_3_996_151_579_376_0.jpg)

Figure 5: A feedback pop-up enables the correction of misclassifications and feeds this information back to the server.
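The CMS analysis flow described above can be condensed into a few lines. A minimal Python sketch (the production CMS is implemented in Node.js; `fetch`, `analyze`, `map_result`, and `abstract` stand in for the HTTP calls to the content provider, the CAS, the scenario configuration, and the IAS, and all names are illustrative assumptions):

```python
def moderate_image(url, fetch, analyze, map_result, abstract):
    """Sketch of the CMS analyze flow: download the image, query the
    analysis service, map the analysis result to an abstraction technique
    and preset, and query the abstraction service if necessary."""
    image = fetch(url)                  # download from the content provider
    analysis = analyze(image)           # CAS result, e.g. {"weapon": 0.9}
    mapping = map_result(analysis)      # -> (technique, preset) or None
    if mapping is None:                 # no scenario matched
        return {"offensive": False, "image": image}
    technique, preset = mapping
    return {"offensive": True, "image": abstract(image, technique, preset)}
```

Passing the service calls in as parameters keeps the orchestration logic testable with stubs in place of the RESTful APIs.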
### 3.4 Content Analysis & Abstraction Service

The CAS provides an interface for clients to analyze images for the presence of certain objects and categories. Clients send the image data and receive analysis results represented in the form of tags with associated scores and metadata. Tags can comprise objects displayed in an image or categories that can be associated with an image. The score describes how likely an object or category is present. The metadata can include an Axis-Aligned Bounding Box (AABB) that describes the estimated position of an object within the image.

The image analysis is performed with different machine learning models, in our case CNNs. The output, specific to each model used, is transformed into the unified analysis result format. This allows extending the analysis service with additional Machine Learning (ML) models. The results of the image analysis are used by the content moderator to decide what kind of content an image shows and whether an image abstraction is applied.

If offensive content is detected in an image, it is disguised by applying an image abstraction technique to it. The IAS provides an endpoint to apply specific techniques, such as pixelation or blur, to an image. For every technique, different presets and parameters are provided to achieve different degrees and styles of image abstraction. To query the abstraction endpoint, the IAS requires the image's data as well as a mapped abstraction technique and its preset (Sec. 4). The CMS performs this mapping by taking the analyzed scenario and score into account. In response, the processed image is returned by the IAS.

## 4 MAPPING OF ANALYSIS RESULTS

The analysis result for an image is a set of tags with scores. The tags describe objects that can be displayed in an image or categories that can be associated with it. The scores describe how likely these tags are actually present in an image.
An image abstraction, in the sense of content moderation, is the processing of a user-generated image with an image abstraction technique using a specific parameter preset. To process images with the goal of reducing explicit content, one has to define a mapping from analysis results to an image abstraction technique with a specific parameter preset.

In the proposed system, each tag is manually associated with a scenario. A scenario is a type of content that should be moderated; for this system, the scenarios nudity, violence, and medical are used. Each scenario is specified by: (i) a name, (ii) a set of tag names and a score threshold, (iii) an image abstraction technique, and (iv) three effect presets, sorted by degree of abstraction (low, medium, strong). A scenario matches an analysis result if any of the scenario's tag names are contained in the received analysis tags and the received score is equal to or higher than the scenario's score threshold. Because an analysis result could match multiple scenarios, the scenarios are prioritized and the matching scenario with the highest priority is chosen. If no scenario matches, no image abstraction is required. Otherwise, the user-defined preset is selected and used for the subsequent image abstraction step.

![01963e90-f00b-71b9-824d-ac443f7b16cc_4_145_158_729_1095_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_4_145_158_729_1095_0.jpg)

Figure 6: Images with similar content (left) but different mappings (right) based on how likely they are rated as showing violent content.

Fig. 6 shows four images with similar content but different mappings. All four images depict scenes with weapons and are categorized as showing violent content by the CAS. The mappings are chosen according to the score that indicates how likely these images show violent content. Fig. 6(a) is disguised using a Gaussian blur preset with a large kernel size (Fig. 6(b)). Fig.
6(d) shows an applied cartoon filter using a black & white preset with thin edges to remove color information. Fig. 6(f) uses a cartoon filter with thick edges. Fig. 6(h) shows the result of a pixelation technique for aggressive disguise. The abstraction technique mapped to an image is arbitrary and can be customized, but different abstraction techniques are more or less suitable for certain scenarios. In particular, this work uses a pixelation technique for images showing violent content, a cartoon filter for medical content based on the work of Winnemöller et al. [35] and as suggested by the studies of Besançon et al. [1, 2], and a Gaussian blur for images that depict nudity.

## 5 IMPLEMENTATION ASPECTS

In a service-based architecture such as the one used in the content moderation scenario, messages need to be exchanged between the individual services. Therefore, each service provides a RESTful API and queries other services accordingly. The browser extension is implemented using JavaScript (JS) and utilizes the browser's API to access and alter a website's DOM tree, to administer local storage, and to react to changes made to the website. For sending requests to other services, the fetch API is used. Filter options allow users to customize whether and to what extent suggestive images should be abstracted, and all three scenarios can be customized individually. Users can choose among three levels of abstraction or switch off single scenarios completely.

The CMS is implemented using Node.js and provides a RESTful API with two endpoints: one for requesting an image to be analyzed and categorized, and one for sending user feedback. A request sent to the analyze endpoint needs to include the URL of the image that should be analyzed and options that represent the filter settings made by the user.
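The scenario matching and preset selection described in Sec. 4 can be sketched as follows; field names and score cut-offs are illustrative assumptions, not the actual configuration schema:

```python
def match_scenario(analysis_tags, scenarios):
    """First-match scenario selection: scenarios are assumed to be sorted
    by descending priority; a scenario matches if any of its tag names
    appears in the analysis result with a score above its threshold."""
    for scenario in scenarios:
        for tag, score in analysis_tags.items():
            if tag in scenario["tags"] and score >= scenario["threshold"]:
                return scenario
    return None  # no match -> no abstraction required

def pick_preset(scenario, score):
    """Map the analysis score to one of the three presets sorted by
    degree of abstraction; the cut points 0.7/0.9 are assumptions."""
    low, medium, strong = scenario["presets"]
    if score >= 0.9:
        return strong
    if score >= 0.7:
        return medium
    return low
```

Keeping this mapping in configuration data (rather than code) is what lets the CMS swap techniques and presets per scenario without touching the services themselves.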
A feedback request consists of three different pieces of information: the data concerning the assessed image, the category proposed by the image analyzer service, and the category included in the user's feedback. This data is stored and can be used as a training set for machine learning algorithms employed during image classification. The implementation of the IAS also relies on Node.js. It provides an endpoint that accepts requests containing the data of the image to be abstracted and the operation that should be applied to it. A preset related to the desired effect can also be sent to this endpoint as an optional parameter. The CAS is implemented using Python and Flask. It provides a RESTful API with an endpoint to start the analysis process.

For the basic functionality of the CMS, two different kinds of neural networks are used: (i) a Single Shot MultiBox model [17] and (ii) the Yahoo Open NSFW model [19]. Single Shot MultiBox is a CNN architecture that performs object recognition on images. For a given image, it returns AABBs with a scenario and a confidence score (0 to 1). The confidence score indicates to what extent a scenario is detected in the image. An existing implementation for PyTorch was used, together with a classification model that was initially trained on the Pascal VOC dataset [4], albeit with very general object classes such as aeroplane, car, cow, dog, or TV monitor. Therefore, we also trained an SSD model on military-like classes of the Google Open Images Dataset [14] (such as rifle, tank, knife, and missile) to be able to make predictions on somewhat realistic explicit content. The Yahoo Open NSFW model returns an NSFW score for a given image (0: safe, no nudity detected; 1: not safe, nudity detected); an implementation and a trained model for TensorFlow are available, but a proper threshold for this score is required.
Such a threshold might differ according to the use case of the system and must be chosen carefully.

## 6 RESULTS AND DISCUSSION

### 6.1 Applications

The system described in this work offers advantages for the consumption, use, and eventually moderation of graphic content in different areas, which we now detail. First of all, it could assist medical and surgical education. A primary use case would be to facilitate the education of nurses and medical students (not all destined to be surgeons) by reducing affective responses and aversion when looking at images showing blood and medical acts [1, 2]. Similarly, another use case concerns the communication between surgeons and patients [1, 2]. Patients are usually informed and prepared before the planning of future surgeries, and the explanations can be facilitated by the use of images. Yet, laypeople often find looking at images depicting surgery or blood extremely difficult [6, 27]. Communication between patients and doctors could therefore be improved with such automated image processing tools.

Furthermore, the system can be used to moderate internet forums and social networks. Nowadays, a lot of digital media, including images and videos, are shared through social media platforms such as Facebook or Instagram. While graphic content sometimes adheres to the Terms of Services (TOSs) of these platforms [5, 30], many graphic media are not accepted for ethical or legal reasons. The filtering between authorized and non-authorized content is performed by a combination of algorithms and people, or by people alone (content moderators), depending on the platform. The software system presented in this paper could facilitate a content moderator's work and help to prevent the mental issues that can be a consequence of looking at disturbing pictures all day [21]. The system could also alleviate the toll on volunteer moderators of platforms such as Wikipedia or Reddit.
In a similar fashion, journalists and news editors might have to browse through hundreds of pieces of shocking content to illustrate their articles or to better understand the case they are reporting on (e.g., war zones, disasters, accidents). Our automated tool could also be useful in this specific context.

![01963e90-f00b-71b9-824d-ac443f7b16cc_5_163_212_1408_492_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_5_163_212_1408_492_0.jpg)

Figure 7: Distribution of request times per day (a) and hour (b). The center line of each box shows the mean, the boxes span the quartiles, and the whiskers show the rest of the distribution without obvious outliers.

![01963e90-f00b-71b9-824d-ac443f7b16cc_5_167_866_641_431_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_5_167_866_641_431_0.jpg)

Figure 8: Mean request times according to different tasks.

Finally, since exposure to graphic or pornographic content has been shown to be particularly detrimental to children [16], our tool could be particularly interesting in this regard. While blocking software could be used to limit access to nudity or pornography, such software tends to also limit access to useful information (e.g., online sex education) [24], is rarely maintained or even used [3], and is unlikely to block access to content on some social media platforms. Moreover, blocking software rarely targets all sorts of graphic content. With respect to this issue, we hope that our tool can help limit the impact of unwanted graphic content, rather than eliminate it completely along with potentially useful information.

### 6.2 Performance Evaluation

We focus here only on system performance, as the potential reduction of affective responses through image abstraction techniques has already been studied [1, 2, 37].
The system's performance was evaluated by timing the different tasks involved in the process of content moderation and by logging metadata of the requests and processed images (time stamp, image resolution, image transformation). The tasks timed on the CMS were (i) fetching the image, (ii) performing image analysis, and (iii) performing image transformation by abstraction. The CMS, CAS, and IAS are hosted on a single dedicated GPU server without a significant network overhead. Thus, the network request times between clients and the services are assumed to be independent of our system and are not considered. To evaluate the performance in a way that considers all required tasks equally, only requests that led to an image transformation (an unsafe image was detected) were considered. The dedicated GPU server is equipped with a Xeon E5-2637 v4 processor (3.5 GHz, 8 cores), 64 GB RAM, and an NVIDIA Quadro M6000 GPU with 24 GB VRAM.

Over a week of testing the extension, about 35,000 requests involving image transformation were logged in total. Fig. 7(a) and Fig. 7(b) show the distribution of total request times per day and hour. These are mostly independent of day and hour, with slight variances that could be explained by a varying load of the server or a different number of requests arriving in a shorter period of time. Fig. 8 shows the mean time required for each task per day. The image analysis requires $\approx 75\%$ of the mean request time, with some outliers on day 6, where fetching the images suddenly takes up as much time as analyzing the images on average. Fig. 9(a) shows the resolution of images over the time required for analysis.

We further tested whether images of high resolution impact analysis performance. The documentation of the used CNNs describes that image data is propagated through the networks at a constant resolution, i.e., a downsampling is required before propagation through the CNNs. Fig.
9(a) shows that images have similar analysis times independently of their resolution and the required downsampling step. A further question is whether very small images (such as icons) cause a performance overhead if they appear on websites very often. Images that are smaller than ${32} \times {32}$ pixels are highlighted in red within Fig. 9(a). One can see that they need similar inference times compared to all other images. Further analysis of the statistics also indicates that they account for only $\approx 5\%$ of all images by count and for less than one percent of the total request time. A similar analysis was performed for the image transformation task. Relating the image resolution to the image transformation time (Fig. 9(b)) shows a linear dependency between image resolution (as number of total pixels) and transformation time. However, this does not severely impact the overall performance, as image analysis is slower by a factor of about 10. Small images are highlighted again; they account for only $\approx 5\%$ by count and about $\approx 4\%$ of the total time.

![01963e90-f00b-71b9-824d-ac443f7b16cc_6_166_213_1409_490_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_6_166_213_1409_490_0.jpg)

Figure 9: Relating the resolution of input images with analysis inference time (a) and image transformation time (b). Images smaller than ${32} \times {32}$ pixels are highlighted in red.

![01963e90-f00b-71b9-824d-ac443f7b16cc_6_168_870_637_433_0.jpg](images/01963e90-f00b-71b9-824d-ac443f7b16cc_6_168_870_637_433_0.jpg)

Figure 10: Relating requests per minute and the total request time.

Finally, we evaluated whether a high load (possibly caused by a high number of requests in a short time) causes higher total request times. To this end, each request is plotted as a point with the number of requests in that minute and its required time (Fig. 10).
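The small-image shares reported above amount to a computation of the following form over the logged records; the sample data below is invented for illustration and does not reproduce the study's measurements.

```python
def small_image_share(images, max_side=32):
    """Fraction of images smaller than max_side x max_side,
    both by count and by share of total processing time."""
    small = [img for img in images
             if img["width"] < max_side and img["height"] < max_side]
    count_share = len(small) / len(images)
    total_time = sum(img["ms"] for img in images)
    time_share = sum(img["ms"] for img in small) / total_time
    return count_share, time_share

# Invented sample records: width, height, processing time in ms.
sample = [
    {"width": 16, "height": 16, "ms": 2},
    {"width": 640, "height": 480, "ms": 40},
    {"width": 1920, "height": 1080, "ms": 120},
    {"width": 800, "height": 600, "ms": 38},
]
count_share, time_share = small_image_share(sample)
```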
Fig. 10 does not show any strong correlation between the number of requests per minute and the total request time, though; perhaps the load generated by the requests was simply not high enough.

### 6.3 Limitations

Applying specific filters to weaken affect when looking at medical images might work differently or not at all depending on the user and the specific image at hand. More generally, it is impossible to find a definition of offensive content that fits all users. What is perceived as "graphic" highly depends on the perception, age, views, cultural background, and personal history of the user, which results in many different potential use cases for each individual user. Even if a clear definition were possible, modern computer vision approaches are not able to correctly recognize offensive content all of the time. Additionally, detecting objects that are known to be offensive is not enough: the context in which these objects appear, as well as the overall image composition, can completely change the meaning.

We do not have the capacity to train accurate neural networks for real use cases because there are no acceptable, publicly available datasets of offensive content. Even with a proper dataset, it is unlikely that a perfect neural network could be trained that classifies all images correctly and detects offensive content every time.

Regarding the browser extension, it is difficult to even detect all the images on websites, since there are a number of different ways to integrate an image into a web page, e.g., through custom HTML elements or extensive JS usage. If images are loaded asynchronously via JS and many images change simultaneously, the extension is not able to react fast enough to all of the changes, resulting in "unobfuscated" or "non-abstracted" images. Moreover, JS is executed in a single thread in all established web browsers, which increases the occurrence of timing-related problems.
Additionally, a very low bandwidth could slow down the processing of images. The impact would be comparably small, however, because not much additional data is sent and the client still downloads each image only once.

## 7 CONCLUSIONS AND FUTURE WORK

This paper presents a service-based approach to facilitate the consumption of digital graphic images online. To achieve this, an automatic analysis and classification of possibly offensive content is performed using services, and, based on the results, image abstraction techniques are applied with varying levels of abstraction. This functionality is accessed and configured via a browser extension that is supported by most modern web browsers. The presented content moderation approach has various applications, such as reducing affective responses during medical education, allowing less distressing browsing of social media, or enabling safer browsing for child protection.

Regarding future work, the user experience of the extension could be improved as follows. Modern object recognition algorithms are not only able to detect certain objects in images but are also able to locate them. With respect to this, image abstraction techniques can be applied to segments of the image to maintain the context and only abstract the sensitive image regions. This might help the user to identify whether an image was classified correctly. As an alternative, the image abstraction techniques could be applied to the complete image, but the user could be given the option to interactively reveal parts of the image using lens-based interaction metaphors.

## REFERENCES

[1] L. Besançon, A. Semmo, D. J. Biau, B. Frachet, V. Pineau, E. H. Sariali, M. Soubeyrand, R. Taouachi, T. Isenberg, and P. Dragicevic. Reducing Affective Responses to Surgical Images and Videos Through Stylization. Computer Graphics Forum, 39(1):462-483, Jan. 2020. doi: 10.1111/cgf.13886

[2] L. Besançon, A. Semmo, D. J. Biau, B. Frachet, V. Pineau, E. H. Sariali, R. Taouachi, T.
Isenberg, and P. Dragicevic. Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization. In Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering, pp. 4:1-4:13. ACM Press, Victoria, Canada, Aug. 2018. doi: 10.1145/3229147.3229158

[3] A. Davis, C. Wright, M. Curtis, M. Hellard, M. Lim, and M. Temple-Smith. 'Not my child': Parenting, pornography, and views on education. Journal of Family Studies, pp. 1-16, 2019. doi: 10.1080/13229400.2019.1657929

[4] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) challenge. International Journal of Computer Vision, 88(2):303-338, June 2010.

[5] Facebook. Community standards. https://www.facebook.com/communitystandards/objectionable_content. Accessed: 2021-04-07.

[6] P. T. Gilchrist and B. Ditto. The effects of blood-draw and injection stimuli on the vasovagal response. Psychophysiology, 49(6):815-820, 2012. doi: 10.1111/j.1469-8986.2012.01359.x

[7] T. Gillespie. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press, 2018. doi: 10.12987/9780300235029

[8] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524, 2013.

[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.

[10] E. A. Holman, D. R. Garfin, and R. C. Silver. Media's role in broadcasting acute stress following the Boston Marathon bombings. Proceedings of the National Academy of Sciences, 111(1):93-98, 2014. doi: 10.1073/pnas.1316265110

[11] T. L. Hopwood and N. S. Schutte. Psychological outcomes in reaction to media exposure to disasters and large-scale violence: A meta-analysis. Psychology of Violence, 7(2):316, 2017.
doi: 10.1037/vio0000056

[12] T. Isenberg. Interactive NPAR: What type of tools should we create? In Proceedings of Expressive, pp. 89-96, 2016. doi: 10.2312/exp.20161067

[13] S. Jhaver, I. Birman, E. Gilbert, and A. Bruckman. Human-machine collaboration for content regulation: The case of Reddit Automoderator. ACM Trans. Comput.-Hum. Interact., 26(5), July 2019. doi: 10.1145/3338243

[14] I. Krasin, T. Duerig, N. Alldrin, A. Veit, S. Abu-El-Haija, S. Belongie, D. Cai, Z. Feng, V. Ferrari, V. Gomes, A. Gupta, D. Narayanan, C. Sun, G. Chechik, and K. Murphy. OpenImages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2016.

[15] A. L. Lemos, F. Daniel, and B. Benatallah. Web service composition: A survey of techniques and tools. ACM Computing Surveys, 48(3):33:1-33:41, 2015. doi: 10.1145/2831270

[16] M. S. C. Lim, E. R. Carrotte, and M. E. Hellard. The impact of pornography on gender-based violence, sexual health and well-being: What do we know? Journal of Epidemiology & Community Health, 70(1):3-5, 2016. doi: 10.1136/jech-2015-205453

[17] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. E. Reed, C. Fu, and A. C. Berg. SSD: Single shot multibox detector. CoRR, abs/1512.02325, 2015.

[18] V. L. Ltd. Automatic visual content moderation - reviewing the state-of-the-art vendors in cognitive cloud services to spot unwanted content. https://valossa.com/automatic-visual-content-moderation/. Accessed 2021-04-07.

[19] J. Mahadeokar and G. Pesavento. Open sourcing a deep learning solution for detecting NSFW images. https://yahooeng.tumblr.com/post/151148689421/open-sourcing-a-deep-learning-solution-for. Accessed 2021-04-07.

[20] M. Mueller and B. Pross. OGC WPS 2.0.2 Interface Standard. Open Geospatial Consortium, 2015. http://docs.opengeospatial.org/is/14-065/14-065.html.

[21] C. Newton. The trauma floor.
https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona, The Verge, Feb. 2019. Accessed: 2021-04-07.

[22] J. Redmon and A. Farhadi. YOLO9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242, 2016.

[23] J. Redmon and A. Farhadi. YOLOv3: An incremental improvement. arXiv, 2018.

[24] C. R. Richardson, P. J. Resnick, D. L. Hansen, H. A. Derry, and V. J. Rideout. Does pornography-blocking software block access to health information on the Internet? JAMA, 288(22):2887-2894, 2002. doi: 10.1001/jama.288.22.2887

[25] M. Richter, M. Söchting, A. Semmo, J. Döllner, and M. Trapp. Service-based processing and provisioning of image-abstraction techniques. In Proceedings International Conference on Computer Graphics, Visualization and Computer Vision (WSCG), pp. 97-106. Computer Science Research Notes (CSRN), Plzen, Czech Republic, 2018. doi: 10.24132/CSRN.2018.2802.13

[26] S. Robinson. Filtering inappropriate content with the Cloud Vision API, Aug. 2016.

[27] C. N. Sawchuk, J. M. Lohr, D. H. Westendorf, S. A. Meunier, and D. F. Tolin. Emotional responding to fearful and disgusting stimuli in specific phobics. Behaviour Research and Therapy, 40(9):1031-1046, 2002. doi: 10.1016/S0005-7967(01)00093-6

[28] Sightengine. Sightengine online API. https://sightengine.com/demo. Accessed 2021-04-07.

[29] R. R. Thompson, N. M. Jones, E. A. Holman, and R. C. Silver. Media exposure to mass violence events can fuel a cycle of distress. Science Advances, 5(4), 2019. doi: 10.1126/sciadv.aav3502

[30] Twitter. Twitter's sensitive media policy. https://help.twitter.com/en/rules-and-policies/media-policy. Accessed: 2021-04-07.

[31] M.-F. Vaida, V. Todica, and M. Cremene. Service oriented architecture for medical image processing. International Journal of Computer Assisted Radiology and Surgery, 3(3):363-369, 2008. doi: 10.1007/s11548-008-0231-8

[32] M. Viggiato, R. Terra, H. Rocha, M.
T. Valente, and E. Figueiredo. Microservices in practice: A survey study. CoRR, abs/1808.04836, 2018.

[33] O. Wegen, M. Trapp, J. Döllner, and S. Pasewaldt. Performance evaluation and comparison of service-based image processing based on software rendering. In Proceedings International Conference on Computer Graphics, Visualization and Computer Vision (WSCG), pp. 127-136. Computer Science Research Notes (CSRN), Plzen, Czech Republic, 2019. doi: 10.24132/csrn.2019.2901.1.15

[34] R. P. Winkler and C. Schlesiger. Image processing REST web services. Technical Report ARL-TR-6393, Army Research Laboratory, Adelphi, MD 20783-119, 2013.

[35] H. Winnemöller, S. C. Olsen, and B. Gooch. Real-time video abstraction. ACM Transactions on Graphics, 25(3):1221-1226, 2006.

[36] M. Würsch, R. Ingold, and M. Liwicki. DivaServices - a RESTful web service for document image analysis methods. Digital Scholarship in the Humanities, 32(1):i150-i156, 2017. doi: 10.1093/llc/fqw051

[37] H. Yang, J. Han, and K. Min. EEG-based estimation on the reduction of negative emotions for illustrated surgical images. Sensors, 20(24), 2020. doi: 10.3390/s20247103

[38] B. Zhou, H. Zhao, X. Puig, T. Xiao, S. Fidler, A. Barriuso, and A. Torralba. Semantic understanding of scenes through ADE20K dataset. International Journal on Computer Vision, 127(3):302-321, 2019.
§ SERVICE-BASED ANALYSIS AND ABSTRACTION FOR CONTENT MODERATION OF DIGITAL IMAGES

Online Submission ID: 2

§ ABSTRACT

This paper presents a service-based approach towards content moderation of digital visual media while browsing web pages. It enables the automatic analysis and classification of possibly offensive content, such as images of violence, nudity, or surgery, and applies common image abstraction techniques at different levels of abstraction to lower their affective impact. The system is implemented using a microservice architecture that is accessible via a browser extension, which can be installed in most modern web browsers. It can be used to facilitate content moderation of digital visual media such as digital images or to enable parental control for child protection.
Index Terms: Computer systems organization-Client-server architectures; Computing methodologies-Image processing; Information systems-Content analysis and feature selection; Information systems-Browsers; Human-centered computing-Web-based interaction; Human-centered computing-Graphical user interfaces

§ 1 INTRODUCTION

This work's main objective is to support and facilitate human-driven moderation of digital visual media such as digital images, which are a dominant category in the domain of user-generated content, i.e., content that is acquired and uploaded to content providers. Content moderation usually requires humans who view and decide whether content is considered to be appropriate or not, e.g., regarding violence, racism, nudity, privacy, etc.; these aspects may have serious impacts in various directions, including legal and psychological issues.

To cope with negative impacts on the viewers' psychology and to alleviate legal issues, we propose a combination of content analysis and classification together with suitable image abstraction techniques that first detect inappropriate content and, subsequently, disguise and obfuscate content depictions or specific portions (Fig. 1).

§ 1.1 PROBLEM STATEMENT AND OBJECTIVES

From a technical perspective, the implementation of such an approach needs to be independent of operating systems and processing hardware. Thus, we decided to use a service-based approach to detect and classify visual media content and to perform the respective abstraction techniques depending on the detection results. Deep-learning approaches are used that allow for defining what "offensive" means. Such functionality can be integrated into web browsers using a browser extension based on a World Wide Web Consortium (W3C) draft standard. This way, the content moderation functionality can be applied and integrated into professional IT solutions or can be used by means of end-user apps.
Facilitate Content Moderation (Objective-1): Today, content moderation is becoming increasingly crucial for digital content providers (e.g., Facebook or YouTube) to fulfill their responsibilities and to implement ethical content handling [7]. Moderation comprises manual examination for the detection and classification of critical or inappropriate content. For some types of visual media content, detection and classification can already be performed semi-automatically using machine-learning approaches. However, since automated moderation is often limited (see, e.g., [13]), the final moderation decisions are often made by humans, who are required to consume the unfiltered content.

Figure 1: Application example for the combination of service-based analysis and image abstraction used for content moderation functionality provided by a browser extension (a). It enables the classification of digital input images (left) displayed on web sites according to different categories (rows) and their respective disguising using adjustable image abstraction techniques (right), such as pixelation (c), cartoon stylization (e), or blur (g).

Figure 2: Example of using image content analysis in combination with image abstraction techniques to disguise possibly inappropriate content for subsequent manual moderation: (a) input image, (b) results of image segmentation, (c) completely abstracted image, (d) partial abstraction techniques applied to the child only.

Recent studies indicate that workers concerned with these tasks are often subject to severe mental health damage due to traumatic experiences or monotonous duties [10, 11, 29]. Interestingly, non-photorealistic rendering of these stimuli could potentially reduce their negative impacts [1, 2, 12].
Therefore, these negative consequences could be mitigated by reducing the affective responses that arise from consuming the unfiltered content, using a combination of automatic analysis and abstraction techniques as follows (Fig. 2): (i) visual media content is analyzed, e.g., to detect, classify, and possibly perform a semantic segmentation; (ii) abstraction techniques are used to partially or completely disguise possibly offensive content prior to (iii) the interactive visual examination.

Service-oriented Architectures (Objective-2): To implement an approach for Objective-1 and to make it available to a wide range of applications and users, we set out to provide a prototypical micro-app (e.g., a browser extension) based on a microservice infrastructure. For this, two separate microservices, for analysis and abstraction respectively, are orchestrated by a content moderator service. This enables a use-case-specific exchange, replacement, or extension of specific functionality or complete services without risking the overall functionality. By using a Hyper Text Markup Language (HTML) User Interface (UI) integrated into a W3C browser extension, the abstracted content can be interpolated/blended with the original one interactively.

In current state-of-the-art systems, analysis and abstraction of images and videos are mostly performed using on-device computation. Thus, these systems' processing capabilities are limited by the device's hardware (Central Processing Unit (CPU) and Graphics Processing Unit (GPU)) and software (Operating System (OS)).
Being subject to high heterogeneity (device ecosystem), this has major drawbacks concerning the applicability, maintainability, and provisioning of content-moderation applications. In particular, the software development process and the integration into 3rd-party applications are aggravated by: (i) different operating systems (e.g., Windows, Linux, macOS, iOS, Android), (ii) heterogeneous hardware configurations of varying efficiency and Application Programming Interfaces (APIs) (e.g., OpenGL, Vulkan, Metal, DirectX), as well as (iii) display sizes and formats. Further, on-device processing often does not scale with respect to the increasing input complexity (e.g., number of images, increasing spatial resolution of camera sensors), which especially poses problems for mobile devices (e.g., battery drain or overheating).

§ 1.2 APPROACH AND CONTRIBUTIONS

Concerning the aforementioned drawbacks of on-device processing, the proposed combination of standardized technology for micro-apps together with service-oriented architectures and infrastructures offers a variety of advantages. In particular: (i) implementation and testing of specific analysis and abstraction techniques need to be performed only once (controlling the system's software and hardware due to virtualization); (ii) functionality is offered to a wide range of web-based applications using standardized protocols, can easily be integrated into 3rd-party applications, and can be extended rapidly; and (iii) the proposed service-based approach can automatically scale service instances with respect to the input data complexity and the computing power required. Thus, software up-to-dateness and exchangeability can be easily achieved. Further, the software development process of web-based thin clients is less complex compared to rich clients.
Together with the upcoming 5G telecommunication standard featuring (among others) high data rates, reduced latency, and energy savings, the presented approach appears feasible for stationary as well as mobile contexts. Finally, the intellectual property of the service providers can be effectively protected by not shipping the respective software components to customers, thus impeding reverse engineering.

§ 2 BACKGROUND AND RELATED WORK

§ 2.1 VISUAL CONTENT ANALYSIS USING NEURAL NETWORKS

Image analysis can be performed according to different tasks. Image classification, e.g., using the ResNet Convolutional Neural Network (CNN) architecture [9], determines how likely an image belongs to one or more specific categories. With object recognition, the goal is to identify objects displayed in images together with their respective bounding boxes. For object recognition, CNN architectures such as YOLOv3 [23] and the Single Shot Detector (SSD) [17] can be applied. Another task is image segmentation, where objects and regions of certain semantics are identified on a per-pixel basis. This can be performed with the R-CNN approach of Girshick et al. [8]. The different kinds of CNNs need to be trained with datasets consisting of images labeled with categories, objects, or image regions. There are public datasets available to train and to benchmark the performance of different CNN architectures. Popular examples are the Pascal VOC [4] and ADE20K [38] datasets. They contain labeled image data for image classification, object recognition, and even semantic segmentation. For the task of content moderation, the objects and categories are quite general; they mostly cover everyday objects. Another dataset is the Google Open Images dataset [14]. It contains about 9 million images labeled with object information. The object categories are hierarchically organized and span over 6,000 categories.
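For the classification task described above, turning per-category likelihoods into a decision can be sketched as follows; the category names and the cut-off value are invented for illustration and do not come from a specific model.

```python
def top_categories(likelihoods, threshold=0.5):
    """Return the categories an image likely belongs to, sorted by
    descending likelihood. `likelihoods` maps category name to a
    score in [0, 1]; the threshold is an illustrative assumption."""
    hits = [(score, name) for name, score in likelihoods.items()
            if score >= threshold]
    return [name for score, name in sorted(hits, reverse=True)]

# Invented example scores for a single image.
scores = {"adult": 0.1, "violence": 0.8, "medical": 0.6, "spoof": 0.2}
labels = top_categories(scores)
```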
Another approach, which spans even more object categories, is YOLO9000 [22]. YOLO9000 is a variant of the YOLO detection architecture that was trained on a dataset with more than 9,000 different object categories. However, this dataset is not publicly available.

Further, there are approaches for image analysis that are more directed towards content moderation. These are mostly presented in the form of RESTful APIs that allow for sending images and receiving analysis results. The Google Cloud Vision API [26] assigns scores to images depending on how likely they represent the categories adult, violence, medical, and spoof. The API of Sightengine [28] analyzes images for the occurrence of categories such as nudity, weapons, alcohol, drugs, scams, and other offensive content. Valossa [18] reviews cloud-based vendors supporting the classification of unsafe content and describes the difficulties of defining what inappropriate content actually is. They conclude that content analysis models must be able to understand the context of objects and depicted situations in order to decide whether images contain unsafe content. They offer an evaluation dataset with images in 16 different content categories and benchmark several online RESTful APIs on it. However, these approaches do not offer any public datasets or specify which machine learning models they use exactly. In contrast, Yahoo [19] offers a trained CNN model that can be used free of charge. The Yahoo Open Not Safe For Work (NSFW) model is basically a ResNet [9] that was fine-tuned on a dataset of NSFW images depicting nudity and other offensive content. For a given image, it determines a score indicating how likely the image contains unsafe content.

Figure 3: Conceptual overview of the microservice architecture of the presented approach. The individual service components (blue) communicate via RESTful APIs and are used by a browser extension (orange) that integrates into standard web technologies (gray).
§ 2.2 SERVICE-BASED IMAGE PROCESSING

Several software architectural patterns are feasible for implementing service-based image processing. However, one prominent style of building a web-based processing system for any data is the service-oriented architecture [31]. It enables server developers to set up various processing endpoints, each providing a specific functionality and covering a different use case. These endpoints are accessible as a single entity to the client, i.e., the implementation is hidden from the requesting clients but can be realized through an arbitrary number of self-contained services.

Since web services are usually designed to maximize their reusability, their functionality should be simple and atomic. Therefore, the composition of services is critical for fulfilling more complex use cases [15]. The two most prominent patterns for implementing such a composition are choreography and orchestration. The choreography pattern describes decentralized collaboration directly between modules without a central component. The orchestration pattern describes collaboration through a central module, which requests the different web services and passes the intermediate results between them.

In the field of image analysis, Würsch et al. [36] present a web-based tool that enables users to perform various image analysis methods, such as text-line extraction, binarization, and layout analysis. It is implemented using a number of Representational State Transfer (REST) web services. Application examples include multiple web-based applications for different use cases. Further, the viability of implementing image-processing web services using REST has been demonstrated by Winkler et al. [34], including the ease of combining endpoints. Another example of service-based image processing is Leadtools (https://www.leadtools.com), which provides a fixed set of approx. 200 image-processing functions with a fixed configuration set via a web API.
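The orchestration pattern described above can be illustrated with a minimal sketch in which a central module calls each service in turn and forwards the intermediate results; the two service functions are local stubs standing in for remote RESTful services, and all names and values are invented for illustration.

```python
# Stub services standing in for the analysis and abstraction services;
# in a real deployment these would be separate RESTful microservices.
def analysis_service(image):
    # Hypothetical analysis result: flag the image based on a score.
    return {"offensive": image.get("nsfw", 0.0) > 0.5}

def abstraction_service(image, effect):
    # Return a copy of the image record annotated with the applied effect.
    return {**image, "effect": effect}

def orchestrator(image):
    """Central module of the orchestration pattern: it requests the
    analysis service and, depending on the intermediate result,
    forwards the image to the abstraction service."""
    result = analysis_service(image)
    if result["offensive"]:
        return abstraction_service(image, "blur")
    return image

moderated = orchestrator({"url": "a.jpg", "nsfw": 0.9})
```

In a choreographed variant, the analysis service itself would call the abstraction service; the orchestrator here keeps that control flow in one central place instead.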
+ +In this work, a similar approach using REST is chosen, however, with a different focus in terms of the granularity of services. The advantages of using microservices are (i) increased scalability of the components, (ii) easy deployment and maintainability, as well as (iii) the possibility of introducing various technologies into one system [32]. For our work, we are extending a microservice platform for cloud-based visual analysis and processing that was first presented by Richter et al. [25]. Based on that platform, Wegen et al. [33] present an approach for performing service-based image processing using software rendering to balance the cost-performance relation. + +In the field of geodata, the Open Geospatial Consortium (OGC) has set standards for a complete server-client ecosystem. As part of this specification, different web services for geodata are introduced [20]. Each web service is defined through specific input and output data and the ability to describe its own functionality. In contrast, in the domain of general image processing there is no such standardization yet. However, it is possible to transfer concepts from the OGC standard, such as unified data models. These data models are implemented using a platform-independent effect format. In the future, even more concepts set by the OGC could be transferred to the general image-processing domain, such as the standardized self-description of services. + +§ 3 METHOD + +With respect to Objective-2, we choose to implement our approach using microservices, which are described as follows. + +§ 3.1 SYSTEM OVERVIEW + +Fig. 3 shows a conceptual overview of the components as well as their data and control flow. The system comprises the following components that communicate via RESTful APIs: + +Moderation Browser-Extension: A client device running a web browser that (i) hosts the moderation browser extension and (ii) displays an arbitrary website with visual media content.
The website's visual media content is hosted by a content provider and referenced by a Uniform Resource Locator (URL). The browser extension accesses these URLs via a content script and uses them to query the RESTful API of the CMS asynchronously. + +Content Moderation Service (CMS): The CMS orchestrates the interplay between instances of the analysis and abstraction services, which encapsulate the respective techniques. Upon request, it downloads the image from the given URL and forwards its content to an analysis service instance by querying the analysis RESTful API. Depending on the analysis response, it uses the configuration data to map analysis results to specific parameter values that are used to query the image abstraction service. The response is subsequently forwarded to the browser extension, which replaces the placeholder content with the abstracted content. + +Content Analysis Service (CAS): The CAS identifies whether an image contains offensive content and thus has to be filtered using image abstraction techniques. It receives image data from the CMS and performs image analysis with different image recognition models as well as multiple image classification and object recognition CNNs. It then returns the results of the different analysis models in a unified and structured way. + +Image Abstraction Service (IAS): The IAS provides an interface for applying various image abstraction techniques (e.g., blur, pixelation, or more specific operations such as cartoon stylization) with presets of different strengths to images that the CAS identified as possibly containing offensive content. + +§ 3.2 BROWSER-EXTENSION FOR MODERATION CLIENT + +The browser extension traverses the Document Object Model (DOM) tree and utilizes a MutationObserver object to detect changes in the respective image and picture tags; a MutationObserver is provided by the web browser and watches for changes made to the DOM tree.
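The orchestration performed by the CMS can be sketched as a simple pipeline. The function names below are hypothetical and the service calls are injected as plain callables so the control flow is visible in isolation; the actual CMS is implemented in Node.js and calls the CAS and IAS over HTTP.

```python
# Sketch of the CMS orchestration flow (hypothetical names; the real
# services communicate via RESTful APIs rather than Python callables).

def moderate_image(url, download, analyze, choose_abstraction, abstract):
    """Orchestrate analysis and abstraction for one image URL.

    The service callables are injected so the flow can be exercised
    without running actual CAS/IAS instances.
    """
    image = download(url)                 # fetch image from the content provider
    tags = analyze(image)                 # query the Content Analysis Service
    mapping = choose_abstraction(tags)    # map analysis result to a technique
    if mapping is None:                   # no offensive content detected
        return {"offensive": False, "image": None}
    processed = abstract(image, mapping)  # query the Image Abstraction Service
    return {"offensive": True, "image": processed}
```

If no abstraction mapping applies, the original image is left untouched and only the classification result is reported back to the extension.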
As soon as an image is detected, it is initially blurred using Cascading Style Sheets (CSS) image filters. This prevents users from seeing possibly disturbing image content while the CMS's RESTful API is queried and the image's URL is transmitted to it. The response received from the CMS contains information on whether or not an image contains disturbing content and therefore needs to be disguised. In the case that an image is categorized as offensive, the response also includes a processed version of the image. + + < g r a p h i c s > + +Figure 4: Filtered image with an overlay shown on mouse-over. + +After receiving the response, the local CSS blur filter is removed. If the image was classified as containing offensive content, the original image is replaced by the processed version; otherwise, the original image is displayed. Finally, an overlay is added for every image (Fig. 4) that provides buttons for (i) allowing users to report misclassified images and (ii) toggling between the original and the processed image. If a user does not agree with the determined content classification, they can propose a more suitable one. A modal (Fig. 5) appears where the user can select whether another category is more suitable, the image is disturbing in a different way, or it should not be filtered. + +§ 3.3 CONTENT MODERATION SERVICE + +The CMS moderates communication and interactions between the browser extension, the analysis service, and the image abstraction service. Clients use it to initiate the analysis process, which consists of the following steps. First, the image is downloaded using the URL specified in the analysis request. Then, the image analyzer is queried to detect whether the image contains offensive content. Subsequently, the CMS maps the image analysis results to an image abstraction technique and forwards the image to the abstraction service for application. Finally, it notifies the client whether the image contains offensive content and, if it does, attaches the processed image to the response.
If a user decides to send feedback via the feedback modal (e.g., the chosen scenario is unsuitable), a request is sent to a feedback route and the feedback is stored for further use. + + < g r a p h i c s > + +Figure 5: A feedback pop-up enables the correction of misclassifications and feeds this information back to the server. + +§ 3.4 CONTENT ANALYSIS & ABSTRACTION SERVICE + +The CAS provides an interface for clients to analyze images for the presence of certain objects and categories. Clients send the image data and receive analysis results that are represented in the form of tags with associated scores and metadata. Tags can comprise objects displayed in an image or categories that can be associated with an image. The score describes how likely an object or category is present. The metadata can include an Axis-aligned Bounding Box (AABB) that describes the estimated position of an object within the image. + +The image analysis is performed with different machine learning models, in our case using CNNs. The output, specific to each model used, must be transformed into the unified analysis result format. This allows extending the analysis service with additional Machine Learning (ML) models. The results of the image analysis are used by the content moderator to decide what kind of content an image shows and whether an image abstraction should be applied. + +If offensive content is detected in an image, it is disguised by applying an image abstraction technique to it. The IAS provides an endpoint to apply specific techniques, such as pixelation or blur, to an image. For every technique, different presets and parameters are provided to indicate different degrees and styles of image abstraction. To query the abstraction endpoint, the IAS requires the image's data as well as the mapped abstraction technique and its preset (Sec. 4). With respect to this, the CMS performs this mapping by taking the analyzed scenario and score into account.
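The unified analysis result format described above can be illustrated with small adapter functions that convert each model's raw output into the same tag structure. The field names (`tag`, `score`, `meta`, `aabb`) are assumptions for illustration, not the service's actual schema.

```python
# Hypothetical sketch of the unified analysis result format: each model
# adapter converts its raw output into a list of tags with a score and
# optional metadata such as a bounding box.

def ssd_to_tags(detections):
    """Convert raw SSD-style detections (label, confidence, box) into
    the unified tag format returned by the analysis service."""
    return [
        {
            "tag": label,
            "score": confidence,
            "meta": {"aabb": box},  # axis-aligned bounding box (x, y, w, h)
        }
        for label, confidence, box in detections
    ]

def nsfw_to_tags(score):
    """Convert a single Open-NSFW-style scalar score into the same format."""
    return [{"tag": "nudity", "score": score, "meta": {}}]
```

Because every adapter emits the same structure, additional ML models can be plugged into the CAS without changing the downstream mapping logic.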
In response, the processed image is returned by the IAS. + +§ 4 MAPPING OF ANALYSIS RESULTS + +The analysis result for an image is a set of tags with scores. The tags describe objects that can be displayed in, or categories that can be associated with, an image. The scores describe how likely these tags are actually present in an image. In the sense of content moderation, an image abstraction means processing a user-generated image with an image abstraction technique using a specific parameter preset. To process images with the goal of reducing explicit content, one has to define a mapping from analysis results to an image abstraction technique with a specific parameter preset. + +In the proposed system, each tag is manually associated with a scenario. A scenario is a type of content that should be moderated. For this system, the scenarios nudity, violence, and medical are used. Each scenario is specified by: (i) a name, (ii) a set of tag names with score thresholds, (iii) an image abstraction technique, and (iv) three effect presets, sorted by degree of abstraction (low, medium, strong). A scenario matches an analysis result if any of the scenario's tag names is contained in the received analysis' tags and the received score is equal to or higher than the scenario's tag score threshold. Because an analysis result could match multiple scenarios, the scenarios are prioritized and the matching scenario with the highest priority is chosen. If no scenario matches, then no image abstraction is required. Otherwise, the user-defined preset is selected and used for the subsequent image abstraction step. + + < g r a p h i c s > + +Figure 6: Images with similar content (left) but different mappings (right) based on how likely they are rated as showing violent content. + +Fig. 6 shows four images with similar content but different mappings. All four images depict scenes with weapons and are categorized as showing violent content by the CAS.
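The matching rules above can be sketched as follows; the concrete scenario definitions, thresholds, and preset boundaries are illustrative, not values from the paper.

```python
# Minimal sketch of the scenario matching described above. Scenarios are
# ordered by priority (highest first); thresholds and presets are made up.

SCENARIOS = [
    {"name": "nudity",   "tags": {"nudity": 0.7},               "technique": "blur"},
    {"name": "violence", "tags": {"weapon": 0.6, "rifle": 0.6}, "technique": "pixelate"},
    {"name": "medical",  "tags": {"blood": 0.5},                "technique": "cartoon"},
]

def match_scenario(analysis_tags, scenarios=SCENARIOS):
    """Return the highest-priority scenario matched by the analysis
    result, or None if no image abstraction is required."""
    for scenario in scenarios:
        for tag in analysis_tags:
            threshold = scenario["tags"].get(tag["tag"])
            if threshold is not None and tag["score"] >= threshold:
                return scenario
    return None

def preset_for(score):
    """Pick one of the three effect presets by degree of abstraction."""
    return "strong" if score >= 0.9 else "medium" if score >= 0.75 else "low"
```

Because the outer loop runs over the prioritized scenario list, an analysis result matching several scenarios always resolves to the highest-priority one.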
The mappings used are chosen according to the score that indicates how likely these images show violent content. Fig. 6(a) is disguised using a Gaussian blur preset with a large kernel size (Fig. 6(b)). Fig. 6(d) shows an applied cartoon filter using a black & white preset with thin edges to remove color information. Fig. 6(h) uses a cartoon filter with thick edges. Fig. 6(h) shows the results of a pixelation abstraction technique for aggressive disguise. The effect that is mapped to an image can be freely customized, but different abstraction techniques are more or less suitable for certain scenarios. In particular, this work uses a pixelation technique for images showing violent content, a cartoon filter for medical content based on the work of Winnemöller et al. [35] and as suggested by the studies of Besançon et al. [1, 2], and a Gaussian blur for images that depict nudity. + +§ 5 IMPLEMENTATION ASPECTS + +In a service-based architecture similar to the one used in the content moderation scenario, it is necessary that messages are exchanged between the individual services. Therefore, each service provides a RESTful API and queries other services correspondingly. The browser extension is implemented using JavaScript (JS) and utilizes the browser's API to access and alter a website's DOM tree, to administer local storage, and to react to changes made to the website. For sending requests to other services, the fetch API is used. Filter options allow users to customize whether and to what extent suggestive images should be abstracted, and all three scenarios can be customized individually. Users can choose among three levels of abstraction or can switch off single scenarios completely. + +The CMS is implemented using Node.js and provides a RESTful API with two endpoints: one for requesting an image to be analyzed and categorized and one for sending user feedback.
A request sent to the analyze endpoint needs to include the URL of the image that should be analyzed and options that represent the filter settings made by the user. A feedback request consists of three different pieces of information: the data concerning the assessed image, the category proposed by the image analyzer service, and the category included in the user's feedback. This data is stored and can later be used as training data for the machine learning models employed during image classification. The implementation of the IAS also relies on Node.js. It provides an endpoint that accepts requests that need to include the data of the image to be abstracted and an operation that should be applied to the image. A preset, related to the desired effect, can also be sent to this endpoint as an optional parameter. The CAS is implemented using Python and Flask. It provides a RESTful API with an endpoint to start the analysis process. + +For the basic functionality of the CMS, two different kinds of neural networks are used: (i) a Single Shot MultiBox Detector (SSD) model [17] and (ii) the Yahoo Open NSFW model [19]. Single Shot MultiBox Detector is a CNN architecture that performs object recognition on images. For a given image, it returns AABBs with a scenario and a confidence score (0 to 1). The confidence score indicates to what extent a scenario is detected in the image. An existing implementation for PyTorch was used, as well as a classification model that was initially trained on the Pascal VOC dataset [4], which, however, contains very general object classes such as aeroplane, car, cow, dog, or TV monitor. Therefore, we also trained an SSD model on military-like classes of the Google Open Images Dataset [14] (such as rifle, tank, knife, missile) to be able to make predictions on somewhat realistic explicit content.
For a given image, the Yahoo Open NSFW model returns an NSFW score ranging from 0 (safe, no nudity detected) to 1 (not safe, nudity detected); an implementation and a trained model for TensorFlow are available. However, a proper threshold for this score is required. Such a threshold might differ according to the use case of the system and must be chosen carefully. + +§ 6 RESULTS AND DISCUSSION + +§ 6.1 APPLICATIONS + +The system described in this work offers advantages for the consumption, use, and eventually moderation of graphic content in different areas, which we now detail. First of all, it could assist medical and surgical education. A primary use case would be to facilitate the education of nurses and medical students (not all destined to be surgeons) by reducing affective responses and aversion when looking at images showing blood and medical acts [1, 2]. Similarly, another use case concerns the communication between surgeons and patients [1, 2]. Patients are usually informed and prepared before the planning of future surgeries, and the explanations can be facilitated by the use of images. Yet, laypeople often find looking at images depicting surgery or blood extremely difficult [6, 27]. Communication between patients and doctors could therefore be improved with such automated image processing tools. + +Furthermore, the system can be used to moderate internet forums and social networks. Nowadays, a lot of digital media, including images and videos, are shared through social media platforms such as Facebook or Instagram. While graphic content sometimes adheres to the Terms of Services (TOSs) of these platforms [5, 30], many graphic media are not accepted for ethical or legal reasons. The filtering between authorized and non-authorized content is performed by a combination of algorithms and people, or by people alone (content moderators), depending on the platform.
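The use-case-dependent choice of a threshold can be sketched as a simple lookup; the use cases and threshold values below are purely illustrative, not values chosen in the paper.

```python
# Illustrative sketch: mapping deployment use cases to NSFW-score
# thresholds (all numbers are assumptions, not measured values).

THRESHOLDS = {
    "child_protection": 0.2,  # aggressive filtering: flag early
    "social_media":     0.5,
    "moderator_tools":  0.8,  # conservative: only near-certain cases
}

def is_unsafe(nsfw_score, use_case="social_media"):
    """Return True if the score (0 = safe, 1 = not safe) reaches the
    threshold configured for the given use case."""
    return nsfw_score >= THRESHOLDS[use_case]
```

The same model output thus yields different moderation decisions depending on the deployment context, which is why the threshold must be chosen per use case.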
The software system presented in this paper could facilitate a content moderator's work and help to prevent the mental health issues that can be a consequence of looking at disturbing pictures all day [21]. The system could also alleviate the toll on volunteer moderators of platforms such as Wikipedia or Reddit. In a similar fashion, journalists and news editors might have to browse through hundreds of pieces of shocking content to illustrate their articles or better understand the case they are reporting on (e.g., war zones, disasters, accidents). Our automated tool could also be useful in this specific context. + + < g r a p h i c s > + +Figure 7: Distribution of request times per day (a) and hour (b). The center line of each box shows the mean, the boxes show the interquartile range, and the whiskers show the rest of the distribution excluding obvious outliers. + + < g r a p h i c s > + +Figure 8: Mean request times according to different tasks. + +Finally, since exposure to graphic or pornographic content has been shown to be particularly detrimental to children [16], our tool could be particularly interesting in this regard. While blocking software could be used to limit access to nudity or pornography, such software tends to also limit access to useful information (e.g., online sex education) [24], is rarely maintained or even used [3], and is unlikely to block access to content on some social media platforms. Moreover, blocking software rarely targets all sorts of graphic content. With respect to this issue, we hope that our tool can help limit the impact of unwanted graphic content, rather than eliminate it completely along with potentially useful information. + +§ 6.2 PERFORMANCE EVALUATION + +We focus here only on system performance, as the potential reduction of affective responses through image abstraction techniques has already been studied [1, 2, 37].
The system's performance was evaluated by timing the different tasks involved in the process of content moderation and logging metadata for each request and processed image (time stamp, image resolution, image transformation). The tasks timed on the CMS were (i) fetching the image, (ii) performing the image analysis, and (iii) performing the image transformation by abstraction. The CMS, CAS, and IAS are hosted on a single dedicated GPU server without significant network overhead. Thus, the network request times between clients and the services are assumed to be independent of our system and are not considered. To evaluate the performance in a way that considers all required tasks equally, only requests that led to an image transformation (an unsafe image was detected) were considered. The dedicated GPU server is equipped with a Xeon E5-2637 v4 3.5 GHz processor (8 cores), 64 GB RAM, and an NVIDIA Quadro M6000 GPU with 24 GB VRAM. + +Over a week of testing the extension, about 35000 requests involving an image transformation were logged in total. Fig. 7(a) and Fig. 7(b) show the distribution of total request times per day and hour. The times are largely independent of the day and hour, with slight variances that could be explained by a varying load on the server or a different number of requests arriving within a short period of time. Fig. 8 shows the mean time required for each task per day. The image analysis requires $\approx 75\%$ of the mean request time, with some outliers on day 6, where fetching the images suddenly takes, on average, as much time as analyzing them. Fig. 9(a) shows the resolution of images over the time required for analysis. + +We further tested whether images of high resolution impact analysis performance. The documentation of the CNNs used describes that image data is propagated through the networks at a constant resolution, i.e., a downsampling is required before propagation through the CNNs. Fig. 9(a) shows that images have similar analysis times independently of their resolution and the required downsampling step. A further question is whether very small images (such as icons) cause a performance overhead if they appear on websites very often. Images that are smaller than $32 \times 32$ pixels are highlighted in red in Fig. 9(a). One can see that they need inference times similar to those of all other images. Further analysis of the statistics also indicates that they account for only $\approx 5\%$ of all images and less than one percent of the total request time. A similar analysis was performed for the image transformation task. Relating image resolution to image transformation time (Fig. 9(b)) shows a linear dependency between image resolution (as the total number of pixels) and transformation time. However, this does not severely impact the overall performance, as image analysis is slower by a factor of about 10. Small images are highlighted again; they account for only $\approx 5\%$ of the count and about $\approx 4\%$ of the total time. + + < g r a p h i c s > + +Figure 9: Relating the resolution of input images with analysis inference time (a) and image transformation time (b). Images smaller than $32 \times 32$ pixels are highlighted in red. + + < g r a p h i c s > + +Figure 10: Relating requests per minute and the total request time. + +Finally, we evaluated whether a high load (possibly caused by a high number of requests in a short time) causes higher total request times. To this end, each request is plotted as a point relating the number of requests in that minute to its required time (Fig. 10). The plot does not show any strong correlation between the number of requests per minute and the total request time; possibly, the load generated by the requests was simply not high enough.
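The per-task bookkeeping behind Fig. 8 can be sketched as follows; the record structure and the numbers in the usage example are illustrative, not the logged measurements.

```python
# Sketch of the evaluation bookkeeping: given logged per-request task
# timings, compute each task's share of the total request time
# (hypothetical record structure).

def task_shares(records):
    """records: list of dicts with per-task timings in milliseconds,
    keyed by task name. Returns each task's fraction of the total."""
    tasks = ("fetch", "analyze", "transform")
    totals = {t: sum(r[t] for r in records) for t in tasks}
    grand_total = sum(totals.values())
    return {t: totals[t] / grand_total for t in tasks}
```

With such an aggregation, a statement like "image analysis requires about 75% of the mean request time" follows directly from the logged timings.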
+ +§ 6.3 LIMITATIONS + +Applying specific filters to weaken affect when looking at medical images might work differently or not at all depending on the user and the specific image at hand. More generally, it is impossible to find a definition of offensive content that fits all users. What is perceived as "graphic" highly depends on the perception, age, views, cultural background, and personal history of the user, which results in many different potential use cases for each individual user. Even if a clear definition were possible, modern computer vision approaches are not able to correctly recognize offensive content all of the time. Additionally, detecting objects that are known to be offensive is not enough. The context in which these objects appear, as well as the overall image composition, could completely change the meaning. + +We do not have the capacity to train accurate neural networks for real use cases because there are no acceptable, publicly available datasets of offensive content. Even with a proper dataset, it is unlikely that one could train a perfect neural network that classifies all images correctly and detects offensive content every time. + +Regarding the browser extension, it is difficult to even detect all the images on websites, since there are a number of different ways to integrate an image into a web page, e.g., through custom HTML elements or extensive JS usage. If images are loaded asynchronously via JS and many images change simultaneously, the extension is not able to react fast enough to all of the changes, resulting in "unobfuscated" or "non-abstracted" images. Moreover, JS is executed in a single thread in all established web browsers, which increases the occurrence of timing-related problems. Additionally, a very low bandwidth could slow down the processing of images. The impact would be comparatively small, however, because not much additional data is sent and the client still downloads each image only once.
+ +§ 7 CONCLUSIONS AND FUTURE WORK + +This paper presents a service-based approach to facilitate the consumption of digital graphic images online. To achieve this, an automatic analysis and classification of possibly offensive content is performed using services, and, based on the results, image abstraction techniques are applied with varying levels of abstraction. This functionality is accessed and configured via a browser extension that is supported by most modern web browsers. The presented content moderation approach has various applications, such as reducing affective responses during medical education, allowing less distressing browsing of social media, or enabling safer browsing for child protection. + +Regarding future work, the user experience of the extension could be improved as follows. Modern object recognition algorithms are not only able to detect certain objects in images but are also able to locate them. With respect to this, image abstraction techniques can be applied to segments of the image to maintain the context and only abstract the sensitive image regions. This might support users in identifying whether an image was classified correctly. As an alternative, the image abstraction techniques could be applied to the complete image, but the user could be given the option to interactively reveal parts of the image using lens-based interaction metaphors.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/DQHaCvN9xd/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/DQHaCvN9xd/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..6969fd4103fba5551104ba644ec961f2aba2b600 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/DQHaCvN9xd/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,323 @@ +## Visual-Interactive Neural Machine Translation + +Category: Research + +![01963ea6-6fc2-77d1-8244-0901581a3a6c_0_221_330_1354_474_0.jpg](images/01963ea6-6fc2-77d1-8244-0901581a3a6c_0_221_330_1354_474_0.jpg) + +Figure 1: The main view of our neural machine translation system: (A) Document View with sentences of the document for the current filtering settings, (B) Metrics View with sentences of the filtering result highlighted, and (C) Keyphrase View with a set of rare words that may be mistranslated. The Document View initially contains all sentences automatically translated with the NMT model. After filtering with the Metrics View and Keyphrase View, a smaller selection of sentences is shown. Each entry in the Document View provides information about metrics, the correction state, and functionality for modification (on the right side next to each sentence). The Metrics View represents each sentence as one path and shows values for different metrics (e.g., correlation, coverage penalty, sentence length). Green paths correspond to sentences of the current filtering. One sentence is highlighted (yellow) in both the Metrics View and the Document View. 
+ +## Abstract + +We introduce a novel visual analytics approach for analyzing, understanding, and correcting neural machine translation. Our system supports users in automatically translating documents using neural machine translation and in identifying and correcting possibly erroneous translations. User corrections can then be used to fine-tune the neural machine translation model and automatically improve the whole document. While translation results of neural machine translation can be impressive, there are still many challenges, such as over- and under-translation, domain-specific terminology, and handling long sentences, making it necessary for users to verify translation results; our system aims at supporting users in this task. Our visual analytics approach combines several visualization techniques in an interactive system. A parallel coordinates plot with multiple metrics related to translation quality can be used to find, filter, and select translations that might contain errors. An interactive beam search visualization and a graph visualization for attention weights can be used for post-editing and understanding machine-generated translations. The machine translation model is updated from user corrections to improve the translation quality of the whole document. We designed our approach for an LSTM-based translation model and extended it to also include the Transformer architecture. We show, for representative examples, possible mistranslations and how our system can be used to deal with them. A user study revealed that many participants favor such a system over manual text-based translation, especially for translating large documents.
+ +Index Terms: Human-centered computing-Visualization-Visualization application domains-Visual analytics; Human-centered computing-Visualization-Visualization systems and tools; Computing methodologies-Artificial intelligence-Natural language processing—Machine translation + +## 1 INTRODUCTION + +Machine learning and especially deep learning are popular and rapidly growing fields in many research areas. The results created with machine learning models are often impressive but sometimes still problematic. Currently, much research is being performed to better understand, explain, and interact with these models. In this context, visualization and visual analytics methods are suitable and increasingly used to explore different aspects of these models. Available techniques for visual analytics in deep learning were examined by Hohman et al. [16]. While there is a large amount of work available on explainability in computer vision, less work exists for machine translation. + +As it becomes increasingly important to communicate in different languages, and since information should be available to a huge range of people from different countries, many texts have to be translated. Doing this manually takes much effort. Nowadays, online translation systems like Google Translate [13] or deepL [10] support humans in translating texts. However, the translations generated that way are often not as expected, or not how someone familiar with both languages might translate them. They may also not express a person's translation style or use the correct terminology of a specific domain or occasion. Often, more background knowledge about the text is required to translate documents appropriately. + +With the introduction of deep learning methods, the translation quality of machine translation models has improved considerably in the last years. However, there are still difficulties that need to be addressed.
Common problems of neural machine translation (NMT) models are, for instance, over- and under-translation [35], where words are translated repeatedly or not at all. Handling rare words [20], which might occur in specific documents, and long sentences are also issues. Domain adaptation [20] is another challenge; especially documents from specific domains such as medicine, law, or science require high-quality translations [7]. As many NMT models are trained on general data sets, their translation performance is worse for domain-specific texts. + +![01963ea6-6fc2-77d1-8244-0901581a3a6c_1_144_132_1494_710_0.jpg](images/01963ea6-6fc2-77d1-8244-0901581a3a6c_1_144_132_1494_710_0.jpg) + +Figure 2: The detailed view for a selected sentence consists of the Sentence View (A), the Attention View (B), and the Beam Search View (C). The Sentence View allows text-based modifications of the translation. The Attention View shows the attention weights (represented by the lines connecting source words with their translation) for the translation. The Beam Search View provides an interactive visualization that shows different translation possibilities and allows exploration and correction of the translation. All three areas are linked. + +If high-quality translations for large texts are required, it is insufficient to use machine translation models alone. These models are computationally efficient and able to translate large documents in little time, but they may create erroneous or inappropriate translations. Humans are very slow compared to these models, but they can detect and correct mistranslations when familiar with the languages and the domain terminology. In a visual analytics system, both of these capabilities can be combined. Such a system should provide the translations from an NMT model and possibilities for users to visually explore translation results to find mistranslated sentences, correct them, and steer the machine learning model.
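Metrics such as the coverage penalty shown in the Metrics View (Figure 1) can hint at such over- and under-translation. One common formulation, following Wu et al.'s GNMT (whether the presented system uses exactly this form is an assumption), penalizes source tokens that receive little total attention mass:

```python
import math

# Sketch of a GNMT-style coverage penalty: source tokens whose summed
# attention mass stays below 1 drive the penalty negative, hinting at
# possible under-translation. The beta weight is a tunable assumption.

def coverage_penalty(attention, beta=0.2):
    """attention: matrix [target_len][source_len] of attention weights
    (each row is one decoding step). Returns beta * sum_i log(min(m_i, 1)),
    where m_i is the total attention mass on source token i."""
    source_len = len(attention[0])
    penalty = 0.0
    for i in range(source_len):
        mass = sum(step[i] for step in attention)  # attention mass on token i
        penalty += math.log(min(mass, 1.0))
    return beta * penalty
```

A fully covered source sentence yields a penalty of zero, while neglected source tokens make the value increasingly negative, which is what makes the metric useful for filtering suspicious sentences.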
+ 

We have developed a visual analytics approach to reach the goals outlined above. First, our system performs automatic translation of a whole, possibly large, document and shows the result in the Document View (Figure 1). Users can then explore and modify the document in different views [28] (Figure 2) to improve translations and use these corrections to fine-tune the NMT model. We support different NMT architectures and use both an LSTM-based and a Transformer architecture.

So far, visual analytics systems for deep learning have mostly been available for computer vision and some text-related areas, have focused on smaller parts of machine translation [22, 27], or have been intended for domain experts to gain insight into the models or to debug them [32, 33]. This work contributes to visualization research by introducing the application domain of NMT using a user-oriented visual analytics approach. In our system, we employ different visualization techniques adapted for use with NMT. Our parallel coordinates plot (Figure 1 (B)) supports the visualization of different metrics related to text quality. The interaction techniques in our graph-based visualization for attention (Figure 2 (B)) and tree-based visualization for beam search (Figure 2 (C)) are specifically designed for text exploration and modification. They are strongly coupled to the underlying model. Furthermore, our system has a fast feedback loop and allows interaction in real time. We demonstrate our system's features in a video and will provide the source code of our system with the published paper.

## 2 RELATED WORK

This section first discusses visualization and visual analytics approaches for language translation in general and then visual analytics of deep learning for text. Afterward, we provide an overview of work that combines both areas in the context of NMT. 
+ 

Many visualization techniques and visual analytics systems exist for text; see Kucher and Kerren [21] for an overview. However, there is little work on exploring and modifying translation results. An interactive system to explore and correct translations was introduced by Albrecht et al. [1]. While the translation was created by machine translation, their system did not use deep learning. Collins et al. [9] used lattice structures with uncertainty visualization for machine translation. They created a lattice structure from beam search in which the path of the best translation result is highlighted and can be corrected. We also use visualization for beam search, but ours is based on a tree structure.

Recently, much research has been done on visualizing deep learning models to understand them better. Multiple surveys [6, 12, 16, 23, 45] are available that summarize existing visual analytics systems. It is noticeable that not much work exists related to text-based domains. One of the few examples is RNN-Vis [24], a visual analytics system designed to understand and compare models for natural language processing by considering hidden state units. Karpathy et al. [18] explore the prediction of Long Short-Term Memory (LSTM) models by visualizing activations on text. Heatmaps are used by Hermann et al. [15] in order to visualize attention for machine-reading tasks. To explore the training process and to better understand how the network is learning, RNNbow [4] can be used to visualize the gradient flow during backpropagation training in Recurrent Neural Networks (RNNs).

While the previous systems support the analysis of deep learning models for text domains in general, approaches exist to specifically explore and understand NMT. Visualizations of attention were first introduced by Bahdanau et al. 
[2]; they showed the contribution of source words to translated words within a sentence, using an attention weight matrix. Later, Rikters et al. [27] introduced multiple ways to visualize attention and implemented exploration of a whole document. They visualize attention weights with a matrix and a graph-based visualization connecting source words and translated words by lines whose thickness represents the attention weight. Bar charts give an overview for a whole document for multiple attention-based metrics that are supposed to correlate with the translation quality. Interactive ordering of these metrics and sentence selection is possible. However, it is difficult for large documents to compare the different metrics as each bar chart is horizontally too large to be entirely shown on a display. The only connection between different bar charts is that the bars for the currently selected sentence are highlighted. Our system also uses such a metrics approach, but instead of using bar charts, a parallel coordinates plot was chosen for better scalability, interaction, and filtering. + +An interactive visualization approach for beam search is provided by Lee et al. [22]. The interaction techniques supported by their tree structure are quite limited. It is possible to expand the structure and to change attention weights. However, it is not possible to add unknown words, and no sub-word units are considered. Furthermore, the exploration is limited to single sentences instead of a whole document. + +With LSTMVis, Strobelt et al. [33] introduced a system to explore LSTM networks by showing hidden state dynamics. Among other application areas, their approach is also suitable for NMT. While our approach is rather intended for end-users, LSTMVis has the goal of debugging models by researchers and machine learning developers. With Seq2Seq-Vis, Strobelt et al. 
[32] present a system that uses an attention view similar to ours, and they also provide an interactive beam search visualization. However, their system is designed to translate single sentences, and no model adaptation is possible to improve translation quality. Their system was designed for debugging and for gaining insight into the models.

Since there are different architectures available for generating translations [43], specific visualization approaches may be required. Often, LSTM-based architectures are used. Recently, the Transformer architecture [36] gained popularity; Vig [37, 38] visually explores its self-attention layers, and Rikters et al. [26] extended their previous approach for debugging documents to Transformer-based systems.

All these systems provide different, possibly interactive, visualizations. However, their goal is rather to debug NMT models than to support users in translating entire documents, or they are limited to small aspects of the model. Additionally, they are usually designed for one specific translation model. None of these approaches provide extended interaction techniques for beam search or interactive approaches to iteratively improve the translation quality of a whole document.

## 3 VISUAL ANALYTICS APPROACH

Our visual analytics approach allows the automatic translation, exploration, and correction of documents. It can be split into multiple parts. First, a document is automatically translated from one language into another; then, users identify mistranslated sentences in the document, and individual sentences can be explored and corrected. Finally, a model can be fine-tuned and the document retranslated.

Our approach is strongly linked to machine data processing and follows the visual analytics process presented by Keim et al. [19]. We use visualizations for different aspects of NMT models, and users can interact with the provided information. 
+ 

### 3.1 Requirements

For the development of our system, we followed the nested model by Munzner [25]. The main focus was on the outer parts of the model, including identifying the domain issues, designing the features, and implementing the visualizations and interactions. Additionally, we used a process similar to that of Sedlmair et al. [29], especially focusing on the core phases. Design decisions were made in close cooperation with deep learning and NMT experts, who are also co-authors of this paper. The visual analytics system was implemented in a formative process that included these experts. Our system went through an iterative development that included multiple meetings with our domain experts. Together, we identified the requirements listed in Table 1. After implementing the basic prototype of the system, we demonstrated it to further domain experts. At a later stage, we performed a small user study with visualization and machine translation experts. For our current prototype, we added functionality recommended by these experts.

Table 1: Requirements for our visual analytics system and their implementations in our approach.

- **R1** Automatic translation - A document is translated automatically by an NMT model.
- **R2** Overview - The user can see the whole document as a list of all source sentences and their translations (Figure 1 (A)). Additionally, an overview of the translation quality is provided in the Metrics View, which reveals statistics about different metrics encoded as a parallel coordinates plot (Figure 1 (B)) showing an overall quality distribution.
- **R3** Find, filter, and select relevant sentences - Interaction in the parallel coordinates plot allows filtering according to different metrics and selecting specific sentences. It is also possible to select one sentence and order the other sentences of the document by similarity to check whether similar sentences contain similar errors. Additionally, our Keyphrase View (Figure 1 (C)) supports selecting sentences containing specific keywords that might be domain-specific and rarely used in general documents.
- **R4** Visualize and modify sentences - For each sentence, a beam search and attention visualization (Figure 2) can be used to interactively explore and adapt the translation result in order to correct erroneous sentences and explore how a translation failed. It is also possible to explore alternative translations.
- **R5** Update model and translation - The model can be fine-tuned using the user inputs from translation corrections; this is especially useful for domain adaptation. Afterward, the document is retranslated with the updated model in order to improve the translation result (the result is visualized similarly to Figure 9).
- **R6** Generalizability and extensibility - While we initially designed our visualization system for one translation model, we soon noticed that our approach should handle data from other translation models as well. Therefore, our approach should be easily adaptable to new models to cope with the dynamic development of new deep learning architectures. Our general translation and correction process is kept largely model-agnostic so that it can be applied to a variety of models. Only model-specific visualizations have limitations and need to be adapted or exchanged when a different translation architecture is used.
- **R7** Target group - The target group for our system is quite broad and includes professional translators or students who need to translate documents. However, the system should also be usable by other people interested in correcting and possibly better understanding the results of automated translation.
+ +### 3.2 Neural Machine Translation + +The goal of machine translation is to translate a sequence of words from a source language into a sequence of words in a target language. Different approaches exist to achieve this goal [34,44]. + +Usually, neural networks for machine translation are based on an encoder-decoder architecture. The encoder is responsible for transforming the source sequence into a fixed-length representation known as a context vector. Based on the context vector, the decoder generates an output sequence where each element is then used to generate a probability distribution over the target vocabulary. These probabilities are then used to determine the target sequence; a common method to achieve this uses beam search decoding [14]. + +Although different NMT models vary in their architecture, the previously described encoder-decoder design should apply to a wide range of architectures and new approaches that may be developed in the future (R6). In this work, we explored an LSTM architecture with attention and extended our approach to include the Transformer architecture, thus verifying its ability to generalize. + +One of the first neural network architectures for machine translation consists of two RNNs with LSTM units [5]. To handle long sentences, the attention mechanism for NMT [2] was introduced. It allows sequence-to-sequence models to attend to different parts of the source sequence while predicting the next element of the target sequence by giving the decoder access to the encoder's weighted hidden states. During decoding, the hidden states of the encoder together with the hidden state of the decoder for the current step are used to compute the attention scores. Finally, the context vector for the current step is computed as a sum of the encoder hidden states, weighted by the attention scores. The attention weights can be easily visualized and used to explain why a neural network model predicted a certain output. 
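As a rough sketch of a single attention step (a toy example with two-dimensional hidden states and a plain dot-product score; attention-based NMT models such as [2] use learned scoring functions instead):

```python
import math

def attention(decoder_state, encoder_states):
    """Compute attention weights and the context vector for one decoding step."""
    # Score each encoder hidden state against the current decoder state
    # (here a simple dot product; Bahdanau-style attention uses a small MLP).
    scores = [sum(d * e for d, e in zip(decoder_state, enc)) for enc in encoder_states]
    # Softmax turns the scores into a weight distribution over source positions.
    exps = [math.exp(s - max(scores)) for s in scores]
    weights = [e / sum(exps) for e in exps]
    # The context vector is the attention-weighted sum of the encoder hidden states.
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
               for i in range(len(encoder_states[0]))]
    return weights, context

# Toy example: three source positions, 2-dimensional hidden states.
enc = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
dec = [1.0, 0.0]
weights, context = attention(dec, enc)
```

The resulting `weights` are exactly the per-source-word values that the visualizations described in the following sections display.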
Furthermore, attention weights can be understood as a soft alignment between source and target sequences. For each translated word, the weight distribution over the source sequence signifies which source words were most important for predicting this target word. The Transformer architecture was recently introduced by Vaswani et al. [36] and gained much popularity. It uses a more complex attention mechanism with multi-head attention layers; in particular, self-attention plays an important role in the translation process. We verify its applicability to our approach and visualize only the part of the attention information that showed an alignment between source and target sentences comparable to the LSTM model.

### 3.3 Exploration of Documents

After uploading a document to our system, it is translated by an NMT model (R1). The main view of our approach then shows information about the whole document (R2). This includes a list of all sentences in the Document View (Figure 1 (A)) and an overview of the translation quality in the Metrics View (Figure 1 (B)). Using the Metrics View and Keyphrase View (Figure 1 (C)), sentences can be filtered to detect possibly mistranslated sentences that can be flagged by the user (R3). Once a mistranslated sentence is found, it is also possible to filter for sentences containing similar errors (R3).

## Metrics View

In the Metrics View, a parallel coordinates plot (Figure 1 (B)) is used to detect possibly mistranslated sentences by filtering sentences according to different metrics (R3). For instance, it is possible to find sentences that have low translation confidence.

Multiple metrics are relevant for identifying low-quality translations; we use the following metrics in our approach:

- Confidence: A metric that considers the attention distribution for input and output tokens; it was suggested by Rikters et al. [27]. Here, a higher value is usually better.

- Coverage Penalty: This metric by Wu et al. 
[42] can be used to detect sentences where words did not get enough attention. Here, a lower value is usually better.

- Sentence length: The sentence length (the number of words in a source sentence) can be used to filter very short or long sentences. For example, long sentences might be more likely to contain errors.

- Keyphrases: This metric can be used to filter for sentences containing domain-specific words. As these words are rare in the training data, the initial translation of sentences containing them is likely erroneous. The values used for this metric are the number of occurrences of keyphrases in a sentence weighted by the frequency of the keyphrases in the whole document.

- Sentence similarity: Optionally, for a given sentence, the similarity to all other sentences can be determined using cosine similarity. This helps to find sentences whose errors are similar to those of a detected mistranslated sentence.

- Document index: The document index allows the user to sort sentences according to their original order in the document, which can be especially important for correcting translations where the context of sentences is relevant. Furthermore, this metric might also show trends like consecutive sentences with low confidence.

In contrast to Rikters et al. [27], who use bar charts to visualize different metrics, we chose a parallel coordinates plot [17]. Each sentence can be mapped to one line in such a plot, and different metrics can be easily compared. These plots are useful for an overview of different metrics and to detect outliers and trends. Interactions with the metrics, such as highlighting lines or choosing filtering ranges, are supported. It can be expected that sentences filtered for both low confidence and high coverage penalty are more likely to be poorly translated than sentences falling into only one of these categories. 
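Two of these metrics can be sketched as follows. This is a minimal illustration, not the system's implementation: the `beta` weight and the clamping constant are assumptions that only approximate the formulation of Wu et al. [42], and the similarity metric assumes sentence vectors from some embedding:

```python
import math

def coverage_penalty(attn, beta=0.2):
    """Coverage penalty in the spirit of Wu et al.: penalize source positions
    whose total received attention stays below 1 (hinting at under-translation)."""
    # attn[t][s] is the attention weight of target step t on source position s.
    total = 0.0
    for s in range(len(attn[0])):
        received = sum(step[s] for step in attn)
        total += math.log(max(min(received, 1.0), 1e-10))
    return beta * total  # 0 is best; more negative means worse coverage

def sentence_similarity(a, b):
    """Cosine similarity between two sentence embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

A fully covered source sentence yields a penalty of 0, while a source word that receives almost no attention pulls the value below 0, which is why low values flag candidates for under-translation.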
+ 

## Keyphrase View

It is possible to search for sentences according to keyphrases by selecting them in the Keyphrase View (Figure 1 (C)) (R3). This can be visualized as shown in Figure 4. Keyphrases are domain-specific words that were rarely included in the training data used for our model. As the model does not have enough knowledge of how to deal with these words, it is important to verify whether the respective sentences were translated correctly. In addition to automatically determined keyphrases, users can manually specify further keyphrases for sentence filtering.

## Document View

A list of all the source sentences in a document and a list of their translations are shown in the Document View (Figure 1 (A)) (R2). Each entry in this list can be marked as correct or flagged (Figure 4) for later correction. A small histogram shows an overview of the previously mentioned metrics. If a sentence is modified, either through user correction or retranslation by the fine-tuned model, changes in the sentences are highlighted (Figure 9). Both the Metrics View and the Keyphrase View are connected via brushing and linking [39] to allow filtering for sentences that are likely to be mistranslated and should be examined and possibly corrected. Additionally, sentences can be sorted into a list by similarity to a user-selected reference sentence. In this list, sentences can be selected for further exploration and correction in more detailed sentence-based views.

### 3.4 Exploration and Correction of Sentences

After filtering and selection, a sentence can be further analysed with the Sentence, Attention, and Beam Search Views (Figure 2) and subsequently corrected (R4). These views are shown simultaneously to allow interactive exploration and modification of translations.

Note that, on the sentence level, we use subword units to handle the problem of rare words, which often occurs in domain-specific documents, and to avoid unknown words. 
We use the Byte Pair Encoding (BPE) method proposed by Sennrich et al. [31] for compressing text by recursively joining frequent pairs of characters into new subwords. This means that instead of choosing whole words to build the source and target vocabulary, words are split into subword units consisting of possibly multiple characters. This method reduces model size, complexity, and training time. Additionally, the model can handle unknown words by splitting them into their subword units. As these subword units are known beforehand, they do not require the introduction of an "unknown" token for translation. Thus, we can adapt the NMT model to any new domain, including those with vocabulary not seen at training time.

![01963ea6-6fc2-77d1-8244-0901581a3a6c_4_150_148_723_274_0.jpg](images/01963ea6-6fc2-77d1-8244-0901581a3a6c_4_150_148_723_274_0.jpg)

Figure 3: Attention visualization: (top) when hovering over a source word (here: 'verarbeiten'), translated words influenced by the source are highlighted, and (bottom) when hovering over a translated word (here: 'process'), source words that influence the translation are highlighted according to attention weights.

## Sentence View

Similar to common translation systems, the Sentence View (Figure 2 (A)) shows the source sentence and the current translation. It is possible to manually modify the translation, which in turn updates the content in the other sentence-based views. After adding a new word in the text area, the translation with the highest score is used for the remainder of the sentence. This supports a quick text-based modification of a translation without explicit use of visualizations.

## Attention View

The Attention View depends on the underlying NMT model. It is intended to visualize the relationship between words of the source sentence and the current translation as a weighted graph (Figure 2 (B)); such a technique was also used by Strobelt et al. [32]. 
Both source and translated words are represented by nodes; links between such words show the attention weights encoded by the thickness of the connecting lines (we use a predefined threshold to hide lines for very low attention). These weights correlate with the importance of source words for the translated words. Hovering over a source word highlights connecting lines to translated words starting at this word. In addition, the translated words are highlighted by transparency according to the attention weights (Figure 3 top). While this shows how a source word contributes to the translation, it is also possible to show for translated words how source words contribute to the translation (Figure 3 bottom). This interactive visualization supports users in understanding how translations are generated from the source sentence words. On the one hand, such a visualization helps users gain insight into the NMT model; on the other hand, it helps detect issues in generated translations. The links between source sentence and translation can be explored to identify anomalies such as under- or over-translation. Missing attention weights can be an indication of under-translation, and links to multiple translated words an indication of over-translation. In our case study in Section 4, examples of these cases are presented. While this technique specifically employs information from the attention-based LSTM model, we use it in an adapted form for the Transformer architecture (see Section 4.4). A visualization more tailored to Transformers, also including self-attention and attention scores from multiple decoder layers, could provide additional information. Further models may need different visualizations for a generalized use of our approach, employing model-specific information. 

## Beam Search View

While the Attention View can be used to identify positions with mistranslations, the Beam Search View supports users in interactively modifying and correcting translations. 
The Beam Search View visualizes multiple translations created by the beam search decoding as a hierarchical structure (see Figure 2 (C)). This interactive visualization can be used for post-editing the translations.

The simplest way of predicting a target sequence is greedy decoding, where at every time step, the token with the highest output probability is chosen as the next predicted token and fed to the decoder in the next step. This is an efficient, straightforward way of generating an output sequence. However, another translation may be better overall, despite having lower probabilities for the first words. Beam search decoding [14] is a compromise between exhaustive search and greedy decoding, often used for generating the final translation. A fixed number $k$ of hypotheses is considered at each time step. For each hypothesis considered, the NMT model outputs a probability distribution over the target vocabulary for the next token. These hypotheses are sorted by the probability of the latest token, and up to $k$ hypotheses remain in the beam. Hypotheses ending with the End-of-Sequence (EOS) token are filtered out and put into the result set. Once $k$ hypotheses are in the result set, the beam search stops, and the final hypotheses are ranked according to a scoring function that depends on the attention weights and the sentence length.

For visualization, we use an approach similar to that of Strobelt et al. [32] and Lee et al. [22]: a tree structure reflects the inherently hierarchical nature of the beam search decoding. This way, translation hypotheses starting with the same prefixes are merged into one branch of this hierarchical structure. The root node of each translation is associated with a Start-of-Sequence (SOS) token and all leaf nodes with an End-of-Sequence (EOS) token. 
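The decoding loop described above can be sketched as follows. This is a toy illustration, not the system's implementation: `next_token_probs` stands in for the NMT model's output distribution, and the final attention- and length-based rescoring is omitted:

```python
import heapq
import math

def beam_search(next_token_probs, k=3, max_len=10, sos="SOS", eos="EOS"):
    """Toy beam search: keep the k most probable partial hypotheses per step."""
    beam = [(0.0, [sos])]          # (cumulative log probability, token sequence)
    finished = []
    while beam and len(finished) < k:
        candidates = []
        for logp, seq in beam:
            for token, p in next_token_probs(seq).items():
                candidates.append((logp + math.log(p), seq + [token]))
        # Keep only the k best expansions; completed hypotheses leave the beam.
        beam = []
        for logp, seq in heapq.nlargest(k, candidates):
            if seq[-1] == eos or len(seq) >= max_len:
                finished.append((logp, seq))
            else:
                beam.append((logp, seq))
    return sorted(finished, reverse=True)[:k]

# Stand-in "model": after SOS it prefers 'a' over 'b'; afterwards it always ends.
def next_token_probs(seq):
    if seq == ["SOS"]:
        return {"a": 0.6, "b": 0.4}
    return {"EOS": 1.0}

hypotheses = beam_search(next_token_probs, k=2)
```

Every surviving hypothesis shares its prefix with the others it branched from, which is precisely the structure the tree visualization merges into common branches.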
Compared to visualizing a list of different suggested translations, a tree is more compact, and it is easier to recognize where the commonalities of different translation variants lie.

Each term of the translation is visualized by a circle that represents the actual node and a corresponding label. The color of a circle is mapped to the word's output probability. This supports users in identifying areas with a lower probability that might require further exploration. It can be interpreted as the uncertainty of the word prediction. In our visualization, we differentiate between nodes that represent subwords and whole words. Solid lines connect subwords, and their nodes are placed closer together to form a unit. In contrast, the connections to whole words are represented by dashed lines.

The beam search visualization can be used to navigate within a translation and edit it (Figures 7 and 8). The interaction can be performed either with the mouse or the keyboard; the latter is more efficient for fast post-editing. The view supports standard panning-and-zooming techniques that are especially needed to explore long sentences, as they do not fit on common displays. For navigation within the tree, arrow keys can be used to move through a sentence, or nodes can be selected with the mouse cursor. If the translation of the current node's child node is not satisfactory, the node can be expanded to show suggestions for correction. If the user selects a suggested word, the beam search runs with a lexical prefix constraint, and the tree structure gets updated. If the suggested words are not suitable, a custom correction can be performed by typing an arbitrary word that fits better. The number of suggested translations is initially set to three and can be increased by adapting the beam size. Increasing this value may yield better translations and provides more alternative translations. However, the higher the value is, the more information has to be shown in the visualization. 
By hovering over and selecting elements in this view, corresponding elements of the Attention View and Sentence View are shown for reference.

![01963ea6-6fc2-77d1-8244-0901581a3a6c_5_151_147_712_573_0.jpg](images/01963ea6-6fc2-77d1-8244-0901581a3a6c_5_151_147_712_573_0.jpg)

Figure 4: Main view of the system: the Document View shows some flagged sentences for correction. Additionally, the keyphrase filter (top right) is active: all sentences containing the keyphrase 'MÜ' are shown in the Metrics and Document Views. It is visible that 'MÜ' is never correctly translated to 'MT'.

### 3.5 Model Fine-tuning and Retranslation

After correcting the translation of multiple sentences, the user corrections can be used to fine-tune the NMT model and automatically improve the translation of the sentences that have not yet been verified (R5). This approach can be applied repeatedly to improve the document's translation quality, especially for domain-specific texts.

Documents often belong to a specific domain, e.g., legal, medical, or scientific. Each domain has specific terminology, and one word may even refer to different concepts in different domains. As such, the ability of NMT models to handle different types of domains is an important research topic. Domain adaptation refers to techniques allowing NMT models trained on general training data, also called out-of-domain data, to adapt to domain-specific documents, called in-domain data. This is useful since there may be an abundant amount of general training data, but domain-specific data may be rare. Since NMT models need a large amount of training data to achieve good translation quality, the out-of-domain data can be used to train a baseline model. The model can then be adapted using the in-domain data (R5), which typically contains a smaller number of sentences: in our system, we use the user-corrected sentences. This mitigates the problem of training an NMT model in a low-resource scenario where little data exists for a given domain. 
In our approach, we continue training on the in-domain data in a reduced fashion by freezing certain model weights (for the LSTM-based model, both the decoder and the LSTM layers of the encoder are trained; for the Transformer, only the decoder is trained).

## 4 CASE STUDY

As a typical use case, we take the German Wikipedia article on machine translation (Maschinelle Übersetzung) [41] as a document for translation into English. In the following, we show how to use our system to improve the translation quality of the document. Please see our accompanying video for a demonstration with the Transformer model. The examples in the following were created with both the LSTM and Transformer models. We trained our models on a general data set: the German-to-English data set from the 2016 ACL Conference on Machine Translation (WMT'16) [3] shared news translation task. This is a popular data set for NMT, used, for instance, by Denkowski and Neubig [11] and Sennrich et al. [30].

![01963ea6-6fc2-77d1-8244-0901581a3a6c_5_995_149_583_326_0.jpg](images/01963ea6-6fc2-77d1-8244-0901581a3a6c_5_995_149_583_326_0.jpg)

Figure 5: Example of over-translation: 'Examples' is placed twice as the translation for the German word 'Beispiele'. The Beam Search View (right) shows possible alternative translations. However, only after increasing the beam size to four does the expected translation appear.

### 4.1 Exploration of Documents

After uploading a document (R1), we look at the parallel coordinates plot (R2) for our initial translations and the list of keyphrases in order to detect possible mistranslations (R3). In the Keyphrase View, we notice the domain-specific term 'MÜ' occurring very often. This term is the German abbreviation for 'machine translation' and should therefore be translated as 'MT'. However, none of the translations use the correct term (Figure 4). Additionally, one could select and verify sentences with low confidence or with a high coverage penalty. 
Here, we especially notice the under-translation of some sentences. After examining a translation in the Document View, users can decide whether it is correct (R2). If the users do not agree with the translation, they can set a flag (Figure 4) to modify the translation later or switch to the sentence-based views to correct it (R4).

### 4.2 Exploration and Correction of Sentences

After setting flags for multiple sentences (Figure 4), or after deciding to explore or modify a sentence, a more detailed view can be shown for each sentence to explore and improve its translation interactively (Figure 2) (R4).

Over-translation is a common issue of NMT [20]. In the Attention View, it is possible to see what went wrong by identifying where the attention weights connect the source and destination words.

For both models, we notice such cases for very short sentences. Figure 5 shows, for the German heading 'Beispiele' (en: 'Examples'), a translation that uses the translated word multiple times. Also, the suggested alternatives use this term more than once. Only after increasing the beam size to four does the correct translation become visible, which can then be selected as the correction.

More often, only parts of a sentence are translated, and important words are not considered in our document. Such under-translation is shown in Figure 6. In the first example, only the beginning of the sentence is translated, and it is visible that the remaining nodes have almost zero attention. In the second example, the German term 'zweisprachigen' (en: 'bilingual') is skipped in the translation. While this part of the translation is missing, the translated sentence is still correct and fluent; it might be difficult to detect such an error without such an attention visualization.

An example of a wrong translation containing a keyphrase is visualized in Figure 7. 
It also shows that, using the beam search visualization, it is possible to interactively select an alternative translation starting from the position where the first error occurs. The beam search provides possible alternative translations, but users can also manually type what they believe should be the next term. Here, we enter the correct translation manually. The beam search visualization automatically updates in real-time according to the correction. + +![01963ea6-6fc2-77d1-8244-0901581a3a6c_6_149_153_1499_244_0.jpg](images/01963ea6-6fc2-77d1-8244-0901581a3a6c_6_149_153_1499_244_0.jpg) + +Figure 6: Example of under-translation shown in the Attention View: (top) for the LSTM model, the end of the sentence is not translated; attention weights are very low for this part of the sentence. (bottom) For the Transformer architecture, the term 'zweisprachigen' (en: 'bilingual') is not translated; attention weights are very low for this term. + +![01963ea6-6fc2-77d1-8244-0901581a3a6c_6_148_556_722_657_0.jpg](images/01963ea6-6fc2-77d1-8244-0901581a3a6c_6_148_556_722_657_0.jpg) + +Figure 7: Example of a mistranslated sentence containing the keyphrase 'MÜ', shown as beam search visualization: (top) suggested translation, suggested alternatives, and custom correction; (bottom) updated translation tree for the corrected keyword with new suggestions for continuing the sentence after the custom change. + +Finally, it is also possible to change sentences without mistakes. Sometimes sentences are correctly translated, but users may prefer different words or sentence structures for the context of a sentence or to express their own style (Figure 8). Again, it is possible to explore and select alternative words or sentences with the Beam Search View. If we wanted to start the sentence with a different word, an alternative could be selected, and the remaining sentence would be updated accordingly.
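The behaviour underlying the Beam Search View, namely alternatives that only survive at larger beam widths and a search that restarts from a user-forced prefix, can be sketched with a toy beam search (illustrative only: `next_probs` stands in for the NMT decoder, and the tiny vocabulary is made up):

```python
import heapq
import math

def beam_search(next_probs, beam_size, max_len, prefix=("<s>",)):
    """Toy beam search.  `next_probs(seq)` returns {token: probability}
    for the next token; `prefix` is where the search (re)starts, e.g.
    after a manual user correction.  Returns (log-prob, sequence) pairs."""
    beams = [(0.0, prefix)]
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == "</s>":               # finished hypothesis, keep it
                candidates.append((score, seq))
                continue
            for tok, p in next_probs(seq).items():
                candidates.append((score + math.log(p), seq + (tok,)))
        beams = heapq.nlargest(beam_size, candidates)
    return beams

# Made-up decoder that tends to repeat 'Examples' (over-translation).
table = {
    ("<s>",): {"Examples": 0.6, "Samples": 0.4},
    ("<s>", "Examples"): {"Examples": 0.7, "</s>": 0.3},
    ("<s>", "Examples", "Examples"): {"</s>": 1.0},
    ("<s>", "Samples"): {"</s>": 1.0},
}
greedy = beam_search(table.__getitem__, beam_size=1, max_len=3)
wider = beam_search(table.__getitem__, beam_size=2, max_len=3)
```

In this toy model, a beam size of 1 only ever produces the repeated word, while widening the beam also surfaces the alternative hypothesis, mirroring the behaviour observed in Figure 5.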
+ +After correcting and accepting multiple translation corrections, the Document View shows how a translation was changed (Figure 9). + +### 4.3 Model Fine-tuning and Retranslation + +After correcting multiple sentences, users can choose to retrain the current model for the not yet accepted sentences (R5). The model is then fine-tuned on the sentences corrected by the user, which is usually a small number. Afterward, the system retranslates the uncorrected sentences with the adapted model to improve translation quality. Since our document contains 29 occurrences of the wrongly translated keyphrase 'MÜ', we retrained our model after correcting only a few (fewer than five) of these terms to 'MT'. After retranslation, the Document View shows how the translations differ from before. For both the LSTM and the Transformer model, all or almost all occurrences of 'MÜ' are now correctly translated. The user can look at the changes and accept translations or continue with iteratively improving sentences and fine-tuning the model. + +![01963ea6-6fc2-77d1-8244-0901581a3a6c_6_926_563_723_765_0.jpg](images/01963ea6-6fc2-77d1-8244-0901581a3a6c_6_926_563_723_765_0.jpg) + +Figure 8: A correctly translated sentence is changed to another correct translation. 'SOS' is selected to show alternative beginnings for the sentence. After choosing an alternative, the remaining sentence is updated to another correct translation. + +### 4.4 Architecture-specific Observations + +We initially designed our approach for use with an LSTM-based model with an attention mechanism. Since other architectures exist to translate documents, we also adapted our approach and tested its usefulness for the current state-of-the-art Transformer architecture [36] (R6). This architecture is also attention-based, and we analyzed how well it fits our interactive visualization approach.
The general workflow of our system can be used in the same way as for the model we initially developed it for: the Document and Metrics Views can be used to identify sentences for further investigation, and sentences can be updated using the Sentence and Beam Search Views. With respect to our approach, the main difference of the Transformer model is its attention mechanism, which influences the Attention View and some calculated metric values. + +The Transformer architecture uses multiple layers with multiple self-attention heads instead of a single attention between encoder and decoder. There are approaches for the visualization of this more complex attention mechanism [37, 38]. The attention values for Transformers could, for example, show different linguistic characteristics for different attention heads [8]. However, including this in our system would make our approach more complex and less useful for end-users (R7) with little knowledge about this architecture. As a simple workaround to apply our visualization, we discard the self-attention and only use the decoder attention. We explored the influence of decoder attention values from different layers, averaged across all attention heads. Similar to Rikters [26], we noticed that averaging attention from all layers is not meaningful, since almost all source words are connected to all target words. Using one of the first layers showed similar results. For the final layer, a better alignment could be seen; however, the last token of the source sentence received too much attention compared to other words. Instead, using the second-to-last layer showed a similar alignment between source and target words as is available for the LSTM model. Therefore, we use this layer as a compromise in our Attention View and for the calculation of metric values.
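The reduction described above, discarding self-attention, selecting the second-to-last decoder layer, and averaging over its heads, can be sketched as follows (a simplified illustration; the tensor layout and the row renormalisation are assumptions rather than our exact implementation):

```python
import numpy as np

def alignment_matrix(cross_attention):
    """Collapse Transformer decoder cross-attention of shape
    (num_layers, num_heads, target_len, source_len) into one
    source-target matrix: pick the second-to-last layer, average
    its heads, and renormalise each target row to sum to 1."""
    a = np.asarray(cross_attention, dtype=float)
    layer = a[-2]                      # second-to-last decoder layer
    avg = layer.mean(axis=0)           # average across attention heads
    return avg / avg.sum(axis=1, keepdims=True)
```

The resulting matrix can then feed both the Attention View and the attention-based metric values in place of the LSTM attention weights.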
| Source | Translation |
| --- | --- |
| #23 Der Stand der MÜ im Jahr 2010 wurde von vielen Menschen als unbefriedigend bewertet. | Many people have been considered unsatisfactory in assessed the context state of the MU MT in 2010 as unsatisfactory. |
| #30 Der Bedarf an MÜ-Anwendungen steigt weiter: | The need for MUs MT applications continues to rise: ✓ |
| #46 Dies ist die älteste und einfachste MÜ-Methode, die beispielsweise auch obigem Russisch-Englisch-System zugrunde lag | This is the oldest and simplest MT method that ✓ underlies, which was based, for example, on the obigem Russian-English system mentioned above |
| #48 Die Transfer-Methode ist die klassische MÜ-Methode mit drei Schritten: | The transfer method is the classic MU method MT ✓ method with three steps: |
| #61 Beispielbasierte MÜ | Examples based MT ✓ |
+ +Figure 9: Document View showing corrected translations and changes to the initial machine-generated translations. + +Since there are different approaches and architectures developed for NMT, we could incorporate them as well (R6). Some might provide better support in gaining insights into the model and offer different visualization and interaction capabilities. For others, new ways of visualization will have to be investigated. + +## 5 USER STUDY + +We conducted an early user study during the development of our approach to evaluate our system's concept. We used a prototype with an LSTM translation model. The system had the same views as described before but limited features. A group of anonymous visualization and machine learning experts was invited to test our system online for general aspects related to visualization, interaction, and usefulness. Our goal was to make sure that we considered aspects relevant from both the visualization and the machine translation perspective in our system and to improve our approach. The user study was questionnaire-based to evaluate the effectiveness of the system, understandability of visualizations, and usability of interaction techniques. A 7-point Likert scale was used. In this study, the German Wikipedia article for autonomous driving (Autonomes Fahren) [40] was available to all participants. This allowed the participants to explore the phenomena we showed previously. The participants claimed to have good English (mean = 5.1, std. dev. = 0.8) and very good German (mean = 6.2, std. dev. = 1.7) knowledge. While the visualization experts claimed to have rather low knowledge about machine learning (mean: 2.5), the machine learning experts similarly indicated lower knowledge of visualization (mean: 3). + +First, participants were introduced to the system with a short overview of the features. Then, they could explore the system freely with no time restriction.
Afterward, they were asked to participate in a survey regarding the usefulness of our system and its design choices. Additionally, there were free-text sections for further feedback. We recruited 11 voluntary participants from our university (six visualization experts and five language processing experts). + +Table 2: Ratings from our user study for each evaluated view on a 7-point Likert scale; mean and standard deviation values are provided.
| View | Effectiveness | Visualization | Interaction |
| --- | --- | --- | --- |
| Metrics View | 5.9 (1.1) | 6.8 (0.4) | 6.1 (0.7) |
| Keyphrase View | 4.4 (1.6) | 6.5 (1.2) | 6.3 (1.1) |
| Beam Search View | 5.6 (1.5) | 6.0 (1.3) | 4.5 (1.8) |
| Attention View | 5.6 (0.8) | 6.2 (1.2) | 5.9 (0.9) |
+ +The general effectiveness of translating a large document containing more than 100 sentences with our approach was rated high (mean = 5.6, std. dev. = 1.0) compared to a small document containing up to 20 sentences (mean = 4.5, std. dev. = 1.6). The results for effectiveness, ease of understanding and intuitiveness of visualizations, and ease of interaction are given in Table 2. The ratings for the visualizations were high for all views. Best rated was the Metrics View, which additionally had the lowest standard deviation. Although not all our user study participants were visualization experts, we noticed that non-experts could also manage to understand and work with parallel coordinates plots. We conclude that our design choice for the visualization of metrics was appropriate. The ratings for interaction were also very high, but there was more variation. Especially the interaction for beam search was rated comparatively low and had the highest standard deviation; two language processing participants ranked it very low (1 and 2) and two (one from each participant group) very high (7). This variation might result from different learning curves in the different participant groups. Since we conducted the user study, we have also improved the interaction in this view. For effectiveness, the Keyphrase View had the lowest rating. We believe the reason is that participants were not able to detect enough mistranslated sentences with this view. However, this might be due to the document we provided and may differ for other documents containing more domain-specific vocabulary, as we showed in our case study. + +In addition, we asked users for general feedback on our approach. Especially the Metrics View received positive feedback. Participants mentioned that it is useful for quickly detecting mistranslations through brushing and linking.
For the Beam Search View, one participant noted that the alternatives provided would speed up the correction of translations. For one participant, the Attention View was useful in showing the differences in the sentence structure of different languages. Negative feedback was mostly related to interaction and specific features; some participants suggested new features. Multiple participants noted that the exploration and correction of long sentences is challenging in the Beam Search View, as the size of the viewport is limited. Furthermore, a feature to delete individual words and functionality for freezing areas were suggested. From the remaining feedback, we already included, for example, an undo function for the sentence views. Also, to find sentences that might contain similar errors, one participant recommended showing sentences similar to a selected sentence, and we added a respective metric. Additionally, it was mentioned that confidence scores could be shown in the document list next to each sentence and not only in the Metrics View. This would be helpful to quickly examine the confidence value even if the document is sorted by a different metric (e.g., document order); small histograms were added next to each sentence as a quick quality overview. + +## 6 DISCUSSION AND FUTURE WORK + +To conclude, we present a visual analytics approach for exploring, understanding, and correcting translations created by NMT. Our approach supports users in translating large domain-specific documents with interactive visualizations in different views and allows real-time sentence correction and model adaptation. + +Our qualitative user study results showed that our visual analytics system was rated positively regarding effectiveness, interpretability of visualizations, and ease of interaction. The participants mastered the translation process well with our selected visualizations.
In particular, our choice of parallel coordinates plots for the visualization of multiple metrics and the related interaction techniques for brushing and linking were rated positively. Participants clearly preferred our approach over a traditional text-based approach for translating large documents. Right now, users have to rely on metrics to decide with which sentence to start correcting the translations. More research is needed on better automatic detection of mistranslated sentences. For example, an additional machine learning model could be trained with sentences that were already identified as wrong translations. + +We believe our system is useful for people who have to deal with large documents and could use the features of interactive sentence correction and domain adaptation. Comparing the use of our approach for the LSTM and the Transformer architecture showed almost no difference; for both, we could successfully improve the translation quality of documents interactively and see model-specific information. We argue that our general translation and visualization process can also be used with further models, although in such cases some visualization views might need limited adaptation. + +## REFERENCES + +[1] J. Albrecht, R. Hwa, and G. E. Marai. The Chinese Room: Visualization and interaction to understand and correct ambiguous machine translation. Computer Graphics Forum, 28(3):1047-1054, 2009. + +[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. + +[3] O. Bojar, R. Chatterjee, C. Federmann, Y. Graham, B. Haddow, M. Huck, A. Jimeno Yepes, P. Koehn, V. Logacheva, C. Monz, M. Negri, A. Neveol, M. Neves, M. Popel, M. Post, R. Rubino, C. Scarton, L. Specia, M. Turchi, K. Verspoor, and M. Zampieri. Findings of the 2016 Conference on Machine Translation (WMT16). In Proceedings of the First Conference on Machine Translation, Volume 2: Shared Task Papers, pp. 131-198.
Association for Computational Linguistics, Aug. 2016. + +[4] D. Cashman, G. Patterson, A. Mosca, and R. Chang. RNNbow: Visualizing learning via backpropagation gradients in recurrent neural networks. In Workshop on Visual Analytics for Deep Learning (VADL), 2017. + +[5] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724-1734. Association for Computational Linguistics, 2014. + +[6] J. Choo and S. Liu. Visual analytics for explainable deep learning. IEEE Computer Graphics and Applications, 38(4):84-92, Jul 2018. + +[7] C. Chu and R. Wang. A survey of domain adaptation for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 1304-1319. Association for Computational Linguistics, 2018. + +[8] K. Clark, U. Khandelwal, O. Levy, and C. D. Manning. What does BERT look at? An analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 276-286. Association for Computational Linguistics, Florence, Italy, Aug. 2019. doi: 10.18653/v1/W19-4828 + +[9] C. Collins, S. Carpendale, and G. Penn. Visualization of uncertainty in lattices to support decision-making. In Proceedings of Eurographics/IEEE VGTC Symposium on Visualization (EuroVis 2007), pp. 51-58, 2007. + +[10] DeepL. DeepL Translator. https://www.deepl.com/translator, 2021. + +[11] M. Denkowski and G. Neubig. Stronger baselines for trustable results in neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pp. 18-27. Association for Computational Linguistics, 2017. + +[12] R. Garcia, A. C. Telea, B. C. da Silva, J. Tørresen, and J. L. D. Comba.
A task-and-technique centered survey on visual analytics for deep learning model engineering. Computers & Graphics, 77:30-49, 2018. + +[13] Google. Google Translate. https://translate.google.com, 2021. + +[14] A. Graves. Sequence transduction with recurrent neural networks. CoRR, abs/1211.3711, 2012. + +[15] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, eds., Advances in Neural Information Processing Systems 28, pp. 1693-1701. Curran Associates, Inc., 2015. + +[16] F. M. Hohman, M. Kahng, R. Pienta, and D. H. Chau. Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics, pp. 1-1, 2018. + +[17] A. Inselberg. The plane with parallel coordinates. The Visual Computer, 1(2):69-91, Aug 1985. doi: 10.1007/BF01898350 + +[18] A. Karpathy, J. Johnson, and F. Li. Visualizing and understanding recurrent networks. CoRR, abs/1506.02078, 2015. + +[19] D. A. Keim, F. Mansmann, J. Schneidewind, J. Thomas, and H. Ziegler. Visual analytics: Scope and challenges. In Visual Data Mining, pp. 76-90. Springer, 2008. + +[20] P. Koehn and R. Knowles. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pp. 28-39. Association for Computational Linguistics, 2017. + +[21] K. Kucher and A. Kerren. Text visualization techniques: Taxonomy, visual survey, and community insights. In 2015 IEEE Pacific Visualization Symposium (PacificVis), pp. 117-121, 2015. + +[22] J. Lee, J.-H. Shin, and J.-S. Kim. Interactive visualization and manipulation of attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 121-126. Association for Computational Linguistics, 2017. + +[23] S. Liu, X. Wang, M.
Liu, and J. Zhu. Towards better analysis of machine learning models: A visual analytics perspective. Visual Informatics, 1(1):48-56, 2017. + +[24] Y. Ming, S. Cao, R. Zhang, Z. Li, Y. Chen, Y. Song, and H. Qu. Understanding hidden memories of recurrent neural networks. In 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 13-24, 2017. + +[25] T. Munzner. A nested model for visualization design and validation. IEEE Transactions on Visualization and Computer Graphics, 15(6):921-928, 2009. + +[26] M. Rikters. Debugging neural machine translations. arXiv preprint arXiv:1808.02733, 2018. + +[27] M. Rikters, M. Fishel, and O. Bojar. Visualizing neural machine translation attention and confidence. The Prague Bulletin of Mathematical Linguistics, 109(1):39-50, 2017. + +[28] J. C. Roberts. State of the art: Coordinated multiple views in exploratory visualization. In Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2007), pp. 61-71, 2007. + +[29] M. Sedlmair, M. Meyer, and T. Munzner. Design study methodology: Reflections from the trenches and the stacks. IEEE Transactions on Visualization and Computer Graphics, 18(12):2431-2440, 2012. doi: 10.1109/TVCG.2012.213 + +[30] R. Sennrich, B. Haddow, and A. Birch. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 11-12, Berlin, Germany, pp. 371-376. The Association for Computer Linguistics, 2016. doi: 10.18653/v1/w16-2323 + +[31] R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725. Association for Computational Linguistics, 2016. + +[32] H. Strobelt, S. Gehrmann, M. Behrisch, A. Perer, H. Pfister, and A. M. Rush. 
Seq2seq-Vis: A visual debugging tool for sequence-to-sequence models. IEEE Transactions on Visualization and Computer Graphics, pp. 1-1, 2018. + +[33] H. Strobelt, S. Gehrmann, H. Pfister, and A. M. Rush. LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE Transactions on Visualization and Computer Graphics, 24(1):667-676, 2018. + +[34] Z. Tan, S. Wang, Z. Yang, G. Chen, X. Huang, M. Sun, and Y. Liu. Neural machine translation: A review of methods, resources, and tools, 2020. + +[35] Z. Tu, Y. Liu, L. Shang, X. Liu, and H. Li. Neural machine translation with reconstruction. In Thirty-First AAAI Conference on Artificial Intelligence (AAAI), pp. 3097-3103, 2017. + +[36] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. ArXiv e-prints, June 2017. + +[37] J. Vig. A multiscale visualization of attention in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 37-42. Association for Computational Linguistics, Florence, Italy, July 2019. doi: 10.18653/v1/P19-3007 + +[38] J. Vig. Visualizing attention in transformer-based language models. arXiv preprint arXiv:1904.02679, 2019. + +[39] M. O. Ward. Linking and Brushing, pp. 1623-1626. Springer US, Boston, MA, 2009. doi: 10.1007/978-0-387-39940-9_1129 + +[40] Wikipedia. Autonomes Fahren — Wikipedia, die freie Enzyklopädie, 2018. + +[41] Wikipedia. Maschinelle Übersetzung — Wikipedia, die freie Enzyklopädie, 2021. + +[42] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, Ł. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean.
Google's neural machine translation system: Bridging the gap between human and machine translation. ArXiv e-prints, Sept. 2016. + +[43] S. Yang, Y. Wang, and X. Chu. A survey of deep learning techniques for neural machine translation. arXiv preprint arXiv:2002.07526, 2020. + +[44] S. Yang, Y. Wang, and X. Chu. A survey of deep learning techniques for neural machine translation. 2020. + +[45] J. Yuan, C. Chen, W. Yang, M. Liu, J. Xia, and S. Liu. A survey of visual analytics techniques for machine learning. Computational Visual Media, pp. 1-34, 2020. diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/DQHaCvN9xd/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/DQHaCvN9xd/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..7255b0d840d72c98da38b289ad1fa3073155addd --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/DQHaCvN9xd/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,280 @@ +§ VISUAL-INTERACTIVE NEURAL MACHINE TRANSLATION + +Category: Research + +Figure 1: The main view of our neural machine translation system: (A) Document View with sentences of the document for the current filtering settings, (B) Metrics View with sentences of the filtering result highlighted, and (C) Keyphrase View with a set of rare words that may be mistranslated. The Document View initially contains all sentences automatically translated with the NMT model. After filtering with the Metrics View and Keyphrase View, a smaller selection of sentences is shown.
Each entry in the Document View provides information about metrics, the correction state, and functionality for modification (on the right side next to each sentence). The Metrics View represents each sentence as one path and shows values for different metrics (e.g., correlation, coverage penalty, sentence length). Green paths correspond to sentences of the current filtering. One sentence is highlighted (yellow) in both the Metrics View and the Document View. + +§ ABSTRACT + +We introduce a novel visual analytics approach for analyzing, understanding, and correcting neural machine translation. Our system supports users in automatically translating documents using neural machine translation and identifying and correcting possible erroneous translations. User corrections can then be used to fine-tune the neural machine translation model and automatically improve the whole document. While translation results of neural machine translation can be impressive, there are still many challenges such as over-and under-translation, domain-specific terminology, and handling long sentences, making it necessary for users to verify translation results; our system aims at supporting users in this task. Our visual analytics approach combines several visualization techniques in an interactive system. A parallel coordinates plot with multiple metrics related to translation quality can be used to find, filter, and select translations that might contain errors. An interactive beam search visualization and graph visualization for attention weights can be used for post-editing and understanding machine-generated translations. The machine translation model is updated from user corrections to improve the translation quality of the whole document. We designed our approach for an LSTM-based translation model and extended it to also include the Transformer architecture. We show for representative examples possible mistranslations and how to use our system to deal with them. 
A user study revealed that many participants favor such a system over manual text-based translation, especially for translating large documents. + +Index Terms: Human-centered computing-Visualization-Visualization application domains-Visual analytics; Human-centered computing-Visualization-Visualization systems and tools; Computing methodologies-Artificial intelligence-Natural language processing-Machine translation + +§ 1 INTRODUCTION + +Machine learning and especially deep learning are popular and rapidly growing fields in many research areas. The results created with machine learning models are often impressive but sometimes still problematic. Currently, much research is being performed to better understand, explain, and interact with these models. In this context, visualization and visual analytics methods are suitable and increasingly used to explore different aspects of these models. Available techniques for visual analytics in deep learning were examined by Hohman et al. [16]. While there is a large amount of work available on explainability in computer vision, less work exists for machine translation. + +As it becomes increasingly important to communicate in different languages, and since information should be available to a wide range of people from different countries, many texts have to be translated. Doing this manually takes much effort. Nowadays, online translation systems like Google Translate [13] or DeepL [10] support humans in translating texts. However, the translations generated that way are often not as expected or not how someone familiar with both languages would translate them. They may also fail to express a person's translation style or to use the correct terminology of a specific domain or occasion. Often, more background knowledge about the text is required to translate documents appropriately.
+ +With the introduction of deep learning methods, the translation quality of machine translation models has improved considerably in recent years. However, there are still difficulties that need to be addressed. Common problems of neural machine translation (NMT) models are, for instance, over- and under-translation [35], when words are translated repeatedly or not at all. Handling rare words [20], which may occur in specific documents, and long sentences are also issues. Domain adaptation [20] is another challenge; especially documents from specific domains such as medicine, law, or science require high-quality translations [7]. As many NMT models are trained on general data sets, their translation performance is worse for domain-specific texts. + +Figure 2: The detailed view for a selected sentence consists of the Sentence View (A), the Attention View (B), and the Beam Search View (C). The Sentence View allows text-based modifications of the translation. The Attention View shows the attention weights (represented by the lines connecting source words with their translation) for the translation. The Beam Search View provides an interactive visualization that shows different translation possibilities and allows exploration and correction of the translation. All three areas are linked. + +If high-quality translations for large texts are required, it is insufficient to use machine translation models alone. These models are computationally efficient and able to translate large documents with low time effort, but they may create erroneous or inappropriate translations. Humans are very slow compared to these models, but they can detect and correct mistranslations when familiar with the languages and the domain terminology. In a visual analytics system, both of these capabilities can be combined.
Such a system should provide the translations from an NMT model and possibilities for users to visually explore translation results to find mistranslated sentences, correct them, and steer the machine learning model. + +We have developed a visual analytics approach to reach the goals outlined above. First, our system performs automatic translation of a whole, possibly large, document and shows the result in the Document View (Figure 1). Users can then explore and modify the document in different views [28] (Figure 2) to improve translations and use these corrections to fine-tune the NMT model. We support different NMT architectures and use both an LSTM-based and a Transformer architecture. + +So far, visual analytics systems for deep learning have mostly been available for computer vision and some text-related areas, focusing on smaller parts of machine translation [22, 27] or intended for domain experts to gain insight into the models or to debug them [32, 33]. This work contributes to visualization research by introducing the application domain of NMT using a user-oriented visual analytics approach. In our system, we employ different visualization techniques adapted for usage with NMT. Our parallel coordinates plot (Figure 1 (B)) supports the visualization of different metrics related to text quality. The interaction techniques in our graph-based visualization for attention (Figure 2 (B)) and tree-based visualization for beam search (Figure 2 (C)) are specifically designed for text exploration and modification. They have a strong coupling to the underlying model. Furthermore, our system has a fast feedback loop and allows interaction in real-time. We demonstrate our system's features in a video and will provide the source code of our system with the published paper.
§ 2 RELATED WORK

This section first discusses visualization and visual analytics approaches for language translation in general and then visual analytics of deep learning for text. Afterward, we provide an overview of work that combines both areas in the context of NMT.

Many visualization techniques and visual analytics systems exist for text; see Kucher and Kerren [21] for an overview. However, there is little work on exploring and modifying translation results. Albrecht et al. [1] introduced an interactive system to explore and correct translations; while the translations were created by machine translation, their system did not use deep learning. Collins et al. [9] used lattice structures with uncertainty encoding to visualize machine translation. They created a lattice structure from beam search where the path for the best translation result is highlighted and can be corrected. We also use visualization for beam search, but ours is based on a tree structure.

Recently, much research has been done to visualize deep learning models in order to understand them better. Multiple surveys [6, 12, 16, 23, 45] are available that summarize existing visual analytics systems. It is noticeable that not much work exists for text-based domains. One of the few examples is RNN-Vis [24], a visual analytics system designed to understand and compare models for natural language processing by considering hidden state units. Karpathy et al. [18] explore the predictions of Long Short-Term Memory (LSTM) models by visualizing activations on text. Heatmaps are used by Hermann et al. [15] to visualize attention for machine-reading tasks. To explore the training process and to better understand how the network is learning, RNNbow [4] can be used to visualize the gradient flow during backpropagation training in Recurrent Neural Networks (RNNs).
While the previous systems support the analysis of deep learning models for text domains in general, approaches also exist to specifically explore and understand NMT. Bahdanau et al. [2] were the first to introduce visualizations for attention; they showed the contribution of source words to translated words within a sentence using an attention weight matrix. Later, Rikters et al. [27] introduced multiple ways to visualize attention and implemented exploration of a whole document. They visualize attention weights with a matrix and a graph-based visualization connecting source words and translated words by lines whose thickness represents the attention weight. Bar charts give an overview for a whole document for multiple attention-based metrics that are supposed to correlate with the translation quality. Interactive ordering of these metrics and sentence selection is possible. For large documents, however, it is difficult to compare the different metrics, as each bar chart is horizontally too large to be shown entirely on a display. The only connection between different bar charts is that the bars for the currently selected sentence are highlighted. Our system also uses such a metrics approach, but instead of bar charts, we chose a parallel coordinates plot for better scalability, interaction, and filtering.

An interactive visualization approach for beam search is provided by Lee et al. [22]. The interaction techniques supported by their tree structure are quite limited: it is possible to expand the structure and to change attention weights, but it is not possible to add unknown words, and no sub-word units are considered. Furthermore, the exploration is limited to single sentences instead of a whole document.

With LSTMVis, Strobelt et al. [33] introduced a system to explore LSTM networks by showing hidden state dynamics. Among other application areas, their approach is also suitable for NMT.
While our approach is intended for end-users, LSTMVis aims at the debugging of models by researchers and machine learning developers. With Seq2Seq-Vis, Strobelt et al. [32] present a system that uses an attention view similar to ours, and they also provide an interactive beam search visualization. However, their system is designed to translate single sentences, and no model adaptation is possible for improved translation quality. Their system was designed for debugging and for gaining insight into the models.

Since there are different architectures available for generating translations [43], specific visualization approaches may be required. Often, LSTM-based architectures are used. Recently, the Transformer architecture [36] gained popularity; Vig [37, 38] visually explores its self-attention layers, and Rikters et al. [26] extended their previous document-level debugging approach to Transformer-based systems.

All these systems provide different, possibly interactive, visualizations. However, their goal is to debug NMT models rather than to support users in translating entire documents, or they are limited to small aspects of the model. Additionally, they are usually designed for one specific translation model. None of these approaches provide extended interaction techniques for beam search or interactive approaches to iteratively improve the translation quality of a whole document.

§ 3 VISUAL ANALYTICS APPROACH

Our visual analytics approach allows the automatic translation, exploration, and correction of documents. It consists of multiple steps: first, a document is automatically translated from one language into another; then, mistranslated sentences in the document are identified by users, and individual sentences can be explored and corrected; finally, the model can be fine-tuned and the document retranslated.
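The steps above can be sketched as a simple control loop. This is a minimal sketch, not our actual implementation: `analytics_loop`, `fine_tune`, `pick_suspicious`, and `correct` are hypothetical placeholders standing in for the NMT model, the training backend, the filtering views, and the sentence editing views, respectively.

```python
# Minimal sketch of the translate-explore-correct-retrain loop.
# All names are hypothetical placeholders for the real components.

def fine_tune(model, corrections):
    """Placeholder: fine-tune the NMT model on user-corrected pairs."""
    return model  # the real system updates (a subset of) the model weights

def analytics_loop(model, sources, user):
    translations = [model(s) for s in sources]   # R1: translate all sentences
    accepted = {}                                # sentence index -> correction
    while True:
        # R3: the user filters suspicious sentences via the metric views
        idx = user.pick_suspicious(sources, translations, accepted)
        if idx is None:                          # nothing left to fix
            break
        # R4: the user corrects the sentence in the detail views
        accepted[idx] = user.correct(sources[idx], translations[idx])
        translations[idx] = accepted[idx]
        # R5: fine-tune on corrections, retranslate unaccepted sentences
        model = fine_tune(model, list(accepted.items()))
        for i in range(len(sources)):
            if i not in accepted:
                translations[i] = model(sources[i])
    return translations
```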
Our approach tightly couples interactive visualization with automatic data processing and follows the visual analytics process presented by Keim et al. [19]. We use visualizations for different aspects of NMT models, and users can interact with the provided information.

§ 3.1 REQUIREMENTS

For the development of our system, we followed the nested model by Munzner [25]. The main focus was on the outer parts of the model: identifying domain issues, designing the features, and implementing the visualizations and interactions. Additionally, we used a process similar to that of Sedlmair et al. [29], especially focusing on the core phases. Design decisions were made in close cooperation with deep learning and NMT experts, who are also co-authors of this paper. The visual analytics system was implemented in a formative process that included these experts and went through an iterative development with multiple meetings. Together, we identified the requirements listed in Table 1. After implementing the basic prototype of the system, we demonstrated it to further domain experts. At a later stage, we performed a small user study with experts in visualization and machine translation. For our current prototype, we added functionality recommended by these experts.

Table 1: Requirements for our visual analytics system and their implementations in our approach.

**R1** Automatic translation - A document is translated automatically by an NMT model.

**R2** Overview - The user can see the whole document as a list of all source sentences and their translations (Figure 1 (A)). Additionally, an overview of the translation quality is provided in the Metrics View that reveals statistics about different metrics encoded as a parallel coordinates plot (Figure 1 (B)) showing an overall quality distribution.
**R3** Find, filter, and select relevant sentences - Interaction in the parallel coordinates plot allows filtering according to different metrics and selecting specific sentences. It is also possible to select one sentence and order the other sentences of the document by similarity, to check whether similar sentences contain similar errors. Additionally, our Keyphrase View (Figure 1 (C)) supports selecting sentences containing specific keywords that might be domain-specific and rarely used in general documents.

**R4** Visualize and modify sentences - For each sentence, a beam search and attention visualization (Figure 2) can be used to interactively explore and adapt the translation result in order to correct erroneous sentences and explore how a translation failed. It is also possible to explore alternative translations.

**R5** Update model and translation - The model can be fine-tuned using the user inputs from translation corrections; this is especially useful for domain adaptation. Afterward, the document is retranslated with the updated model in order to improve the translation result (the result is visualized similarly to Figure 9).

**R6** Generalizability and extensibility - While we initially designed our visualization system for one translation model, we soon noticed that our approach should handle data from other translation models as well. Therefore, our approach should be easily adaptable to new models to cope with the dynamic development of new deep learning architectures. Our general translation and correction process is kept largely model-agnostic so that it can be applied to a variety of models. Only model-specific visualizations have limitations and need to be adapted or exchanged when a different translation architecture is used.

**R7** Target group - The target group for our system should be quite broad and include professional translators or students who need to translate documents.
However, it should also be usable by other people interested in correcting and possibly better understanding the results of automated translation.

§ 3.2 NEURAL MACHINE TRANSLATION

The goal of machine translation is to translate a sequence of words from a source language into a sequence of words in a target language. Different approaches exist to achieve this goal [34, 44].

Usually, neural networks for machine translation are based on an encoder-decoder architecture. The encoder is responsible for transforming the source sequence into a fixed-length representation known as a context vector. Based on the context vector, the decoder generates an output sequence, where each element is used to generate a probability distribution over the target vocabulary. These probabilities are then used to determine the target sequence; a common method to achieve this uses beam search decoding [14].

Although different NMT models vary in their architecture, the previously described encoder-decoder design should apply to a wide range of architectures and new approaches that may be developed in the future (R6). In this work, we explored an LSTM architecture with attention and extended our approach to include the Transformer architecture, thus verifying its ability to generalize.

One of the first neural network architectures for machine translation consists of two RNNs with LSTM units [5]. To handle long sentences, the attention mechanism for NMT [2] was introduced. It allows sequence-to-sequence models to attend to different parts of the source sequence while predicting the next element of the target sequence by giving the decoder access to the encoder's weighted hidden states. During decoding, the hidden states of the encoder together with the hidden state of the decoder for the current step are used to compute the attention scores.
Finally, the context vector for the current step is computed as a sum of the encoder hidden states, weighted by the attention scores. The attention weights can be easily visualized and used to explain why a neural network model predicted a certain output. Furthermore, attention weights can be understood as a soft alignment between source and target sequences: for each translated word, the weight distribution over the source sequence signifies which source words were most important for predicting this target word. The Transformer architecture was recently introduced by Vaswani et al. [36] and gained much popularity. It uses a more complex attention mechanism with multi-head attention layers; self-attention in particular plays an important role in the translation process. We verify its applicability to our approach and visualize only the part of the attention information that shows an alignment between source and target sentences comparable to that of the LSTM model.

§ 3.3 EXPLORATION OF DOCUMENTS

After uploading a document to our system, it is translated by an NMT model (R1). The main view of our approach then shows information about the whole document (R2). This includes a list of all sentences in the Document View (Figure 1 (A)) and an overview of the translation quality in the Metrics View (Figure 1 (B)). Using the Metrics View and Keyphrase View (Figure 1 (C)), sentences can be filtered to detect possibly mistranslated sentences, which can be flagged by the user (R3). Once a mistranslated sentence is found, it is also possible to filter for sentences containing similar errors (R3).

§ METRICS VIEW

In the Metrics View, a parallel coordinates plot (Figure 1 (B)) is used to detect possibly mistranslated sentences by filtering sentences according to different metrics (R3). For instance, it is possible to find sentences that have low translation confidence.
Multiple metrics exist that are relevant to identify translations with low quality; we use the following metrics in our approach:

* Confidence: A metric that considers attention distribution for input and output tokens; it was suggested by Rikters et al. [27]. Here, a higher value is usually better.

* Coverage Penalty: This metric by Wu et al. [42] can be used to detect sentences where words did not get enough attention. Here, a lower value is usually better.

* Sentence length: The sentence length (the number of words in a source sentence) can be used to filter very short or long sentences. For example, long sentences might be more likely to contain errors.

* Keyphrases: This metric can be used to filter for sentences containing domain-specific words. As these words are rare in the training data, the initial translation of sentences containing them is likely erroneous. The values used for this metric are the number of occurrences of keyphrases in a sentence weighted by the frequency of the keyphrases in the whole document.

* Sentence similarity: Optionally, for a given sentence, the similarity to all other sentences can be determined using cosine similarity. This helps to find sentences with errors similar to those of a detected mistranslated sentence.

* Document index: The document index allows the user to sort sentences according to their original order in the document, which can be especially important for correcting translations where the context of sentences is relevant. Furthermore, this metric might also show trends like consecutive sentences with low confidence.

In contrast to Rikters et al. [27], who use bar charts to visualize different metrics, we chose a parallel coordinates plot [17]. Each sentence can be mapped to one line in such a plot, and different metrics can be easily compared. These plots are useful for an overview of different metrics and to detect outliers and trends.
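Two of the attention-based metrics above can be computed directly from the attention matrix and the sentence texts. The sketch below is illustrative rather than the exact formulas of our system: the coverage penalty follows the form proposed by Wu et al. [42], with the sign flipped so that, as stated above, lower is better, and sentence similarity uses cosine similarity over simple bag-of-words counts.

```python
import math
from collections import Counter

def coverage_penalty(attention, beta=0.2):
    """attention[t][s] = attention weight of source token s at target step t.
    Source tokens whose total received attention stays below 1 are penalized;
    the sign is flipped so that the result is >= 0 and lower is better."""
    num_src = len(attention[0])
    total_log = 0.0
    for s in range(num_src):
        received = sum(step[s] for step in attention)
        # guard against log(0) for tokens that received no attention at all
        total_log += math.log(max(min(received, 1.0), 1e-9))
    return -beta * total_log

def sentence_similarity(a, b):
    """Cosine similarity of two sentences as bag-of-words count vectors."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```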
Interactions with the metrics, such as highlighting lines or choosing filtering ranges, are supported. It can be expected that sentences filtered for both low confidence and high coverage penalty are more likely to be poorly translated than sentences falling into only one of these categories.

§ KEYPHRASE VIEW

It is possible to search for sentences according to keyphrases by selecting them in the Keyphrase View (Figure 1 (C)) (R3). This can be visualized as shown in Figure 4. Keyphrases are domain-specific words that were rarely included in the training data used for our model. As the model does not have enough knowledge of how to deal with these words, it is important to verify that the respective sentences were translated correctly. In addition to automatically determined keyphrases, users can manually specify further keyphrases for sentence filtering.

§ DOCUMENT VIEW

A list of all the source sentences in a document and a list of their translations are shown in the Document View (Figure 1 (A)) (R2). Each entry in this list can be marked as correct or flagged (Figure 4) for later correction. A small histogram shows an overview of the previously mentioned metrics. If a sentence is modified, either through user correction or retranslation by the fine-tuned model, changes in the sentences are highlighted (Figure 9). Both the Metrics View and the Keyphrase View are connected via brushing and linking [39] to allow filtering for sentences that are likely to be mistranslated and should be examined and possibly corrected. Additionally, sentences can be sorted by similarity to a user-selected reference sentence. In this list, sentences can be selected for further exploration and correction in more detailed sentence-based views.

§ 3.4 EXPLORATION AND CORRECTION OF SENTENCES

After filtering and selection, a sentence can be further analysed with the Sentence, Attention, and Beam Search Views (Figure 2) and subsequently corrected (R4).
These views are shown simultaneously to allow interactive exploration and modification of translations.

Note that on the sentence level, we use subword units to handle the problem of rare words, which occur frequently in domain-specific documents, and to avoid unknown words. We use the Byte Pair Encoding (BPE) method proposed by Sennrich et al. [31] for compressing text by recursively joining frequent pairs of characters into new subwords. This means that instead of choosing whole words to build the source and target vocabulary, words are split into subword units consisting of possibly multiple characters. This method reduces model size, complexity, and training time. Additionally, the model can handle unknown words by splitting them into their subword units. As these subword units are known beforehand, they do not require the introduction of an "unknown" token for translation. Thus, we can adapt the NMT model to any new domain, including those with vocabulary not seen at training time.

Figure 3: Attention visualization: (top) when hovering over a source word (here: 'verarbeiten'), translated words influenced by the source word are highlighted; (bottom) when hovering over a translated word (here: 'process'), source words that influence the translation are highlighted according to the attention weights.

§ SENTENCE VIEW

Similar to common translation systems, the Sentence View (Figure 2 (A)) shows the source sentence and the current translation. It is possible to manually modify the translation, which in turn updates the content in the other sentence-based views. After adding a new word in the text area, the translation with the highest score is used for the remainder of the sentence. This supports a quick text-based modification of a translation without explicit use of visualizations.

§ ATTENTION VIEW

The Attention View depends on the underlying NMT model.
It is intended to visualize the relationship between words of the source sentence and the current translation as a weighted graph (Figure 2 (B)); such a technique was also used by Strobelt et al. [32]. Both source and translated words are represented by nodes; links between such words show the attention weights, encoded by the thickness of the connecting lines (we use a predefined threshold to hide lines for very low attention). These weights correlate with the importance of source words for the translated words. Hovering over a source word highlights the lines connecting it to translated words. In addition, the translated words are highlighted by transparency according to the attention weights (Figure 3, top). While this shows how a source word contributes to the translation, it is also possible to show for translated words how source words contribute to the translation (Figure 3, bottom). This interactive visualization supports users in understanding how translations are generated from the source sentence words. On the one hand, such a visualization helps gain insight into the NMT model; on the other hand, it helps detect issues in generated translations. The links between source sentence and translation can be explored to identify anomalies such as under- or over-translation: missing attention weights can be an indication of under-translation, and links to multiple translated words of over-translation. In our case study in Section 4, examples of these cases are presented. While this technique specifically employs information of the attention-based LSTM model, we use it in an adapted form for the Transformer architecture (see Section 4.4). A visualization more tailored to Transformers, also including self-attention and attention scores from multiple decoder layers, could provide additional information. Further models may need different visualizations for a generalized use of our approach, employing model-specific information.
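The construction of this weighted graph can be sketched as follows. This is a plain-Python illustration; the softmax normalization and the threshold value of 0.1 are assumptions for the sketch, not the exact values of our system.

```python
import math

def softmax(scores):
    """Turn one row of raw attention scores into weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # stabilized
    total = sum(exps)
    return [e / total for e in exps]

def attention_edges(src_tokens, tgt_tokens, scores, threshold=0.1):
    """Build the bipartite graph of the Attention View: one edge
    (source word, target word, weight) per attention weight above the
    threshold; the view maps the weight to line thickness."""
    edges = []
    for t, tgt in enumerate(tgt_tokens):
        weights = softmax(scores[t])      # one score row per target step
        for s, src in enumerate(src_tokens):
            if weights[s] >= threshold:   # hide very low attention
                edges.append((src, tgt, weights[s]))
    return edges
```

Hovering interactions then reduce to selecting the subset of edges incident to one node and using the remaining weights for the transparency highlighting.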
§ BEAM SEARCH VIEW

While the Attention View can be used to identify positions with mistranslations, the Beam Search View supports users in interactively modifying and correcting translations. The Beam Search View visualizes multiple translations created by the beam search decoding as a hierarchical structure (see Figure 2 (C)). This interactive visualization can be used for post-editing the translations.

The simplest way of predicting a target sequence is greedy decoding, where at every time step, the token with the highest output probability is chosen as the next predicted token and fed to the decoder in the next step. This is an efficient, straightforward way of generating an output sequence. However, another translation may be better overall, despite having lower probabilities for the first words. Beam search decoding [14] is a compromise between exhaustive search and greedy decoding, often used for generating the final translation. A fixed number $k$ of hypotheses is considered at each time step. For each hypothesis considered, the NMT model outputs a probability distribution over the target vocabulary for the next token. These hypotheses are sorted by the probability of the latest token, and up to $k$ hypotheses remain in the beam. Hypotheses ending with the End-of-Sequence (EOS) token are filtered out and put into the result set. Once $k$ hypotheses are in the result set, the beam search stops, and the final hypotheses are ranked according to a scoring function that depends on the attention weights and the sentence length.

For visualization, we use a similar approach as Strobelt et al. [32] and Lee et al. [22]: a tree structure reflects the inherently hierarchical nature of the beam search decoding. This way, translation hypotheses starting with the same prefix are merged into one branch of this hierarchical structure.
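The decoding procedure described above can be sketched as follows. This toy version scores hypotheses by cumulative log-probability and stops once $k$ finished hypotheses exist; the attention- and length-based rescoring of the full system is omitted, and `step` is a hypothetical stand-in for the NMT model's next-token distribution.

```python
import math

def beam_search(step, k=3, max_len=20, eos="<eos>"):
    """Toy beam search. `step(prefix)` returns (token, probability) pairs
    for the next token given a prefix (a tuple of tokens). At each time
    step, up to k unfinished hypotheses are kept; hypotheses ending with
    the EOS token are moved to the result set."""
    beam = [((), 0.0)]                     # (prefix, cumulative log-prob)
    results = []
    for _ in range(max_len):
        candidates = [(prefix + (tok,), logp + math.log(p))
                      for prefix, logp in beam
                      for tok, p in step(prefix)]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = []
        for prefix, logp in candidates[:k]:
            if prefix[-1] == eos:
                results.append((prefix, logp))   # finished hypothesis
            else:
                beam.append((prefix, logp))
        if len(results) >= k or not beam:
            break
    return sorted(results, key=lambda c: c[1], reverse=True)
```

Running such a search with a prefix-constrained `step` (one that returns only the user-chosen token at corrected positions) would correspond to the constrained re-decoding triggered after a manual correction.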
The root node of each translation is associated with a Start-of-Sequence (SOS) token and all leaf nodes with an End-of-Sequence (EOS) token. Compared to visualizing a list of different suggested translations, showing a tree is more compact, and it is easier to recognize where the commonalities of different translation variants lie.

Each term of the translation is visualized by a circle that represents the actual node and a corresponding label. The color of a circle is mapped to the word's output probability. This supports users in identifying areas with a lower probability that might require further exploration; it can be seen as the uncertainty of the word predictions. In our visualization, we differentiate between nodes that represent subwords and whole words. Solid lines connect subwords, and their nodes are placed closer together to form a unit. In contrast, the connections to whole words are represented by dashed lines.

The beam search visualization can be used to navigate within a translation and edit it (Figures 7 and 8). The interaction can be performed either with the mouse or the keyboard; the latter is more efficient for fast post-editing. The view supports standard panning-and-zooming techniques, which are especially needed to explore long sentences as they do not fit on common displays. For navigation within the tree, arrow keys can be used to move through a sentence, or nodes can be selected with the mouse cursor. If the translation of the current node's child node is not satisfactory, the node can be expanded to show suggestions for correction. If the user selects a suggested word, the beam search runs with a lexical prefix constraint, and the tree structure gets updated. If the suggested words are not suitable, a custom correction can be performed by typing an arbitrary word that fits better. The number of suggested translations is initially set to three and can be increased by adapting the beam size.
Increasing this value may create better translations and provides more alternative translations. However, the higher the value, the more information has to be shown in the visualization. By hovering over and selecting elements in this view, corresponding elements of the Attention View and Sentence View are shown for reference.

Figure 4: Main view of the system: the Document View shows some flagged sentences for correction. Additionally, the keyphrase filter (top right) is active: all sentences containing the keyphrase 'MÜ' are shown in the Metrics and Document Views. It is visible that 'MÜ' is never correctly translated to 'MT'.

§ 3.5 MODEL FINE-TUNING AND RETRANSLATION

After correcting the translation of multiple sentences, the user corrections can be used to fine-tune the NMT model and automatically improve the translation of the not yet verified sentences (R5). This approach can be applied repeatedly to improve the document's translation quality, especially for domain-specific texts.

Documents often belong to a specific domain, e.g., legal, medical, or scientific. Each domain has specific terminology, and one word may even refer to different concepts in different domains. As such, the ability of NMT models to handle different types of domains is an important research topic. Domain adaptation refers to techniques allowing NMT models trained on general training data, also called out-of-domain data, to adapt to domain-specific documents, called in-domain data. This is useful since there may be an abundant amount of general training data, but domain-specific data may be rare. Since NMT models need a large amount of training data to achieve good translation quality, the out-of-domain data can be used to train a baseline model. The model can then be adapted using the in-domain data (R5), which typically contains a smaller number of sentences: in our system, we use the user-corrected sentences.
This mitigates the problem of training an NMT model in a low-resource scenario where little data exists for a given domain. In our approach, we continue training on the in-domain data in a reduced way by freezing certain model weights (for the LSTM-based model, the decoder and the LSTM layers of the encoder are trained; for the Transformer, only the decoder is trained).

§ 4 CASE STUDY

As a typical use case, we take the German Wikipedia article on machine translation (Maschinelle Übersetzung) [41] as a document for translation into English. In the following, we show how to use our system to improve the translation quality of the document. Please see our accompanying video for a demonstration with the Transformer model. The examples in the following were created with both the LSTM and Transformer models. We trained our models on a general data set: the German-to-English data set from the 2016 ACL Conference on Machine Translation (WMT'16) [3] shared news translation task. This is a popular data set for NMT, used, for instance, by Denkowski and Neubig [11] and Sennrich et al. [30].

Figure 5: Example of over-translation: 'Examples' is placed twice as translation for the German word 'Beispiele'. The Beam Search View (right) shows possible alternative translations. However, only after increasing the beam size to four is the expected translation shown.

§ 4.1 EXPLORATION OF DOCUMENTS

After uploading a document (R1), we have a look at the parallel coordinates plot (R2) for our initial translations and the list of keyphrases in order to detect possible mistranslations (R3). In the Keyphrase View, we notice the domain-specific term 'MÜ' occurring very often. This term is the German abbreviation for 'machine translation' and should therefore be translated as 'MT'. However, none of the translations use the correct term (Figure 4).
Additionally, one could select and verify sentences with low confidence or with a high coverage penalty. Here, we especially notice the under-translation of some sentences. After verifying a translation in the Document View, users can decide whether it is correct (R2). If the users do not agree with the translation, they can set a flag (Figure 4) to modify the translation later or switch to the sentence-based views to correct it (R4).

§ 4.2 EXPLORATION AND CORRECTION OF SENTENCES

After setting flags for multiple sentences (Figure 4), or after deciding to explore or modify a sentence, a more detailed view can be shown for each sentence to explore and improve its translation interactively (Figure 2) (R4).

Over-translation is a common issue of NMT [20]. In the Attention View, it is possible to see what went wrong by identifying where the attention weights connect the source and destination words.

For both models, we notice such cases for very short sentences. Figure 5 shows, for the German heading 'Beispiele' (en: 'Examples'), a translation that uses the translated word multiple times. The suggested alternatives also use this term more than once. Only after increasing the beam size to four does the correct translation become visible; it can then be selected as the correction.

More often in our document, only parts of a sentence are translated, and important words are not considered. Such under-translation is shown in Figure 6. In the first example, only the beginning of the sentence is translated, and it is visible that the remaining nodes have almost zero attention. In the second example, the German term 'zweisprachigen' (en: 'bilingual') is skipped in the translation. While this part of the translation is missing, the translated sentence is still correct and fluent; it might be difficult to detect such an error without such attention visualizations.

An example of a wrong translation containing a keyphrase is visualized in Figure 7.
It also shows that, using the beam search visualization, it is possible to interactively select an alternative translation starting from the position where the first error occurs. The beam search provides possible alternative translations, but users can also manually type what they believe should be the next term. Here, we enter the correct translation manually. The beam search visualization automatically updates in real-time according to the correction.

Figure 6: Example of under-translation shown in the Attention View: (top) for the LSTM model, the end of the sentence is not translated; attention weights are very low for this part of the sentence. (Bottom) For the Transformer architecture, the term 'zweisprachigen' (en: 'bilingual') is not translated; attention weights are very low for this term.

Figure 7: Example of a mistranslated sentence containing the keyphrase 'MÜ' shown as a beam search visualization: (top) suggested translation, suggested alternatives, and custom correction; (bottom) updated translation tree for the corrected keyword with new suggestions for continuing the sentence after the custom change.

Finally, it is also possible to change sentences without mistakes. Sometimes, sentences are translated correctly, but users would prefer different words or sentence structures, either for the context of a sentence or to express their own style (Figure 8). Again, it is possible to explore and select alternative words or sentences with the Beam Search View. If we wanted to start the sentence with a different word, an alternative could be selected, and the remaining sentence would get updated accordingly.

After correcting and accepting multiple translation corrections, the Document View shows how a translation was changed (Figure 9).
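Change highlighting of this kind can be sketched with a standard word-level diff; the snippet below uses Python's difflib purely as an illustrative stand-in for our actual highlighting code.

```python
import difflib

def word_diff(old, new):
    """Describe how an old translation was changed into a new one as
    (tag, words) chunks; tags are 'equal', 'delete', 'insert', and
    'replace', which a document view could map to highlight colors."""
    a, b = old.split(), new.split()
    chunks = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag in ("insert", "replace"):
            chunks.append((tag, b[j1:j2]))   # changed words from the new text
        else:
            chunks.append((tag, a[i1:i2]))   # 'equal'/'delete' from the old text
    return chunks
```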
§ 4.3 MODEL FINE-TUNING AND RETRANSLATION

After users have corrected multiple sentences, they can choose to retrain the current model for the not yet accepted sentences (R5). The model is then fine-tuned using the sentences corrected by the user, which is usually a small number of sentences. Afterward, the system retranslates the uncorrected sentences, improving translation quality because the model has adapted to the corrections. Since our document contains the keyphrase 'MÜ' 29 times, each wrongly translated, we retrained our model after correcting only a few (fewer than five) of these terms to 'MT'. After retranslation, the Document View shows how the translations differ from before. For both the LSTM and the Transformer model, all or almost all occurrences of 'MÜ' are now correctly translated. The user can look at the changes and accept translations, or continue with iteratively improving sentences and fine-tuning the model.

Figure 8: A correctly translated sentence is changed to another correct translation. 'SOS' is selected to show alternative beginnings for the sentence. After choosing an alternative, the remaining sentence is updated with another correct translation.

§ 4.4 ARCHITECTURE-SPECIFIC OBSERVATIONS

We initially designed our approach for use with an LSTM-based model with an attention mechanism. Since other architectures exist to translate documents, we also adapted it to, and tested its usefulness for, the current state-of-the-art Transformer architecture [36] (R6). This architecture is also attention-based, and we analyzed how well it fits our interactive visualization approach. The general workflow of our system can be used in the same way as with the model we initially developed it for: the Document and Metrics Views can be used to identify sentences for further investigation, and sentences can be updated using the Sentence and Beam Search Views.
The main difference of the Transformer model with respect to our approach is its attention mechanism, which influences the Attention View and some calculated metric values.

The Transformer architecture uses multiple layers with multiple self-attention heads instead of a single attention between encoder and decoder. There are approaches for visualizing this more complex attention mechanism [37, 38]. The attention values of Transformers can, for example, show different linguistic characteristics in different attention heads [8]. However, including this in our system would make our approach more complex and less useful for end-users (R7) with little knowledge about this architecture. As a simple workaround to apply our visualization, we discard the self-attention and only use the decoder attention. We explored the influence of decoder attention values from different layers, averaged across all attention heads. Similar to Rikters et al. [26], we noticed that averaging attention over all layers is not meaningful, since almost all source words are connected to all target words. Using one of the first layers showed similar results. The final layer showed better alignment; however, the last source token received too much attention compared to other words. The second-to-last layer instead showed an alignment between source and target words similar to that available for the LSTM model. Therefore, we use this layer as a compromise in our Attention View and for the calculation of metric values.

Figure 9: Document View showing corrected translations and changes to the initial machine-generated translations.

Since there are different approaches and architectures developed for NMT, we could incorporate them as well (R6). Some might provide better support in gaining insights into the model and offer different visualization and interaction capabilities. For others, new ways of visualization will have to be investigated.

§ 5 USER STUDY

We conducted an early user study during the development of our approach to evaluate our system's concept. We used a prototype with an LSTM translation model. The system had the same views as described before, but limited features. A group of anonymous visualization and machine learning experts was invited to test our system online with respect to general aspects of visualization, interaction, and usefulness. Our goal was to make sure that our system considered aspects relevant from both the visualization and the machine translation perspective, and to improve our approach. The user study was questionnaire-based and evaluated the effectiveness of the system, the understandability of its visualizations, and the usability of its interaction techniques. A 7-point Likert scale was used. In this study, the German Wikipedia article on autonomous driving (Autonomes Fahren) [40] was available to all participants.
This allowed the participants to explore the phenomena we showed previously. The participants claimed to have good English (mean = 5.1, std. dev. = 0.8) and very good German (mean = 6.2, std. dev. = 1.7) knowledge. While the visualization experts claimed to have rather low knowledge about machine learning (mean: 2.5), the machine learning experts similarly indicated lower knowledge of visualization (mean: 3).

First, participants were introduced to the system with a short overview of the features. Then, they could explore the system freely with no time restriction. Afterward, they were asked to participate in a survey regarding the usefulness of our system and its design choices. Additionally, there were free-text sections for further feedback. We recruited 11 voluntary participants from our university (six visualization experts and five language processing experts).

Table 2: Ratings from our user study for each evaluated view on a 7-point Likert scale; mean and standard deviation values are provided.

| View | Effectiveness | Visualization | Interaction |
|------|---------------|---------------|-------------|
| Metrics View | 5.9 (1.1) | 6.8 (0.4) | 6.1 (0.7) |
| Keyphrase View | 4.4 (1.6) | 6.5 (1.2) | 6.3 (1.1) |
| Beam Search View | 5.6 (1.5) | 6.0 (1.3) | 4.5 (1.8) |
| Attention View | 5.6 (0.8) | 6.2 (1.2) | 5.9 (0.9) |

The general effectiveness of translating a large document containing more than 100 sentences with our approach was rated high (mean = 5.6, std. dev. = 1.0) compared to a small document containing up to 20 sentences (mean = 4.5, std. dev. = 1.6). The results for effectiveness, ease of understanding and intuitiveness of visualizations, and ease of interaction are given in Table 2. The ratings for the visualizations were high for all views. The Metrics View was rated best and additionally had the lowest standard deviation.
Although not all our user study participants were visualization experts, we noticed that even non-experts managed to understand and work with parallel coordinate plots. We conclude that our design choice for the visualization of metrics was appropriate. The ratings for interaction were also very high, but showed more variation. In particular, the interaction for beam search was rated comparatively low and had the highest standard deviation: two language processing participants ranked it very low (1 and 2), and two participants (one from each group) very high (7). This variation might be the result of different learning curves in the different participant groups. Since conducting the user study, we have also improved the interaction in this view. For effectiveness, the Keyphrase View had the lowest rating. We believe the reason is that participants were not able to detect enough mistranslated sentences with this view. However, this might be due to the document we provided, and may differ for other documents containing more domain-specific vocabulary, as we showed in our case study.

In addition, we asked users for general feedback on our approach. The Metrics View in particular received positive feedback. Participants mentioned that it is useful for quickly detecting mistranslations through brushing and linking. For the Beam Search View, one participant noted that the provided alternatives would speed up the correction of translations. For one participant, the Attention View was useful in showing the differences in sentence structure between languages. Negative feedback was mostly related to interaction and specific features; some participants suggested new features. Multiple participants noted that the exploration and correction of long sentences is challenging in the Beam Search View, as the size of the viewport is limited. Furthermore, a feature to delete individual words and functionality for freezing areas were suggested.
Based on the remaining feedback, we have already added, for example, an undo function for the sentence views. Also, to find sentences that might contain similar errors, one participant recommended showing sentences similar to a selected sentence, and we added a respective metric. Additionally, it was mentioned that confidence scores could be shown in the document list next to each sentence, and not only in the Metrics View. This would be helpful for quickly examining the confidence value even if the document is sorted by a different metric (e.g., document order); small histograms were added next to each sentence as a quick quality overview.

§ 6 DISCUSSION AND FUTURE WORK

To conclude, we presented a visual analytics approach for exploring, understanding, and correcting translations created by NMT. Our approach supports users in translating large domain-specific documents with interactive visualizations in different views, and allows sentence correction in real time as well as model adaptation.

Our qualitative user study results showed that our visual analytics system was rated positively regarding effectiveness, interpretability of visualizations, and ease of interaction. The participants mastered the translation process well with our selected visualizations. In particular, our choice of parallel coordinate plots for the visualization of multiple metrics, and the related brushing-and-linking interaction techniques, were rated positively. There was a clear preference for our approach when translating large documents compared to a traditional text-based approach. Right now, users have to rely on metrics to decide with which sentence to start correcting the translations. More research has to be done on better automatic detection of mistranslated sentences. For example, an additional machine learning model could be trained with sentences that were already identified as wrong translations.
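A coverage-style score over the attention matrix, of the kind mentioned earlier for surfacing under-translations, can be illustrated with a small sketch. The exact formula and the flagging threshold below are illustrative assumptions, not the system's actual definitions:

```python
import math

def coverage_score(attention, eps=1e-6):
    """Coverage-style score over an attention matrix (rows: target steps,
    columns: source tokens). Source tokens whose accumulated attention stays
    near zero were likely ignored, a symptom of under-translation."""
    n_src = len(attention[0])
    # total attention mass each source token received across all target steps
    totals = [sum(row[j] for row in attention) for j in range(n_src)]
    # 0 when every source token received full attention mass; increasingly
    # negative the more source tokens were skipped
    return sum(math.log(min(1.0, t) + eps) for t in totals) / n_src

def flag_under_translation(attention, threshold=-1.0):
    """Flag a sentence for review when its coverage score falls below `threshold`.

    The threshold is a hypothetical default chosen for illustration."""
    return coverage_score(attention) < threshold
```

A sentence whose third source token receives no attention at all, for instance, would be flagged, while a cleanly aligned sentence would not; such flags could seed the proposed mistranslation-detection model with training examples.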
We believe our system is useful for people who have to deal with large documents and could benefit from the features of interactive sentence correction and domain adaptation. Comparing the use of our approach with the LSTM and the Transformer architecture showed almost no difference; for both, we could successfully and interactively improve the translation quality of documents and see model-specific information. We argue that our general translation and visualization process can also be used with further models, although in such cases some visualization views might need limited adaptation.

## Library of Apollo: A Virtual Library Experience in your Web Browser

Figure 1: A wide-angle shot of the shelves. The user can scroll the shelves left and right, look around using their mouse, and click and engage with the books, signage and panels.
## Abstract

Research libraries that house a large number of books organize, classify and lay out their collections using increasingly rigorous classification systems. These classification systems make relations between books physically explicit within the dense library shelves, and allow for fast, easy and high-quality serendipitous book discovery. We suggest that online libraries can offer the same browsing experience and serendipitous discovery that physical libraries do. To explore this, we introduce the Virtual Library, a virtual-reality experience which aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. Library of Apollo is an infinitely scrollable array of shelves holding 9.4 million books distributed over 200,000 hierarchical Library of Congress Classification categories, the most common library classification system in the U.S. Users can jump between books and categories by searching, by clicking the subject tags on books, and by navigating the classification tree via simple collapse and expand options. An online deployment of our system, together with user surveys and collected session data, showed a strong preference for our system for book discovery, with 41 of 45 surveyed users saying they are positively inclined to recommend it to a bookish friend.

Index Terms: Human-centered computing

## 1 INTRODUCTION

Book exploration is a fundamental part of every reader's life. Libraries often play a unique role in this exploratory process. Large collections are classified, organized and laid out through the use of continuously updated, rigorous and systematic classification systems, such as the Library of Congress Classification (LCC) system in the U.S. Under the LCC system, books are uniquely classified into a tree of classifications, and are additionally tagged with one or more standardized subject headings.
These categorization systems make relationships between books physically explicit, putting each book in the context of every other book within the same and nearby shelves, adjacent in subject, authorship, time and geography [1, 2]. This allows for natural, fast and high-quality serendipitous book discoveries.

Online library interfaces are generally search-result based, and this takes away from the valuable serendipitous discoveries that are often made in physical libraries through browsing [3-5]. In this paper, we introduce a Virtual Library implementation, Library of Apollo, which aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. Library of Apollo is an infinitely scrollable array of bookshelves holding 9.4 million books distributed over 200,000 hierarchical Library of Congress Classification (LCC) categories. Users can jump between books and categories by targeted search, by clicking the subject tags on books, and by navigating the classification tree via simple collapse and expand options.

The main contributions of this paper are two-fold. First, we contribute the design and implementation of the Virtual Library system, which replicates a physical library's collection outline and categorization features while adapting them to a web browser setting controlled via mouse and keyboard. We discuss the improvements we made to increase internal cross-connectivity, ease of use and navigability, along with the technical details necessary to serve a large and organized collection of books virtually. Second, we deployed this system publicly on the Internet at loapollo.com, soliciting survey feedback from visitors. Our analysis of the survey data and server logs reveals changing behaviours around physical libraries and bookstores due to the ongoing COVID-19 pandemic, and visitors' preferences for using the Virtual Library system.
## 2 RELATED WORK

### 2.1 Library of Congress Classification System

Different classification systems are used for different purposes in different parts of the world. As we have focused on large collections, such as those hosted by research and academic libraries, we decided to use the subject-based, hierarchical Library of Congress Classification system (LCC) [1], currently one of the most widely used library classification systems in the world [6].

Figure 2: Search results for 'a tale of two cities', showing both shelf and book results. (A) Shelf results list LCC classes that are search-hits, while book results are hits for titles, authors and subject headings of books. The terms that led to the search-hits are highlighted. (B) Hovering over the 'Under Shelf' portion of book results expands the shelf hierarchy.

LCC divides all knowledge into twenty-one basic classes, each identified by a single letter. These are then further divided into more specific subclasses, identified by two- or three-letter combinations (class N, Art, has subclasses NA, Architecture; NB, Sculpture; etc.).
Each subclass includes a loosely hierarchical arrangement of the topics pertinent to the subclass, going from the general to the more specific, often also specifying form, place and time [1, 6]. LCC grows with human knowledge and is updated regularly by the Library of Congress [6].

### 2.2 Book Browsing and Serendipity in Libraries

Information seeking in physical libraries takes two forms: search for known items and browsing. Patrons may begin with a search for a known item, but through undirected browsing of nearby items, often discover new and unknown books serendipitously [7]. Even though patrons could conceivably find interesting items in completely disarranged stacks, libraries aim to ease this fundamental browsing process by cataloguing, classifying and shelving books according to some classification system [2], putting like near like, deemed so through their topics, subtopics, authorship, form, place and time.

Library shelves are meticulously organized to encourage serendipity [7, 8], and call numbers themselves are markers that indicate the semantic content of shelved items [9]. There is some quantified evidence about the neighbour effects created by the Dewey Decimal and Library of Congress Classification systems: a nearby book being loaned increases the probability of a subsequent loan by at least 9-fold [10], indicating that browsing is happening regularly and effectively within these classification systems. There is also strong spoken and anecdotal preference for physical library shelf browsing [3, 11], and users bemoan the lack of opportunity to do so online [3-5].

More recently, McKay et al.
detailed the actions users take during library browsing, such as reading signs, scanning, and the number of books, shelves and bays touched and examined, indicating an idiosyncratic browsing behaviour for each user as well as a high success rate, with 87% of users leaving their browsing session with one or more books [12]. These results suggest a unique and creative character to browsing within the general scope of information retrieval.

### 2.3 Book Browsing and Serendipity in the Digital Age

Even though libraries have been amongst the earliest adopters of computers, digitization and online access [13], and there has been a clearly expressed sentiment favoring browsing [3-5, 11], there has been little commensurate effort to carry the physical shelf browsing experience to online libraries. Online library interfaces are almost universally search-result based, and we know the vast majority of users only see up to ten search results [14], and only interact with one or two in any depth for further information [15]. These small numbers are hardly conducive to browsing and serendipitous discovery. The recommender systems used by online vendors are generally good at surfacing popular books, tastes and genres, while they often fail to provide the novel range of selections necessary for browsing and serendipity [16].

There is some novel recent work aimed at facilitating serendipitous browsing in libraries [17, 18]. The Bohemian Bookshelf [17] aimed to encourage serendipity by creating five interlinked visualizations over its collection, based on books' covers, sizes, keywords, authorships and timelines.
This playful approach was deployed on a touch kiosk in a library and was received very positively, but the collection used for implementation and testing was limited to only 250 books, a number too small to be indicative of performance over larger collections, where some of the employed visualizations could become too cluttered and complex to navigate.

The Blended Shelf [18] carried the physical shelves over to very large touch displays that offered 3D visualizations of a library collection of 2.1 million books, conforming to the standard classification used by that library. The 3D shelves are draggable by swipe, can be searched and reordered, and the classifications can be navigated via a breadcrumb-cued input mechanism. However, there are no reported deployments, tests or user studies regarding their implementation.

We have taken a reality-based presentation approach, creating a 3D online library of 9.4 million books on infinitely scrollable shelves that conform to the LCC ordering, and that is also navigable through search, cross-connected subjects, and a navigable classification tree.

## 3 DESIGN AND IMPLEMENTATION

We have regarded the strongly expressed preference for physical shelf browsing [3-5, 11], together with the long-cultivated art of library classification systems [1], as the guiding principles of our design process, and aimed to bring the physical library experience online while making improvements to enhance the connectedness and navigability of the served online collection.

### 3.1 Designing to Faithfully Recreate Physical Libraries

We had two conflicting design interests. First, we wanted to design and deploy a virtual library application at scale, available to everyone on web browsers. Second, we also wanted this to be as close to a physical library experience as possible.
Naïvely optimizing for the latter would have meant building an online replica of a huge library in 3D, which would then be navigated by mouse and keyboard through some combination of user motion and teleportation. However, that would have been an alien set of interactions to most people, hard to navigate, and possibly even clunky.

Figure 3: (A) Clicks on shelved books bring up synoptic book panels. The elements in the subject heading table are clickable. (B) The search results after the "church architecture" subject heading was clicked. The hits on subject headings are highlighted with shades of green.

Instead, we extracted the most crucial part of the book browsing experience from the physical library: the shelves. The user, essentially the camera, is placed in front of a set of floating shelves, and is allowed 20 degrees of camera motion left and right, and 10 degrees up and down, achievable with a mouse or a trackpad. A wide-angle view of the shelves is seen in Figure 1. The user can "scroll" the shelves left and right as slowly or as quickly as they want via the regular scroll gesture on their trackpads, scroll-wheels on mice, or arrow keys on keyboards (Video Figure).

Figure 4: (A) The scroll-view of class hierarchies.
Hierarchies can be collapsed by clicking any of the intermediate elements. (B) The collapsed hierarchies after the click at A. The hierarchies can be scrolled up and down, or expanded by clicking their tag panels if they contain child categories.

Books on the virtual shelves are arranged in order according to their LCC classifications, just as they would be in a real library. There are also panels above the shelves showing LCC hierarchies, indicating where the user is within the library. The panels in the center indicate the user's current location, while the left and right panels respectively indicate the previous and next classifications, and thus the content of the shelves that await the user in those directions. A click on the left or right panels scrolls the shelves to bring those classes in front of the user. A click on the center panels opens the LCC classification hierarchy, as seen in Figure 4. This list can be navigated by scrolling up and down, as well as by clicks to expand and collapse it. Clicking on the end portion of one of these hierarchies brings that class and its associated books in front of the user.

The user can start typing at any time to bring up the search bar, which can be used to search over titles, authors, subjects and LCC classifications. If any book or classification is selected through the search results, the user is transported to the shelf containing the selected book. The books are also sized according to their real dimensions, page numbers and volumes, as seen in Figure 1.

### 3.2 Building a Familiar and Expressive Search Bar

Single-input search bars are ubiquitous, and users are intimately familiar with them. We utilized the search bar as an entry point into our library by designing it as a catch-all input system that searches over book titles, authors, subject headings of books, and LCC categories.
Our search provides a way for users to enter the hierarchically organized stacks by first helping them identify a relevant pool of items they might be interested in.

Figure 2 shows shelf (class) and book search hits for the query 'a tale of two cities'. Shelf results show the LCC hierarchy from top to bottom, together with the class numbers and the number of books listed under each class. Book results list titles, authors and publishers, together with a table showing the subject headings listed for each book. To the right of each book result, one can also see under which shelf (class) that book can be found; hovering over that section expands the shelf hierarchy, as seen on the right side of Figure 2. The 'Under Shelf' portions are always colored to clearly designate shelf hierarchies. The search results are not paginated, and can be "infinitely" scrolled downwards until the end of the results.

### 3.3 Engineering a Smooth Browsing Experience

Clicking on a search result, for both books and shelves, zooms the user into a set of floating shelves (Figure 1). The searched-for book appears on the centre shelf and is briefly highlighted as an indicator of its position. The books on the shelves conform to the LCC sorting and appear as they would in a regular library. One other key feature we have added is browsing over the LCC classification hierarchy, as seen in Figure 4. This scroll-view of classification hierarchies is opened by clicking on the centered LCC panels. A listed hierarchy can be expanded or collapsed by clicking on different parts of the hierarchy. Clicking on any panel within a hierarchy collapses that hierarchy down to that panel, reducing the number of hierarchies in the scroll-view. Clicking on the name-tag panel of a hierarchy expands that hierarchy. Clicking on the last panel of a hierarchy transports the user to the beginning of that hierarchical class, with the first book of that class highlighted on the shelves.
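The collapse-and-expand navigation over the classification hierarchy described above can be sketched with a simple tree structure. The class and method names below are illustrative assumptions, not the deployed system's data model:

```python
class LCCNode:
    """A collapsible node of a classification hierarchy (illustrative sketch)."""

    def __init__(self, code, label, children=None):
        self.code, self.label = code, label
        self.children = children or []
        self.expanded = False

    def toggle(self):
        """Collapse or expand this node, as a click on its panel would."""
        self.expanded = not self.expanded

    def visible_rows(self, depth=0):
        """Flatten the tree into the rows the scroll-view would display."""
        rows = [(depth, f"{self.code} {self.label}")]
        if self.expanded:
            for child in self.children:
                rows.extend(child.visible_rows(depth + 1))
        return rows
```

For example, a collapsed class N (Art) node renders as a single row; toggling it reveals its NA and NB subclasses indented one level deeper, mirroring the clicks on the name-tag and intermediate panels.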
Another addition to the browsing experience comes through the improved connectivity offered by search over the subject headings of books. Book clicks bring up synoptic windows, as seen in part 1 of Figure 3. The colored subject tables are filled with Library of Congress Subject Headings (LCSH), which we have populated for every book from a dataset maintained by domain experts at the Library of Congress [1]. These LCSH headings are clickable and trigger a search over the entire dataset's indexed LCSH fields. Part 2 of Figure 3 shows search results for an LCSH field search; notice that these books can come from entirely different shelves and classes, often very distant from each other, thus providing another way to jump between classes and books. This allows users to transcend the distance imposed by LCC in physical libraries through a subject-based search method, which allows for dynamic links between books. We believe this feature goes a long way toward satisfying users' expressed desire to occasionally see distant books [5].

### 3.4 Providing Access to Digitized and Physical Books

Another compelling feature of real-world libraries is in-depth browsing of individual books. A library-goer can pick up any book and read until their curiosity is sated. In order to provide a similar experience, when a user clicks on a shelved book, we provide a single-page overview. Part 1 of Figure 3 shows an example of this page, which displays the book's title, authorship, publishing house, colored subject headings, physical information regarding extent and dimensions, and links to look up the same book on Amazon and Goodreads, as well as a Peek Inside button that is enabled when there is a free digitized version available on OpenLibrary (which houses over 3 million books, or around 30% of the total collection).
A single click on this button opens a new tab with an online reader showing the contents of the book, as seen in Figure 5, providing an in-depth reading experience. The Amazon and Goodreads buttons provide easy access to purchase and social reading options, respectively. + +Figure 5: The "Peek Inside" button on book panels opens an e-reader that uses OpenLibrary's digitized book archive. + +Figure 6: The cloud architecture of Library of Apollo. + +## 4 ONLINE DEPLOYMENT AND SURVEY RESULTS + +### 4.1 The Front-end and the Cloud Architecture + +We have developed the front-end of our application using Unity WebGL. The compiled WebAssembly, JavaScript and HTML files are hosted on the AWS S3 service and served through AWS's CDN, CloudFront. The books are sorted according to the LCC sorting scheme [1, 6] and clustered into JSON files that contain 500 books each, gzipped and stored in S3. These data files are fetched when the user is browsing the shelves, and pre-fetched during scrolling when the user is near the end of a cluster. + +The same data that is stored in S3 is indexed on AWS CloudSearch to power the search functionalities of our app. All search requests are performed through the REST API we have developed on AWS API Gateway. Search results are populated from the data returned by CloudSearch. For analytics, user click-data is also recorded in DynamoDB through the same gateway. The search requests to OpenLibrary's digitized books dataset are made in a similar fashion. Our architecture is summarized in Figure 6.
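As a concrete sketch of the clustering step described above, the snippet below splits an LCC-sorted list of book records into gzipped JSON files of 500 books each. It is a minimal illustration under our own assumptions; the file-naming scheme and record fields are hypothetical, not the exact pipeline used for the deployment.

```python
import gzip
import json
import os
from itertools import islice

CLUSTER_SIZE = 500  # books per cluster file, as in the deployed system


def write_clusters(sorted_books, out_dir):
    """Write LCC-sorted book records into gzipped JSON cluster files.

    `sorted_books` is any iterable of JSON-serializable records already in
    shelf order; returns the list of file paths written.
    """
    os.makedirs(out_dir, exist_ok=True)
    it = iter(sorted_books)
    paths = []
    # islice consumes CLUSTER_SIZE records at a time; iteration stops once
    # the sentinel (an empty chunk) is produced.
    for i, chunk in enumerate(iter(lambda: list(islice(it, CLUSTER_SIZE)), [])):
        path = os.path.join(out_dir, f"cluster_{i:05d}.json.gz")
        with gzip.open(path, "wt", encoding="utf-8") as f:
            json.dump(chunk, f)
        paths.append(path)
    return paths
```

A browsing client can then fetch cluster *i* on demand and pre-fetch cluster *i+1* once the user scrolls near the end of the current cluster.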
+ +### 4.2 The Deployed Dataset + +Courtesy of the Library of Congress [19], through their MARC Open-Access program, we were able to put together 9.4 million books, complete with their LCSH and LCC data, distributed over 200,000 LCC classes. We pre-processed this data to serve our specific needs, and stored and indexed it for use within our virtual library application as described above. Despite the size of the dataset, the website is very smooth to use and inexpensive to operate - monthly operational costs are less than \$40 USD. + +### 4.3 Deployment and Recruitment + +We deployed our library at a publicly accessible address, loapollo.com, and seeded links to the project via Reddit and word-of-mouth. Over a span of two weeks, over two hundred unique users visited the site. Each visitor to the library was assigned a randomly-generated persistent user ID, to track repeated visits, as well as a session ID for any given visit. The Library front-end automatically logged click interactions with the system in a DynamoDB table for later introspection. + +Of the 224 unique visitors, 21 (9.4%) visited more than once, with one user visiting the site a total of five times over the course of two weeks. Users interacted for an average of 162 seconds, and produced an average of 8 click interactions; notably, however, both distributions are particularly long-tailed. While some users only visited for very brief moments (often just a single search, followed presumably by some scrolling), 12 users used it for over ten minutes each, and likewise 20 users generated over 20 click events each.
Figure 7: The change in reading habits due to the COVID-19 pandemic. + +### 4.4 Survey + +When users clicked on any book, a survey link was displayed in the upper-right corner, ensuring that users had at least minimally interacted with the system before taking the survey. Users who accessed the link were invited to fill out a short survey with two major parts: a first section which asked about their existing (pre- and post-pandemic) book-browsing habits, and a second section which asked about their experiences with the library. Finally, users could provide open-ended feedback about their experience with the Library. The full survey design can be found in the Supplemental Materials. + +We received a total of 46 survey responses. Two responses were discarded: one filled out '1's for every single question, and another was a clear duplicate submission. Participants reported reading an average of 13.4 books per year (SD 15.2; participants reported anywhere from 1 book per year to 80 books per year). + +In the first part of the survey (Figure 7), the participants did not report significantly changing their reading habits (mean 2.9), but evidently had a significant shift in book-browsing habits due to the pandemic. Library usage decreased significantly (pre-pandemic mean 2.36, post-pandemic mean 1.54), as did bookshop usage (pre-pandemic mean 2.98, post-pandemic mean 1.45), whereas online browsing significantly increased (pre-pandemic mean 3.14, post-pandemic mean 3.86).
+ +In the second part of the survey (Figure 8), the users generally expressed a strong preference for the Library, with over 80% (36/44) of users being "somewhat likely" or more to come back, and over 90% of users (41/44) being "somewhat likely" or more to recommend it to a friend. Users generally felt that it did replicate the feel of real libraries and bookshops to some extent, with over half (25/44) noting that it felt similar or very similar. Similarly, around half of participants found it "easier" or "much easier" to find books compared to bookshops and libraries (23/44) and compared to online destinations (21/44), and felt that it contained "more" or "significantly more" new and interesting books compared to bookshops and libraries (23/44) and compared to online destinations (24/44). Overall, survey respondents were very positive about the library and its major features. + +## 5 Discussion + +Through our online deployment and survey, we were able to gather valuable feedback from readers and book browsers. The open-feedback field yielded several useful insights from users. + +Figure 8: The user perception of the library's navigability, ease of use, browsing quality and overall quality. + +User Interface: Three users wished for a mobile version in the feedback, with one noting "Mobile version must be created and [published] on App stores immediately", indicating the modern importance of smartphone-friendly interfaces. Indeed, although our interface does work on some mobile browsers, it does not work on many older browsers due to a lack of WebGL support.
We expect this limitation will improve in the future as newer devices generally do support WebGL. Two users commented that they would have liked to change the colour scheme: one user remarked that it was too dark, while another user wanted a "night mode" to make it even darker. + +Search Performance: Our project primarily focused on book-browsing, rather than search, and consequently our search feature is much simpler than, e.g., Google's or Amazon's search functionality. Four users noted that search could be improved: providing better search capabilities (such as advanced search by author/title/subject fields), improving the presentation of results (e.g. moving favorite or recommended books to the top), improving the discoverability of category or tag search, and ordering works by popularity or relevance. Recommendations, in particular, are interesting as they must function differently in a public library setting (with limited or no access to prior user data) compared with companies like Amazon, which have vast access to prior user preferences via tracking and search history. + +Browsing Capabilities: Users generally praised the browsing and search features of the Library. One user noted that they were able to find "5 books on music theory in 5 minutes", while another noted that "It has everything I am looking for". Users also had suggestions for improving the experience further: one commented that they would have liked even more physicality in the form of different rooms/areas to browse around in, while another suggested a downloadable version for organizing their own books. + +## 6 CONCLUSION + +We have presented the design of the Library of Apollo, a virtual-reality library implemented in a web-browser application and designed to support library-like browsing and discovery.
Our system was deployed publicly and attracted over two hundred visitors and 44 survey responses; our survey results suggested high user affinity for the book-browsing experience and library capabilities. Through the combination of features that we designed, we have built a virtual library which lends itself to serendipitous browsing and discoveries across a large, connected book dataset. + +## ACKNOWLEDGMENTS + +Anonymous for review. + +## REFERENCES + +[1] Lois Mai Chan, Sheila S Intner, and Jean Weihs. Guide to the Library of Congress classification. ABC-CLIO, 2016. + +[2] Jim LeBlanc. Classification and shelflisting as value added: Some remarks on the relative worth and price of predictability, serendipity, and depth of access. Library resources & technical services, 39(3):294-302, 1995. + +[3] Dana McKay. Gotta keep 'em separated: Why the single search box may not be right for libraries. In Proceedings of the 12th Annual Conference of the New Zealand chapter of the ACM special interest group on computer-human interaction, pages 109-112, 2011. + +[4] Stephann Makri, Ann Blandford, Jeremy Gow, Jon Rimmer, Claire Warwick, and George Buchanan. A library or just another information resource? A case study of users' mental models of traditional and digital libraries. Journal of the American Society for Information Science and Technology, 58(3):433-445, 2007. + +[5] Annika Hinze, Dana McKay, Nicholas Vanderschantz, Claire Timpany, and Sally Jo Cunningham. Book selection behavior in the physical library: implications for ebook collections. In Proceedings of the 12th ACM/IEEE-CS joint conference on Digital Libraries, pages 305-314, 2012. + +[6] Library of Congress Classification. https://www.loc.gov/catdir/cpso/lcc.html. Accessed: 2021-04-06. + +[7] Elizabeth B Cooksey. Too important to be left to chance-serendipity and the digital library. Science & technology libraries, 25(1-2):23-32, 2004. + +[8] Geoffrey C Bowker and Susan Leigh Star.
Sorting things out: Classification and its consequences. MIT press, 2000. + +[9] Elaine Svenonius. The intellectual foundation of information organization. MIT press, 2000. + +[10] Dana McKay, Wally Smith, and Shanton Chang. Lend me some sugar: Borrowing rates of neighbouring books as evidence for browsing. In IEEE/ACM Joint Conference on Digital Libraries, pages 145-154. IEEE, 2014. + +[11] Hanna Stelmaszewska and Ann Blandford. From physical to digital: a case study of computer scientists' behaviour in physical libraries. International Journal on Digital Libraries, 4(2):82-92, 2004. + +[12] Dana McKay, Shanton Chang, and Wally Smith. Manoeuvres in the dark: Design implications of the physical mechanics of library shelf browsing. In Proceedings of the 2017 Conference on Conference Human Information Interaction and Retrieval, pages 47-56, 2017. + +[13] Shiao-Feng Su. Dialogue with an OPAC: How visionary was Swanson in 1964? The Library Quarterly, 64(2):130-161, 1994. + +[14] Amanda Spink, Dietmar Wolfram, Major BJ Jansen, and Tefko Saracevic. Searching the web: The public and their queries. Journal of the American Society for Information Science and Technology, 52(3):226-234, 2001. + +[15] Micheline Hancock-Beaulieu. Evaluating the impact of an online library catalogue on subject searching behaviour at the catalogue and at the shelves. Journal of Documentation, 1990. + +[16] Jonathan L Herlocker, Joseph A Konstan, Loren G Terveen, and John T Riedl. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS), 22(1):5-53, 2004. + +[17] Alice Thudt, Uta Hinrichs, and Sheelagh Carpendale. The bohemian bookshelf: supporting serendipitous book discoveries through information visualization. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1461-1470, 2012. + +[18] Eike Kleiner, Roman Rädle, and Harald Reiterer. Blended shelf: reality-based presentation and exploration of library collections.
In CHI'13 extended abstracts on human factors in computing systems, pages 577-582. 2013. + +[19] Library of Congress. Marc open-access, marc distribution services, mdsconnect, 2021. data retrieved from Library of Congress MARC Distribution Services, https://www.loc.gov/ cds/products/marcDist.php. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ERb8dfghQKX/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ERb8dfghQKX/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..1f53b9c115c8718de099724e894ecab344b05971 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ERb8dfghQKX/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,151 @@ +§ LIBRARY OF APOLLO: A VIRTUAL LIBRARY EXPERIENCE IN YOUR WEB BROWSER + + < g r a p h i c s > + +Figure 1: A wide-angle shot of the shelves. The user can scroll the shelves left and right, look around using their mouse and click & engage with the books, signage and panels. + +§ ABSTRACT + +Research libraries that house a large number of books organize, classify and lay out their collections by using increasingly rigorous classification systems. These classification systems make relations between books physically explicit within the dense library shelves, and allow for fast, easy and high quality serendipitous book discovery. We suggest that online libraries can offer the same browsing experience and serendipitous discovery that physical libraries do. 
To explore this, we introduce the Virtual Library, a virtual-reality experience which aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. Library of Apollo is an infinitely scrollable array of shelves that has 9.4 million books distributed over 200,000 hierarchical Library of Congress Classification categories, the most common library classification system in the U.S. Users can jump between books and categories by search, by clicking on the subject tags on books, and by navigating the classification tree through simple collapse and expand options. An online deployment of our system, together with user surveys and collected session data, showed a strong user preference for our system for book discovery, with 41/45 of users saying they are positively inclined to recommend it to a bookish friend. + +Index Terms: Human-centered computing + +§ 1 INTRODUCTION + +Book exploration is a fundamental part of every reader's life. Libraries often play a unique role in this exploratory process. Large collections are classified, organized and laid out through the use of continuously-updated, rigorous and systematic classification systems, such as the Library of Congress Classification (LCC) system in the U.S. Under the LCC system, books are uniquely classified into a tree of classifications, and are additionally tagged with one or more standardized subject headings. + +These categorization systems make relationships between books physically explicit, putting each book in the context of every other book within the same and nearby shelves that have similar books, adjacent in subjects, authorship, time and geography [1,2]. This allows for natural, fast and high-quality serendipitous book discoveries.
+ +Online library interfaces are generally search-result based, and this takes away from the valuable serendipitous discoveries that are often made in physical libraries through browsing [3-5]. In this paper, we introduce a Virtual Library implementation, Library of Apollo, which aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. Library of Apollo is an infinitely scrollable array of bookshelves that has 9.4 million books distributed over 200,000 hierarchical Library of Congress Classification (LCC) categories. Users can jump between books and categories by targeted search, by clicking on the subject tags on books, and by navigating the classification tree through simple collapse and expand options. + +The main contributions of this paper are two-fold. First, we contribute the design and implementation of the Virtual Library system, which replicates a physical library's collection outline and categorization features while adapting it to a web-browser setting that is controlled via mouse and keyboard. We discuss the improvements we made to increase the internal cross-connectivity, ease-of-use and navigability, along with the technical details necessary to serve a large and organized collection of books virtually. Second, we deployed this system publicly on the Internet at loapollo.com, soliciting survey feedback from visitors. Our analysis of the survey data and server logs reveals changing behaviours around physical libraries and bookstores due to the ongoing COVID-19 pandemic, and visitors' preferences for using the Virtual Library system.
+ +§ 2 RELATED WORK + +§ 2.1 LIBRARY OF CONGRESS CLASSIFICATION SYSTEM + +There are different classification systems used for different purposes in different parts of the world, and as we have focused on large collections as those hosted by research and academic libraries, we decided to use the subject-based, hierarchical Library of Congress Classification system (LCC) [1], currently one of the most widely used library classification systems in the world [6]. + + < g r a p h i c s > + +Figure 2: Search results for ’a tale of two cities’, showing both shelf and book results. (A) Shelf results list LCC classes that are search-hits, while book results are hits for titles, authors and subject headings of books. The terms that led to the search-hits are highlighted. (B) Hovering over the 'Under Shelf' portion of book results expands the shelf hierarchy. + +LCC divides all knowledge into twenty-one basic classes, each identified by a single letter. These are then further divided into more specific subclasses, identified by two or three letter combinations (class N, Art, has subclasses NA, Architecture; NB, Sculpture etc.). Each subclass includes a loosely hierarchical arrangement of the topics pertinent to the subclass, going from the general to the more specific, often also specifying form, place and time [1,6]. LCC grows with human knowledge and is updated regularly by the Library of Congress [6]. + +§ 2.2 BOOK BROWSING AND SERENDIPITY IN LIBRARIES + +Information seeking in physical libraries takes two forms: search of known items and browsing. Patrons may begin with a search of a known item, but through undirected browsing of nearby items, often discover new and unknown books serendipitously [7]. 
Even though patrons could conceivably find interesting items in completely disarranged stacks, libraries aim to ease this fundamental browsing process by cataloguing, classifying and shelving books according to some classification system [2], putting like near like, deemed so through their topics, subtopics, authorship, form, place and time. + +Library shelves are meticulously organized to encourage serendipity [7, 8], and call numbers themselves are markers that indicate the semantic content of shelved items [9]. There is some quantified evidence about the neighbour effects created by the Dewey Decimal and Library of Congress Classification systems: a nearby book being loaned increases the probability of a subsequent loan by at least 9-fold [10], indicating that browsing is happening regularly and effectively within these classification systems. There is also strong spoken and anecdotal preference for physical library shelf browsing [3, 11], and users bemoan the lack of opportunity to do so online [3-5]. + +More recently, McKay et al. detailed the actions taken by users during library browsing, such as reading signs and scanning, and the number of books, shelves and bays touched and examined, indicating an idiosyncratic browsing behaviour by each user as well as a high success rate, with 87% of users leaving their browsing session with one or more books [12]. These results suggest a unique and creative character to browsing within the general scope of information retrieval.
Online library interfaces are almost universally search-result based, and we know the vast majority of users only see up to ten search results [14], and only interact with one or two in any depth for further information [15]. These small numbers are hardly conducive to browsing and serendipitous discovery. The recommender systems used by online vendors are generally good at surfacing popular books, tastes and genres, while they often fail to provide the novel ranges of selections necessary for browsing and serendipity [16]. + +There is some novel recent work that has been developed to facilitate serendipitous browsing in libraries [17, 18]. The Bohemian Bookshelf [17] aimed to encourage serendipity by creating five interlinked visualizations over its collection, based on books' covers, sizes, keywords, authorships and timelines. This playful approach was deployed on a touch kiosk in a library and was received very positively, but the collection size used for implementation and testing was limited to only 250 books, a number too small to be indicative of performance over larger collections, where some of the employed visualizations could become too cluttered and complex to navigate. + +The Blended Shelf [18] carried the physical shelves over to very large touch displays that offered 3D visualizations of a library collection of 2.1 million books, conforming to the standard classification used by that library. The 3D shelves were draggable by swipe, could be searched and reordered, and the classifications could be navigated via a breadcrumb-cued input mechanism. However, no deployments, tests or user studies of this implementation have been reported. + +We have taken a reality-based presentation approach, creating a 3D online library of 9.4 million books on infinitely scrollable shelves that conform to the LCC ordering, and that is also navigable through search, cross-connected subjects, and a navigable classification tree.
+ +§ 3 DESIGN AND IMPLEMENTATION + +We have regarded the strongly expressed preference for physical shelf browsing [3-5, 11], together with the long-cultivated art of library classification systems [1], as the guiding principles of our design process, and aimed to bring the physical library experience online while making improvements to enhance the connectedness and navigability of the served online collection. + +§ 3.1 DESIGNING TO FAITHFULLY RECREATE PHYSICAL LIBRARIES + +We had two conflicting design interests. First, we wanted to design and deploy a virtual library application at scale, available to everyone on web browsers. Second, we also wanted this to be as close to a physical library experience as possible. Naïvely optimizing for the latter would have meant building an online replica of a huge library in 3D that would then be navigated by mouse and keyboard through some combination of user motion and teleportation. However, that would have been an alien set of interactions for most people: hard to navigate and possibly even clunky. + + < g r a p h i c s > + +Figure 3: (A) Clicks on shelved books bring up synoptic book panels. The elements in the subject heading table are clickable. (B) The search results after the "church architecture" subject heading was clicked. The hits on subject headings are highlighted with shades of green. + +Instead, we extracted the most crucial part of the book browsing experience from the physical library: the shelves. The user, essentially the camera, is placed in front of a set of floating shelves, and is allowed 20 degrees of camera motion left and right, and 10 degrees up and down, achievable with a mouse or a trackpad. A wide-angle view of the shelves is seen in Figure 1. The user can "scroll" the shelves left and right as slowly or as quickly as they want via the regular scroll gesture on their trackpads, scroll-wheels on mice or arrow-keys on keyboards (Video Figure).
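Assuming the stated limits are symmetric about the resting view, the constrained look-around reduces to clamping accumulated mouse deltas. The helper below is a hypothetical Python sketch of this logic (the actual front-end is a Unity WebGL build, so names and units here are our own):

```python
def clamp(value, lo, hi):
    """Restrict a value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))


def apply_mouse_look(yaw, pitch, d_yaw, d_pitch,
                     yaw_limit=20.0, pitch_limit=10.0):
    """Accumulate mouse deltas into camera angles (degrees), clamped to
    +/-20 degrees of yaw and +/-10 degrees of pitch as described above."""
    return (clamp(yaw + d_yaw, -yaw_limit, yaw_limit),
            clamp(pitch + d_pitch, -pitch_limit, pitch_limit))
```

A large mouse movement therefore saturates at the limits rather than turning the camera away from the shelves, which keeps the floating shelves in view at all times.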
+ + < g r a p h i c s > + +Figure 4: (A) The scroll-view of class hierarchies. Hierarchies can be collapsed by clicking any of the intermediate elements. (B) The collapsed hierarchies after the click at A. The hierarchies can be scrolled up and down, or expanded by clicking their tag panels if they contain child categories. + +Books on the virtual shelves are arranged in order according to their LCC classifications, just as they would be in a real library. There are also panels above the shelves showing LCC hierarchies, indicating where the user is within the library. The panels in the center indicate the user's current location, while the left and right panels respectively indicate the previous and next classifications, and thus the content of the shelves that await the user in those directions. A click on the left or right panels scrolls the shelves to bring those classes in front of the user. A click on the center panels opens the LCC classification hierarchy, as seen in Figure 4. This list can be navigated by scrolling up and down, as well as by clicks to expand and collapse it. Clicking on the end portion of these hierarchies brings that class and its associated books in front of the user. + +The user can start typing at any time to bring up the search bar, which can be used to search over titles, authors, subjects and LCC classifications. If any book or classification is selected through the search results, the user is transported to the corresponding shelf. The books are also sized according to their real dimensions, page numbers and volumes, as seen in Figure 1.
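Keeping books "in order according to their LCC classifications" presupposes a sortable key for each call number. The sketch below shows one simplified way to derive such a key; real LCC shelflisting has additional rules (decimal cutters, dates, volume numbers), so this is an illustrative approximation rather than the scheme used in our pipeline.

```python
import re


def lcc_sort_key(call_number):
    """Simplified sort key for an LCC call number such as 'NA5461 .H35'.

    Splits the call number into (class letters, class number, remainder) so
    that, e.g., 'NA31' shelves before 'NA5461' and 'N' before 'NA'.
    """
    m = re.match(r"([A-Z]{1,3})\s*(\d+(?:\.\d+)?)?(.*)", call_number.strip())
    if not m:
        return (call_number, 0.0, "")
    letters, number, rest = m.groups()
    return (letters, float(number) if number else 0.0, rest.strip())


shelf = ["NA5461 .H35", "N7433", "NB1140", "NA31 .B6"]
shelf.sort(key=lcc_sort_key)
# Shelf order: N7433, NA31 .B6, NA5461 .H35, NB1140
```

Because the key compares class letters before class numbers, subclasses stay grouped together, which is what makes the left-to-right shelf scroll coherent.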
Our search provides a way for users to enter the hierarchically organized stacks by first helping them identify a relevant pool of items they might be interested in. + +Figure 2 shows shelf (class) and book search hits for the query 'A tale of two cities'. Shelf results show the LCC hierarchy from top to bottom, together with the class numbers and the number of books listed under each class. Book results list titles, authors and publishers, together with a table showing the subject headings listed for each book. To the right of each book result, you can also see under which shelf (class) that book can be found; hovering over that section expands the shelf hierarchy, as can be seen on the right side of Figure 2. The 'under shelf' portions are always colored to clearly designate shelf hierarchies. The search results are not paginated, and can be "infinitely" scrolled downwards until the end of results. + +§ 3.3 ENGINEERING A SMOOTH BROWSING EXPERIENCE + +Clicking on a search result, both for books and shelves, zooms the user into a set of floating shelves (Figure 1). The searched-for book appears on the centre shelf and is briefly highlighted as an indicator of its position. The books on the shelves conform to LCC sorting and appear as they would in a regular library. One other key feature we have added is browsing over the LCC classification hierarchy, as seen in Figure 4. This scroll-view of classification hierarchies is opened by clicking on the centered LCC panels. A listed hierarchy can be expanded or collapsed by clicking on different parts of the hierarchy. Clicking on any panel within a hierarchy collapses that hierarchy down to that panel, reducing the number of hierarchies in the scroll-view. Clicking on the name-tag panel of a hierarchy expands that hierarchy. Clicking on the last panel of a hierarchy transports the user to the beginning of that hierarchical class, with the first book of that class highlighted on the shelves.
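The catch-all matching behind the search bar can be illustrated with a naive in-memory scan. The deployed system delegates this to AWS CloudSearch, so the snippet below is purely a toy sketch with hypothetical field names:

```python
def catch_all_search(query, records):
    """Toy catch-all search over titles, authors, subject headings and LCC
    class names. Records matching more query terms rank higher."""
    terms = [t for t in query.lower().split() if t]
    hits = []
    for record in records:
        # Concatenate every searchable field into one haystack string.
        haystack = " ".join([
            record.get("title", ""),
            record.get("author", ""),
            " ".join(record.get("subjects", [])),
            record.get("lcc_class", ""),
        ]).lower()
        score = sum(1 for t in terms if t in haystack)
        if score > 0:
            hits.append((score, record))
    # Most matched terms first, mimicking relevance-ordered results.
    hits.sort(key=lambda pair: -pair[0])
    return [record for _, record in hits]
```

Because subject headings are part of the haystack, the same function also serves the subject-tag jumps of Sect. 3.4: a click on a heading simply issues that heading as the query.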
+ +Another addition to the browsing experience is the improved connectivity that comes from search over the subject headings of books. Book clicks bring up synoptic windows, as seen in part 1 of Figure 3. The colored subject tables are filled with Library of Congress Subject Headings (LCSH), which we have populated for every book from a dataset maintained by domain experts at the Library of Congress [1]. These LCSH headings are clickable and trigger a search over the entire dataset's indexed LCSH fields. Part 2 of Figure 3 shows search results for an LCSH field search; notice that these books can come from entirely different shelves and classes, often very distant from each other, thus providing another way to jump between classes and books. This allows users to transcend the distance imposed by LCC in physical libraries through a subject-based search method, creating dynamic links between books. We believe this feature goes a long way toward satisfying the desire expressed by users to occasionally see distant books [5]. + +§ 3.4 PROVIDING ACCESS TO DIGITIZED AND PHYSICAL BOOKS + +Another killer feature of real-world libraries is in-depth browsing of individual books. A library-goer can just pick up any book and read until their curiosity is sated. In order to provide a similar experience, when a user clicks on a shelved book, we provide a single-page overview. Part 1 of Figure 3 shows an example of this page, which displays the book's title, authorship, publishing house, colored subject headings, physical information regarding extent and dimensions, and links to look up the same book on Amazon and Goodreads, as well as a Peek Inside button that is enabled when there is a free, digitized version available on OpenLibrary (which houses over 3 million books, or around 30% of the total collection).
A single click on this button opens a new tab with an online reader showing the contents of the book, as seen in Figure 5, providing an in-depth reading experience. The Amazon and Goodreads buttons provide easy access to purchase and social reading options, respectively. + + < g r a p h i c s > + +Figure 5: The "Peek Inside" button on book panels opens an e-reader that uses OpenLibrary's digitized book archive. + + < g r a p h i c s > + +Figure 6: The cloud architecture of Library of Apollo. + +§ 4 ONLINE DEPLOYMENT AND SURVEY RESULTS + +§ 4.1 THE FRONT-END AND THE CLOUD ARCHITECTURE + +We have developed the front-end of our application using Unity WebGL. The compiled WebAssembly, JavaScript and HTML files are hosted on the AWS S3 service and served through AWS's CDN, CloudFront. The books are sorted according to the LCC sorting scheme [1, 6] and clustered into JSON files that contain 500 books each, gzipped and stored in S3. These data files are fetched when the user is browsing the shelves, and pre-fetched during scrolling when the user is near the end of a cluster. + +The same data that is stored in S3 is indexed on AWS CloudSearch to power the search functionalities of our app. All search requests are performed through the REST API we have developed on AWS API Gateway. Search results are populated from the data returned by CloudSearch. For analytics, user click-data is also recorded in DynamoDB through the same gateway. The search requests to OpenLibrary's digitized books dataset are made in a similar fashion. Our architecture is summarized in Figure 6. + +§ 4.2 THE DEPLOYED DATASET + +Courtesy of the Library of Congress [19], through their MARC Open-Access program, we were able to put together 9.4 million books, complete with their LCSH and LCC data, distributed over 200,000 LCC classes.
We pre-processed this data to serve our specific needs, and stored and indexed it for use within our virtual library application as described above. Despite the size of the dataset, the website is very smooth to use and inexpensive to operate - monthly operational costs are less than $40 USD. + +§ 4.3 DEPLOYMENT AND RECRUITMENT + +We deployed our library at a publicly accessible address, loapollo.com, and seeded links to the project via Reddit and word-of-mouth. Over a span of two weeks, over two hundred unique users visited the site. Each visitor to the library was assigned a randomly generated persistent user ID, to track repeated session visits, as well as a session ID for any given visit. The Library front-end automatically logged click interactions with the system in DynamoDB for later introspection. + +Of the 224 unique visitors, 21 (9.4%) visited more than once, with one user visiting the site a total of five times over the course of two weeks. Users interacted for an average of 162 seconds and produced an average of 8 click interactions; notably, however, both distributions are particularly long-tailed. While some users only visited for very brief moments (often just a single search, followed presumably by some scrolling), 12 users used the site for over ten minutes each, and likewise 20 users generated over 20 click events each. + +Figure 7: The change in reading habits due to the COVID-19 pandemic. + +§ 4.4 SURVEY + +When users clicked on any book, a survey link was displayed in the upper-right corner, ensuring that users had interacted with the system at least minimally prior to taking the survey. Users who accessed the link were invited to fill out a short survey with two major parts: a first section which asked about their existing (pre- and post-pandemic) book-browsing habits, and a second section which asked about their experiences with the library.
Finally, users could provide open-ended feedback about their experience with the Library. The full survey design can be found in the Supplemental Materials. + +We received a total of 46 survey responses. Two responses were discarded: one filled out '1's for every single question, and another was a clear duplicate submission. Participants reported reading an average of 13.4 books per year (SD 15.2; participants reported anywhere from 1 to 80 books per year). + +In the first part of the survey (Figure 7), the participants did not report significantly changing their reading habits (mean 2.9), but evidently had a significant shift in book-browsing habits due to the pandemic. Library usage decreased significantly (pre-pandemic mean 2.36, post-pandemic mean 1.54), as did bookshop usage (pre-pandemic mean 2.98, post-pandemic mean 1.45), whereas online browsing significantly increased (pre-pandemic mean 3.14, post-pandemic mean 3.86). + +In the second part of the survey (Figure 8), the users generally expressed a strong preference for the Library, with over 80% (36/44) of users being "somewhat likely" or more to come back, and over 90% of users (41/44) being "somewhat likely" or more to recommend it to a friend. Users generally felt that it did replicate the feel of real libraries and bookshops to some extent, with over half (25/44) noting that it felt similar or very similar. Similarly, around half of participants found it "easier" or "much easier" to find books compared to bookshops, libraries (23/44) and online destinations (21/44), and felt that it contained "more" or "significantly more" new and interesting books compared to bookshops, libraries (23/44) and online destinations (24/44). Overall, survey respondents were very positive about the library and its major features. + +§ 5 DISCUSSION + +Through our online deployment and survey, we were able to gather valuable feedback from readers and book browsers.
In the open feedback field, users generated several useful insights. + +Figure 8: The user perception of the library's navigability, ease of use, browsing quality and overall quality. + +User Interface: Three users wished for a mobile version in the feedback, with one noting "Mobile version must be created and [published] on App stores immediately", indicating the modern importance of smartphone-friendly interfaces. Indeed, although our interface does work on some mobile browsers, it does not on many older browsers due to a lack of WebGL support. We expect this limitation to recede in the future as newer devices generally do support WebGL. Two users commented that they would have liked to change the colour scheme: one user remarked that it was too dark, while another user wanted a "night mode" to make it even darker. + +Search Performance: Our project primarily focused on book-browsing rather than search, and consequently our search feature is much simpler than e.g. Google or Amazon's search functionality. Four users noted that search could be improved: providing better search capabilities (such as advanced search by author/title/subject fields), improving the presentation of results (e.g. moving favorite or recommended books to the top), improving the discoverability of category or tag search, and ordering works by popularity or relevance. Recommendations, in particular, are interesting as they must function differently in a public library setting (with limited or no access to prior user data) compared with companies like Amazon which have vast access to prior user preferences via tracking and search history. + +Browsing Capabilities: Users generally praised the browsing and search features of the Library. One user noted that they were able to find "5 books on music theory in 5 minutes", while another noted that "It has everything I am looking for".
Users also had suggestions for improving the experience further: one commented that they would have liked even more physicality in the form of different rooms/areas to browse around in, while another suggested a downloadable version for organizing their own books. + +§ 6 CONCLUSION + +We have presented the design of the Library of Apollo, a virtual-reality library implemented in a web-browser application and designed to support library-like browsing and discovery. Our system was deployed publicly and attracted over two hundred visitors and 44 survey responses; our survey results suggested high user affinity for the book-browsing experience and library capabilities. Through the combination of features that we designed, we have built a virtual library which lends itself to serendipitous browsing and discoveries across a large, connected book dataset. + +§ ACKNOWLEDGMENTS + +Anonymous for review. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/FX0nrz8XD3I/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/FX0nrz8XD3I/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..77f61f6119bc609f392ac30d5f6cc26af0efe3d8 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/FX0nrz8XD3I/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,369 @@ +# Exploring Smartphone-enabled Text Selection in AR-HMD + +Category: Research + +![01963e95-86e3-758c-be7b-9022a1f78f82_0_225_347_1346_404_0.jpg](images/01963e95-86e3-758c-be7b-9022a1f78f82_0_225_347_1346_404_0.jpg) + +Figure 1: (a) The overall experimental setup consisted of an HoloLens, a smartphone, and an optitrack system. 
(b) In the HoloLens view, a user sees two text windows. The right one is the 'instruction panel', where the subject sees the text to select. The left one is the 'action panel', where the subject performs the actual selection. The cursor is shown inside a green dotted box (for illustration purposes only) on the action panel. For each text selection task, the cursor always starts from the center of the window. + +## Abstract + +Text editing is important and at the core of most complex tasks, like writing an email or browsing the web. Efficient and sophisticated techniques exist on desktops and touch devices, but they are still under-explored for Augmented Reality Head-Mounted Displays (AR-HMD). Performing text selection, a necessary step before text editing, in an AR display commonly relies on techniques such as hand-tracking, voice commands, and eye/head-gaze, which are cumbersome and lack precision. In this paper, we explore the use of a smartphone as an input device to support text selection in AR-HMD because of its availability, familiarity, and social acceptability. We propose four eyes-free text selection techniques, all using a smartphone - continuous touch, discrete touch, spatial movement, and raycasting. We compare them in a user study where users have to select text at various granularity levels. Our results suggest that continuous touch, in which we used the smartphone as a trackpad, outperforms the other three techniques in terms of task completion time, accuracy, and user preference. + +Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods + +## 1 INTRODUCTION + +Text input and text editing represent a significant portion of our everyday digital tasks. We need them when we browse the web, write emails, or just when we type a password.
Because of this ubiquity, they have been the focus of research on most of the platforms we use daily, like desktops, tablets, and mobile phones. The recent focus of the industry on Augmented Reality Head-Mounted Displays (AR-HMD), with the development of devices like the Microsoft HoloLens ${}^{1}$ and Magic Leap ${}^{2}$ , has made them more and more accessible, and their usage is envisioned in our future everyday life. The lack of a physical keyboard and mouse (i.e., the absence of interactive surfaces) with such devices makes text input difficult and an important challenge in AR research. While text input for AR-HMD has already been well studied [17, 37, 45, 56], very little research has focused on editing text that has already been typed by a user. Text editing is a complex task whose first step is selecting the text to edit; this paper focuses only on this text selection part. Such tasks have already been studied on the desktop [10] with various modalities (like gaze+gesture [14], or gaze with keyboard [50]) as well as on touch interfaces [21]. On the other hand, no formal experiments have been conducted in AR-HMD contexts. + +In general, text selection in AR-HMD can be performed using various input modalities, including notably hand-tracking, eye/head-gaze, voice commands [20], and handheld controllers [33]. However, these techniques have their limitations. For instance, hand-tracking struggles to achieve character-level precision [39], lacks haptic feedback [13], and provokes arm fatigue [30] during prolonged interaction. Eye-gaze and head-gaze suffer from the 'Midas Touch' problem, which causes unintended activation of commands in the absence of a proper selection mechanism [28, 31, 54, 58]. Moreover, frequent head movements in head-gaze interaction increase motion sickness [57].
Voice interaction might not be socially acceptable in public places [25] and may disturb the communication flow when several users are collaborating. In the case of a dedicated handheld controller, users need to always carry extra specific hardware with them. + +Recently, researchers have been exploring the use of a smartphone as an input device for AR-HMD because of its availability (it can even be the processing unit of the HMD [44]), familiarity, social acceptability, and tangibility [8, 22, 60]. Undoubtedly, there is huge potential for designing novel cross-device applications with a combination of an AR display and a smartphone. In the past, smartphones have been used for interacting with different applications running on AR-HMDs, such as manipulating 3D objects [40], managing windows [46], and selecting graphical menus [35]. However, we are unaware of any research that has investigated text selection in an AR display using a commercially available smartphone. In this work, we explored different approaches to selecting text when using a smartphone as an input controller. We proposed four eyes-free text selection techniques for AR display. These techniques, described in Section 3.1, differ with regard to the mapping of smartphone-based inputs - touch or spatial. We then conducted a user study to compare these four techniques in terms of text selection task performance. + +The main contributions of this paper are: (1) the design and development of a set of smartphone-enabled text selection techniques for AR-HMD; (2) insights from a 20-person comparative study of these techniques in text selection tasks. + +--- + +${}^{1}$ https://www.microsoft.com/en-us/hololens + +${}^{2}$ https://www.magicleap.com/en-us + +--- + +## 2 RELATED WORK + +In this section, we review previous work on text selection and editing in AR and on smartphones. We also review research that combines handheld devices with HMDs and large wall displays.
+ +### 2.1 Text Selection and Editing in AR + +Little research has focused on text editing in AR. Ghosh et al. presented EYEditor to facilitate on-the-go text editing on smart glasses with a combination of voice and a handheld controller [20]. They used voice to modify the text content, while manual input is used for text navigation and selection. The use of a handheld device is inspiring for our work; however, voice interaction might not be suitable in public places. Lee et al. [36] proposed two force-assisted text acquisition techniques where the user exerts a force on a thumb-sized circular button located on an iPhone 7 and selects text shown on a laptop emulating the Microsoft HoloLens display. They envision that this miniature force-sensitive area (12 mm × 13 mm) could be fitted into a smart ring. Although their results are promising, a specific force-sensitive device is required. + +In this paper, we follow the direction of the two papers previously presented and continue to explore the use of a smartphone in combination with an AR-HMD. While its use for text selection is still rare, it has been investigated more broadly for other tasks. + +### 2.2 Combining Handheld Devices and HMDs/Large Wall Displays + +By combining handheld devices and HMDs, researchers try to make the most of the benefits of both [60]. On one hand, the handheld device brings a 2D high-resolution display that provides a multi-touch, tangible, and familiar interactive surface. On the other hand, HMDs provide a spatialized, 3D, and almost infinite workspace. With MultiFi [22], Grubert et al. showed that such a combination is more efficient than a single device for pointing and searching tasks. For a similar setup, Zhu and Grossman proposed a set of techniques and demonstrated how it can be used to manipulate 3D objects [60]. Similarly, Ren et al. [46] demonstrated how it can be used to perform windows management.
Finally, in VESAD [43], Normand et al. used AR to directly extend the smartphone display. + +Regarding the type of input provided by the handheld device, it is possible to focus only on touch interactions, as proposed in Input Forager [1] and Dual-MR [34]. Waldow et al. compared touch to gaze and mid-air gestures for 3D object manipulation and showed that touch was more efficient [53]. It is also possible to track the handheld device in space and allow for 3D spatial interactions. This was done in DualCAD, in which Millette and McGuffin used a smartphone tracked in space to create and manipulate shapes using both spatial interactions and touch gestures [40]. With ARPointer [48], Ro et al. proposed a similar system and showed it led to better performance for object manipulation than a mouse and keyboard or a combination of gaze and mid-air gestures. When comparing the use of touch and spatial interaction with a smartphone, Budhiraja et al. showed that touch was preferred by participants for a pointing task [7], but Büschel et al. showed that spatial interaction was more efficient and preferred for a 3D navigation task [8]. In both cases, Chen et al. showed that the interaction should be viewport-based and not world-based [16]. + +Overall, previous research showed that a handheld device provides a good alternative input for an augmented reality display in various tasks. In this paper, we focus on a text selection task, which has not been studied yet. It is not clear whether only tactile interactions should be used on the handheld device or whether it should also be tracked to provide spatial interactions. Thus, we propose the two alternatives in our techniques and compare them. + +The use of handheld devices as input was also investigated in combination with large wall displays. It is a use case close to the one presented in this paper, as text is displayed inside a 2D virtual window. Campbell et al.
studied the use of a Wiimote as a distant pointing device [9]. With a pointing task, the authors compared its use with an absolute mapping (i.e., raycasting) to a relative mapping, and showed that participants were faster with the absolute mapping. Vogel and Balakrishnan found similar results between the two mappings (with the difference that they directly tracked the hand), but only with large targets and when clutching was necessary [52]. They also found that participants had a lower accuracy with an absolute mapping. This lower accuracy for an absolute mapping with spatial interaction was also shown in a comparison with distant touch interaction using the handheld device as a trackpad, on the same task [4]. Jain et al. also compared touch interaction with spatial interaction, but with a relative mapping, and found that the spatial interaction was faster but less accurate [29]. The accuracy result is confirmed by a recent study from Siddhpuria et al., in which the authors also compared the use of absolute and relative mappings with touch interaction, and found that the relative mapping is faster [49]. These studies were all done for a pointing task, and overall showed that using the handheld device as a trackpad (i.e., with a relative mapping) is more efficient (to avoid clutching, one can change the transfer function [42]). In their paper, Siddhpuria et al. highlighted that more studies were needed with a more complex task to validate their results. To our knowledge, this has been done only by Baldauf et al. with a drawing task, and they showed that spatial interaction with an absolute mapping was faster than using the handheld device as a trackpad, without any impact on accuracy [4]. In this paper, we take a step in this direction and use a text selection task. Considering the result from Baldauf et al., we cannot assume that touch interaction will perform better.
+ +### 2.3 Text Selection on Handheld Devices + +Text selection has not yet been investigated with the combination of a handheld device and an AR-HMD, but it has been studied on handheld devices independently. On a touchscreen, adjustment handles are the primary form of text selection technique. However, due to the fat-finger problem [5], it can be difficult to modify the selection by one character. A first solution is to allow users to select only the start and the end of the selection, as done in TextPin, which was shown to be more efficient than the default technique [26]. Fuccella et al. [19] and Zhang et al. [59] proposed to use the keyboard area to let the user control the selection using gestures, and showed this was also more efficient than the default technique. Ando et al. adapted the principle of keyboard shortcuts by associating different actions with the keys of the virtual keyboard, activated with a modifier action. In their first paper, the modifier was tilting the device [2], and in a second one, it was a sliding gesture starting on the key [3]. The latter was more efficient than both the former and the default technique. With BezelCopy [15], a gesture on the bezel of the phone allows for a first rough selection that can be refined afterwards. Finally, other solutions used non-traditional smartphones. Le et al. used a fully touch-sensitive device to allow users to perform gestures on the back of the device [32]. Gaze N'Touch [47] used gaze to define the start and end of the selection. Goguey et al. explored the use of a force-sensitive screen to control the selection [21], and Eady and Girouard used a deformable screen to explore the use of screen bending [18]. + +In this work, we choose to focus on commercially available smartphones, and we will not explore the use of deformable or fully touch-sensitive ones.
Compared to the use of shortcuts, the use of gestures seems to lead to good performance and can be performed without looking at the screen (i.e., eyes-free), which avoids transitions between the AR virtual display and the handheld device. + +![01963e95-86e3-758c-be7b-9022a1f78f82_2_304_145_1187_422_0.jpg](images/01963e95-86e3-758c-be7b-9022a1f78f82_2_304_145_1187_422_0.jpg) + +Figure 2: Illustrations of our proposed interaction techniques: (a) continuous touch; (b) discrete touch; (c) spatial movement; (d) raycasting. + +## 3 DESIGNING SMARTPHONE-BASED TEXT SELECTION IN AR-HMD + +### 3.1 Proposed Techniques + +Previous work used a smartphone as an input device to interact with virtual content in AR-HMD mainly in two ways - taking touch input from the smartphone, or tracking the smartphone spatially like an AR/VR controller. Similar work on wall displays suggested that using the smartphone as a trackpad would be the most efficient technique, but this was tested with a pointing task (see Section 2). With a drawing task (which could be closer to a text selection task than a pointing task), spatial interaction was actually better [4]. + +Inspired by this, we propose four eyes-free text selection techniques for AR-HMD - two are completely based on mobile touchscreen interaction, whereas the smartphone needs to be tracked in mid-air for the latter two approaches, which use spatial interactions. For spatial interaction, we chose a technique with an absolute mapping (Raycasting) and one with a relative mapping (Spatial Movement). The comparison between the two in our case is not straightforward: previous results suggest that a relative mapping would have better accuracy, but an absolute one would be faster. For touch interaction, we chose not to use an absolute mapping, as its use with a large virtual window could lead to poor accuracy [42], and we only consider techniques that use a relative mapping.
In addition to the traditional use of the smartphone as a trackpad (Continuous Touch), we propose a technique that allows for a discrete selection of text (Discrete Touch). Such a discrete selection mechanism has shown good results in a similar context for shape selection [29]. Overall, while we took inspiration from previous work for these techniques, they have never been assessed together for a text selection task. + +To select text successfully using any of our proposed techniques, a user needs to follow the same sequence of steps each time. First, she moves the cursor, located on the text window in the AR display, to the beginning of the text to be selected (i.e., the first character). Then, she performs a double tap on the phone to confirm the selection of that first character. She can see on the headset screen that the first character is highlighted in yellow. At the same time, she enters the text selection mode. Next, she continues moving the cursor to the end position of the text using one of the techniques presented below. While the cursor is moving, the text is simultaneously highlighted up to the current position of the cursor. Finally, she ends the text selection with a second double tap. + +#### 3.1.1 Continuous Touch + +In continuous touch, the smartphone touchscreen acts as a trackpad (see Figure 2(a)). It is an indirect pointing technique where the user moves her thumb on the touchscreen to change the cursor position on the AR display. For the mapping between display and touchscreen, we used a relative mode with clutching. As clutching may degrade performance [12], a control-display (CD) gain was applied to minimize it (see Section 3.2). + +#### 3.1.2 Discrete Touch + +This technique is inspired by the text selection keyboard shortcuts available in both macOS [27] and Windows [24]. In this work, we tried to emulate a few of these keyboard shortcuts.
We particularly considered imitating keyboard shortcuts related to character-, word-, and line-level text selection. For example, in macOS, holding down Shift and pressing $\rightarrow$ or $\leftarrow$ extends the text selection one character to the right or left, whereas holding down Shift+Option and pressing $\rightarrow$ or $\leftarrow$ allows users to select text one word to the right or left. To extend the selection to the nearest character at the same horizontal location on the line above or below, a user holds down Shift and presses $\uparrow$ or $\downarrow$ , respectively. In discrete touch interaction, we replicated all these shortcuts using directional swipe gestures (see Figure 2(b)). A left or right swipe can select text at both levels - word as well as character. By default, it works at the word level. The user performs a long-tap, which acts as a toggle to switch between word- and character-level selection. On the other hand, an up or down swipe selects text one line above or below the current position. The user can only select one character/word/line at a time with its respective swipe gesture. + +Note that, to select text using discrete touch, a user first positions the cursor on top of the starting word (not the starting character) of the text to be selected by touch-dragging on the smartphone, as described in the continuous touch technique. From a pilot study, we observed that moving the cursor to the starting word using discrete touch every time makes the overall interaction slow. Then, she selects that first word with the double tap and uses discrete touch to select text up to the end position, as described before. + +#### 3.1.3 Spatial Movement + +This technique emulates the smartphone as an air-mouse [38, 51] for AR-HMD.
To control the cursor position on the headset screen, the user holds the phone in front of her torso, places her thumb on the touchscreen, and then moves the phone in the air with small forearm motions in a plane perpendicular to the gaze direction (see Figure 2(c)). While the phone is moving, its tracked positional data in XY-coordinates is translated into cursor movement in XY-coordinates inside the 2D window. When the user wants to stop the cursor movement, she simply lifts her thumb from the touchscreen. Thumb touch-down and touch-release events define the start and stop of the cursor movement on the AR display. The user determines the speed of the cursor by simply moving the phone faster or slower. We applied a CD gain between the phone movement and the cursor displacement on the text window (see Section 3.2).
| Techniques | $CD_{Max}$ | $CD_{Min}$ | $\lambda$ | $V_{inf}$ |
| --- | --- | --- | --- | --- |
| Continuous Touch | 28.34 | 0.0143 | 36.71 | 0.039 |
| Spatial Movement | 23.71 | 0.0221 | 32.83 | 0.051 |
+ +Table 1: Logistic function parameter values for the continuous touch and spatial movement interactions + +#### 3.1.4 Raycasting + +Raycasting is a popular interaction technique in AR/VR environments for selecting 3D virtual objects [6, 41]. In this work, we developed a smartphone-based raycasting technique for selecting text displayed on a 2D window in AR-HMD (see Figure 2(d)). A 6-DoF tracked smartphone was used to define the origin and orientation of the ray. In the headset display, the user sees the ray as a straight line emerging from the top of the phone. By default, the ray is always visible to users in AR-HMD as long as the phone is being tracked properly. In raycasting, the user makes small angular wrist movements to point at the text content using the ray. The cursor appears where the ray hits the text window. Compared to the other proposed methods, raycasting does not require clutching, as it allows direct pointing at the target. The user confirms the target selection on the AR display by providing a touch input (i.e., a double tap) from the phone. + +### 3.2 Implementation + +To prototype our proposed interaction techniques, we used a Microsoft HoloLens 2 (42° × 29° screen) as the AR-HMD device and a OnePlus 5 as the smartphone. For the spatial movement and raycasting interactions, real-time pose information of the smartphone is needed. An OptiTrack ${}^{3}$ system with three Flex 13 cameras was used for accurate tracking with low latency. To bring the HoloLens and the smartphone into a common coordinate system, we attached passive reflective markers to both and performed a calibration between HoloLens space and OptiTrack space. + +In our software framework, the AR application running on the HoloLens was implemented using Unity3D (2018.4) and the Mixed Reality Toolkit ${}^{4}$ . To render text in the HoloLens, we used TextMesh Pro.
A Windows 10 workstation was used to stream tracking data to the HoloLens. All pointing techniques on the phone were also developed using Unity3D. We used the UNet ${}^{5}$ library for client-server communication between devices over the WiFi network. + +For the continuous touch and spatial movement interactions, we used a generalized logistic function [42] to define the control-display (CD) gain between the move events, either on the touchscreen or in the air, and the cursor displacement in the AR display: + +$$
CD\left( v\right) = \frac{CD_{Max} - CD_{Min}}{1 + {e}^{-\lambda \times \left( v - V_{inf}\right) }} + CD_{Min} \tag{1}
$$ + +$CD_{Max}$ and $CD_{Min}$ are the asymptotic maximum and minimum amplitudes of the CD gain, and $\lambda$ is a parameter proportional to the slope of the function at $v = V_{inf}$ , with $V_{inf}$ the inflection value of the function. We derived initial values from the parameter definitions of Nancel et al. [42], and then empirically optimized them for each technique. The parameters were not changed during the study for individual participants. The values are summarized in Table 1. + +In discrete touch interaction, we implemented up, down, left, and right swipes by obtaining touch position data from the phone. We considered a 700 msec time-window (empirically found) for detecting a long-tap event. Users get vibration feedback from the phone when they perform a long-tap successfully. They also receive vibration haptics while double-tapping to start and end the text selection in all interaction techniques. Note that we did not provide haptic feedback for swipe gestures. With each swipe movement, users can see the text being highlighted in yellow, which acts as the default visual feedback for touch swipes.
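To make Equation 1 concrete, the following minimal Python sketch evaluates the gain with the continuous touch parameters from Table 1. This is our own illustration; the units of the input speed v and how the gain is applied per move event are assumptions, not stated here:

```python
import math

# Table 1 parameters for continuous touch.
CD_MAX, CD_MIN = 28.34, 0.0143
LAMBDA, V_INF = 36.71, 0.039

def cd_gain(v):
    """Generalized logistic CD gain of Equation 1: maps an input speed v
    to a control-display gain between CD_MIN and CD_MAX, with the
    steepest change around the inflection speed V_INF."""
    return (CD_MAX - CD_MIN) / (1.0 + math.exp(-LAMBDA * (v - V_INF))) + CD_MIN
```

At v = V_INF the gain sits exactly halfway between the two asymptotes, so slow movements keep a low gain for precision while fast movements approach CD_MAX, limiting the need for clutching.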
+ +![01963e95-86e3-758c-be7b-9022a1f78f82_3_928_155_707_383_0.jpg](images/01963e95-86e3-758c-be7b-9022a1f78f82_3_928_155_707_383_0.jpg) + +Figure 3: Text selection tasks used in the experiment: (1) word (2) sub-word (3) word to a character (4) four words (5) one sentence (6) paragraph to three sentences (7) one paragraph (8) two paragraphs (9) three paragraphs (10) whole text. + +In the spatial movement technique, we noticed that the phone moves slightly during the double tap event. This results in a slight unintentional cursor movement. To reduce it, we suspended cursor movement for 300 msec (empirically found) whenever there is a touch event on the phone screen. + +In raycasting, we applied the 1€ Filter [11] with $\beta = 80$ and mincutoff $= 0.6$ (empirically tested) at the ray source to minimize the jitter and latency that usually occur due to both hand tremor and double tapping [55]. We set the ray length to 8 meters by default. The user sees the full length of the ray when it is not hitting the text panel. + +## 4 EXPERIMENT + +To assess the impact of the different characteristics of these four interaction techniques, we performed a comparative study with a text selection task while users were standing up. In particular, we are interested in evaluating the performance of these techniques in terms of task completion time, accuracy, and perceived workload. + +### 4.1 Participants and Apparatus + +In our experiment, we recruited 20 unpaid participants (P1-P20) (13 males, 7 females) from a local university campus. Their ages ranged from 23 to 46 years (mean = 27.84, SD = 6.16). Four were left-handed. All were daily users of smartphones and desktops. With respect to their experience with AR/VR technology, 7 participants ranked themselves as experts because they study and work in the field, 4 participants were beginners who had played some games in VR, while the others had no prior experience.
They all had either normal or corrected-to-normal vision. We used the apparatus and prototype described in Subsection 3.2.

### 4.2 Task

In this study, we asked participants to perform a series of text selections using our proposed techniques. Participants were standing up for the entire duration of the experiment. We reproduced different realistic usages by varying the type of text selection, such as the selection of a word, a sentence, or a paragraph. Figure 3 shows all the types of text selection that participants were asked to perform. Concretely, the experiment scene in the HoloLens consisted of two vertical windows of ${102.4}\mathrm{\;{cm}} \times {57.6}\mathrm{\;{cm}}$ positioned at a distance of ${180}\mathrm{\;{cm}}$ from the headset at the start of the application (i.e., a visual size of ${31.75}^{ \circ } \times {18.1806}^{ \circ }$). The windows were anchored in the world coordinate system. These two panels contained the same text. Participants were asked to select the text in the action panel (left panel in Figure 1(b)) that was highlighted in the instruction panel (right panel in Figure 1(b)). The user controlled a cursor (i.e., a small red circular dot, as shown in Figure 1(b)) using one of the techniques on the smartphone. Its position was always bounded by the window size. The text content was generated by Random Text Generator${}^{6}$ and was displayed using the Liberation Sans font with a font size of 25 pt (to allow comfortable viewing from a few meters).

---

${}^{3}$ https://optitrack.com/

${}^{4}$ https://github.com/microsoft/MixedRealityToolkit-Unity

${}^{5}$ https://docs.unity3d.com/Manual/UNet.html

---

![01963e95-86e3-758c-be7b-9022a1f78f82_4_190_187_1413_508_0.jpg](images/01963e95-86e3-758c-be7b-9022a1f78f82_4_190_187_1413_508_0.jpg)

Figure 4: (a) Mean task completion time for our proposed four interaction techniques. Lower scores are better. (b) Mean error rate of interaction techniques.
Lower scores are better. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: $p < {.01}$ and *: $p < {.05}$).

### 4.3 Study Design

We used a within-subject design with two factors: 4 INTERACTION TECHNIQUES (Continuous touch, Discrete touch, Spatial movement, and Raycasting) × 10 TEXT SELECTION TYPES (shown in Figure 3) × 20 participants = 800 trials. The order of INTERACTION TECHNIQUE was counterbalanced across participants using a Latin square. The order of TEXT SELECTION TYPE was randomized in each block for each INTERACTION TECHNIQUE (but was the same for each participant).

### 4.4 Procedure

We welcomed participants upon arrival. They were asked to read and sign the consent form and fill out a pre-study questionnaire to collect demographic information and prior AR/VR experience. Next, we gave them a brief introduction to the experiment background, the hardware, the four interaction techniques, and the task involved in the study. After that, we helped participants wear the HoloLens comfortably and complete the calibration process for their personal interpupillary distance (IPD). For each block of INTERACTION TECHNIQUE, participants completed a practice phase followed by a test session. During the practice, the experimenter explained how the current technique worked, and participants were encouraged to ask questions. Then, they had time to train themselves with the technique until they were fully satisfied, which took around 7 minutes on average. Once they felt confident with the technique, the experimenter launched the application for the test session. Participants were instructed to do the task as quickly and accurately as possible in the standing condition. To avoid noise due to participants using either one or two hands, we asked them to use only their dominant hand.

At the beginning of each trial in the test session, the text to select was highlighted in the instruction panel.
Once they were satisfied with their selection, participants had to press a dedicated button on the phone screen to move on to the next task. They were allowed to use their non-dominant hand only to press this button. At the end of each block of INTERACTION TECHNIQUE, they answered a NASA-TLX questionnaire [23] on an iPad and moved to the next condition.

At the end of the experiment, we gave participants a questionnaire in which they had to rank the techniques by speed, accuracy, and overall preference, and we conducted an informal post-test interview.

The entire experiment took approximately 80 minutes in total. Participants were allowed to take breaks between sessions, during which they could sit, and they were encouraged to comment at any time during the experiment. To respect the COVID-19 safety protocol, participants wore FFP2 masks and maintained a 1-meter distance from the experimenter at all times.

### 4.5 Measures

We recorded completion time as the time taken to select the text from its first character to its last character, which is the time difference between the first and second double taps. If participants selected more or fewer characters than expected, the trial was considered wrong. We then calculated the error rate as the percentage of wrong trials for each condition. Finally, as stated above, participants filled out a NASA-TLX questionnaire to measure the subjective workload of each INTERACTION TECHNIQUE, and their preference was measured using a ranking questionnaire at the end of the experiment.

### 4.6 Hypotheses

In our experiment, we hypothesized that:

H1. Continuous touch, Spatial movement, and Raycasting will be faster than Discrete touch because a user needs to spend more time for multiple swipes and do frequent mode switching to select text at the character/word/sentence level.

H2.
Discrete touch will be more mentally demanding compared to all other techniques because the user needs to remember the mapping between swipe gestures and text granularity, as well as the long-tap for mode switching.

H3. The user will perceive Spatial movement to be more physically demanding as it involves more forearm movements.

H4. The user will make more errors in Raycasting, and it will be more frustrating, because double tapping for target confirmation while holding the phone in one hand will introduce more jitter [55].

---

${}^{6}$ http://randomtextgenerator.com/

---

![01963e95-86e3-758c-be7b-9022a1f78f82_5_165_165_1480_679_0.jpg](images/01963e95-86e3-758c-be7b-9022a1f78f82_5_165_165_1480_679_0.jpg)

Figure 5: Mean scores for the ranking questionnaire, which uses a 3-point Likert scale. Higher marks are better. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: $p < {.01}$ and *: $p < {.05}$).

H5. Overall, Continuous touch will be the most preferred text selection technique as it works similarly to the trackpad, which is already familiar to users.

## 5 RESULTS

To test our hypotheses, we conducted a series of analyses using the IBM SPSS software. Shapiro-Wilk tests showed that the task completion time, total error, and questionnaire data were not normally distributed. Therefore, we used the Friedman test with the interaction technique as the independent variable to analyze our experimental data. When significant effects were found, we reported post hoc tests using the Wilcoxon signed-rank test and applied Bonferroni corrections for all pairwise comparisons. We set $\alpha = {0.05}$ in all significance tests. Due to a logging issue, we had to discard one participant and conducted the analysis with 19 instead of 20 participants.
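The analysis pipeline above (Friedman omnibus test, then pairwise Wilcoxon signed-rank post hoc tests with a Bonferroni correction) can be sketched with SciPy; the completion times below are made-up illustrative values, not the study data:

```python
from itertools import combinations
from scipy import stats

# Hypothetical completion times (seconds): one list per technique,
# aligned by participant (5 participants for illustration).
times = {
    "Continuous": [5.1, 4.8, 5.6, 5.0, 5.3],
    "Discrete":   [8.9, 8.2, 9.4, 8.5, 8.8],
    "Spatial":    [5.9, 5.4, 6.1, 5.7, 5.5],
    "Raycasting": [5.6, 5.2, 6.0, 5.1, 5.4],
}
techniques = list(times)

# Omnibus Friedman test across the four related samples.
chi2, p = stats.friedmanchisquare(*(times[t] for t in techniques))

if p < 0.05:
    # Post hoc: pairwise Wilcoxon signed-rank tests, with a
    # Bonferroni correction over the 6 pairwise comparisons.
    pairs = list(combinations(techniques, 2))
    for a, b in pairs:
        w, p_pair = stats.wilcoxon(times[a], times[b])
        p_corrected = min(1.0, p_pair * len(pairs))
        print(f"{a} vs {b}: corrected p = {p_corrected:.3f}")
```

With only five illustrative participants the corrected post hoc p-values stay above .05, but the structure mirrors the reported analysis: report the $\chi^2$ and p of the omnibus test, then the Bonferroni-corrected pairwise comparisons.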
### 5.1 Task Completion Time

There was a statistically significant difference in task completion time depending on which interaction technique was used for text selection $\left\lbrack {\chi }^{2}\left( 3\right) = {33.37}, p < {.001}\right\rbrack$ (see Figure 4(a)). Post hoc tests showed that Continuous touch $\left\lbrack \mathrm{M} = {5.16},\mathrm{{SD}} = {0.84}\right\rbrack$, Spatial movement $\left\lbrack \mathrm{M} = {5.73},\mathrm{{SD}} = {1.38}\right\rbrack$, and Raycasting $\left\lbrack \mathrm{M} = {5.43},\mathrm{{SD}} = {1.66}\right\rbrack$ were faster than Discrete touch $\left\lbrack \mathrm{M} = {8.78},\mathrm{{SD}} = {2.09}\right\rbrack$.

### 5.2 Error Rate

We found significant effects of the interaction technique on error rate $\left\lbrack {\chi }^{2}\left( 3\right) = {39.45}, p < {.001}\right\rbrack$ (see Figure 4(b)). Post hoc tests showed that Raycasting $\left\lbrack \mathrm{M} = {24.21},\mathrm{{SD}} = {13.46}\right\rbrack$ was more error-prone than Continuous touch $\left\lbrack \mathrm{M} = {1.05},\mathrm{{SD}} = {3.15}\right\rbrack$, Discrete touch $\left\lbrack \mathrm{M} = {4.73},\mathrm{{SD}} = {9.05}\right\rbrack$, and Spatial movement $\left\lbrack \mathrm{M} = {8.42},\mathrm{{SD}} = {12.58}\right\rbrack$.

### 5.3 Questionnaires

For the NASA-TLX, we found significant differences for mental demand $\left\lbrack {\chi }^{2}\left( 3\right) = {9.65}, p = {.022}\right\rbrack$, physical demand $\left\lbrack {\chi }^{2}\left( 3\right) = {29.75}, p < {.001}\right\rbrack$, performance $\left\lbrack {\chi }^{2}\left( 3\right) = {40.14}, p < {.001}\right\rbrack$, frustration $\left\lbrack {\chi }^{2}\left( 3\right) = {39.53}, p < {.001}\right\rbrack$, and effort $\left\lbrack {\chi }^{2}\left( 3\right) = {32.69}, p < {.001}\right\rbrack$. Post hoc tests showed that Raycasting and Discrete touch had significantly higher mental demand compared to Continuous touch and Spatial movement.
On the other hand, physical demand was lowest for Continuous touch, whereas users rated physical demand significantly higher for Raycasting and Spatial movement. In terms of performance, Raycasting was rated significantly lower than the other techniques. Raycasting was also rated significantly more frustrating. Moreover, Continuous touch was the least frustrating and was rated better in performance than Spatial movement. Figure 6 shows a bar chart of the NASA-TLX workload sub-scales for our experiment.

For the ranking questionnaire, there were significant differences for speed $\left\lbrack {\chi }^{2}\left( 3\right) = {26.40}, p < {.001}\right\rbrack$, accuracy $\left\lbrack {\chi }^{2}\left( 3\right) = {45.5}, p < {.001}\right\rbrack$, and preference $\left\lbrack {\chi }^{2}\left( 3\right) = {38.56}, p < {.001}\right\rbrack$. Post hoc tests showed that users ranked Discrete touch as the slowest and Raycasting as the least accurate technique. The most preferred technique was Continuous touch, whereas Raycasting was the least preferred. Users also favored Discrete touch as well as the Spatial movement-based text selection approach. Figure 5 summarizes participants' responses to the ranking questionnaire.

## 6 DISCUSSION & DESIGN IMPLICATIONS

Our results suggest that Continuous Touch was the technique preferred by participants (confirming $\mathbf{{H5}}$). It was the least physically demanding technique and the least frustrating one. It was also more satisfying regarding performance than the two spatial ones (Raycasting and Spatial Movement). Finally, it was less mentally demanding than Discrete Touch and Raycasting. Participants pointed out that this technique was simple, intuitive, and familiar to them, as they use trackpads and touchscreens every day. During the training session, we noticed that they took the least time to understand its working principle. In the interview, P8 commented, "I can select text fast and accurately.
Although I noticed a bit of overshooting in the cursor positioning, it can be adjusted by tuning CD gain". P17 said, "I can keep my hands down while selecting text. This gives me more comfort".

![01963e95-86e3-758c-be7b-9022a1f78f82_6_148_152_1502_603_0.jpg](images/01963e95-86e3-758c-be7b-9022a1f78f82_6_148_152_1502_603_0.jpg)

Figure 6: Mean scores for the NASA-TLX task load questionnaire, which are in the range of 1 to 10. Lower marks are better, except for performance. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: $p < {.01}$ and *: $p < {.05}$).

On the other hand, Raycasting was the least preferred technique and led to the lowest task accuracy (participants were also the least satisfied with their performance using this technique). This can be explained by the fact that it was the most physically demanding and the most frustrating (confirming $\mathbf{{H4}}$). Finally, it was more mentally demanding than Continuous Touch and Spatial Movement. In their comments, participants reported a lack of stability due to the one-handed phone-holding posture. Some participants complained that they felt uncomfortable holding the OnePlus 5 phone in one hand as it was a bit large compared to their hand size. This introduced even more jitter for them in Raycasting while double tapping for target confirmation. P10 commented, "I am sure I will perform Raycasting with fewer errors if I can use my both hands to hold the phone". Moreover, from the logged data, we noticed that they made more mistakes when the target character was positioned inside a word rather than at the beginning or end, which was confirmed in the discussion with participants.

As we expected, Discrete Touch was the slowest technique (confirming $\mathbf{{H1}}$), but it was not the most mentally demanding, as it was only more demanding than Continuous Touch (rejecting $\mathbf{{H2}}$).
It was also more physically demanding than Continuous Touch, but less than Spatial Movement and Raycasting. Several participants mentioned that it is excellent for short word-to-word or sentence-to-sentence selections, but not for long text, as multiple swipes are required. They also pointed out that performing mode switching with a 700 msec long-tap was a bit tricky and that they lost some time there during text selection. Although they got better with it over time, they remained uncertain about performing it successfully in one attempt.

Finally, contrary to our expectation, Spatial Movement was not the most physically demanding technique, as it was less demanding than Raycasting (but more than Continuous Touch and Discrete Touch). It was also less mentally demanding than Raycasting and led to less frustration. However, it led to more frustration than Continuous Touch, and participants were less satisfied with their performance with this technique than with Continuous Touch. According to participants, moving the forearm with this technique undoubtedly requires physical effort, but they only needed to move it over a very short distance, which was fine for them. From the user interviews, we learned that they did not use much clutching (less than with Continuous Touch). P13 mentioned, "In Spatial Movement, I completed most of the tasks without using clutching at all".

Overall, our results suggest that between touch and spatial interactions, it would be better to use touch for text selection, which confirms findings from Siddhpuria et al. for pointing tasks [49]. Continuous Touch was overall preferred, faster, and less demanding than Discrete Touch, which goes against results from the work by Jain et al. for shape selection [29]. This difference can be explained by the fact that text selection involves a minimum of two levels of discretization (characters and words), which makes it mentally demanding.
It can also be explained by the high number of words (and even more characters) in a text, contrary to the number of shapes in Jain et al.'s experiment. This led to a high number of discrete actions for the selection and, thus, a higher physical demand. However, surprisingly, most of the participants appreciated the idea of Discrete Touch. If a tactile interface is not available on the handheld device, our results suggest using a spatial interaction technique with a relative mapping, as we did with Spatial Movement. We could not find any differences in time, contrary to the work by Campbell et al. [9], but it leads to fewer errors, which confirms what was found by Vogel and Balakrishnan [52]. It is also less physically and mentally demanding and leads to less frustration than an absolute mapping. On the technical side, a spatial interaction technique with a relative mapping can easily be achieved without an external sensor (as was done, for example, by Siddhpuria et al. [49]).

## 7 LIMITATIONS

There were two major limitations. First, we used an external tracking system, which limited us to a lab study. As a result, it is difficult to understand the social acceptability of each technique until we consider real-world on-the-go situations. However, technical progress in inside-out tracking${}^{7}$ means that it will soon be possible to have smartphones that can track themselves accurately in 3D space. Second, some of our participants had difficulties holding the phone in one hand because the phone was a bit large for their hands. They mentioned that although they tried to move their thumb faster in the continuous touch and discrete touch interactions, they were not able to do so comfortably for fear of dropping the phone. This larger phone size also influenced their raycasting performance, particularly when they needed to perform a double tap for target confirmation.
Hence, using one phone size for all participants was an important constraint in this experiment.

---

${}^{7}$ https://developers.google.com/ar

---

## 8 CONCLUSION AND FUTURE WORK

In this research, we investigated the use of a smartphone as an eyes-free interactive controller to select text in an augmented reality head-mounted display. We proposed four interaction techniques: two that use the tactile surface of the smartphone, Continuous Touch and Discrete Touch, and two that track the device in space, Spatial Movement and Raycasting. We evaluated these four techniques in a text selection task study. The results suggest that techniques using the tactile surface of the device are more suited for text selection than spatial ones, with Continuous Touch being the most efficient. If a tactile surface is not available, it would be better to use a spatial technique (i.e., with the device tracked in space) that uses a relative mapping between the user gesture and the virtual screen, rather than a classic Raycasting technique that uses an absolute mapping.

In this work, we focused on interaction techniques based on smartphone inputs. This allowed us to better understand which approach should be favored in that context. In the future, it would be interesting to explore a more global usage scenario, such as a text editing interface in an AR-HMD using smartphone-based input, where users need to perform other interaction tasks such as text input and command execution simultaneously. Another direction for future work is to compare phone-based techniques to other input techniques like hand tracking, head/eye gaze, and voice commands. Furthermore, we only considered the standing condition, but it would be interesting to study text selection performance while the user is walking.

## REFERENCES

[1] M. Al-Sada, F. Ishizawa, J. Tsurukawa, and T. Nakajima. Input forager: A user-driven interaction adaptation approach for head worn displays.
In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, MUM '16, p. 115-122. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/3012709. 3012719 + +[2] T. Ando, T. Isomoto, B. Shizuki, and S. Takahashi. Press & tilt: One-handed text selection and command execution on smartphone. In Proceedings of the 30th Australian Conference on Computer-Human Interaction, OzCHI '18, p. 401-405. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3292147.3292178 + +[3] T. Ando, T. Isomoto, B. Shizuki, and S. Takahashi. One-handed rapid text selection and command execution method for smartphones. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA '19, p. 1-6. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290607. 3312850 + +[4] M. Baldauf, P. Fröhlich, J. Buchta, and T. Stürmer. From touchpad to smart lens. International Journal of Mobile Human Computer Interaction, 5:1-20, 08 2015. doi: 10.4018/jmhci.2013040101 + +[5] S. Boring, D. Ledo, X. A. Chen, N. Marquardt, A. Tang, and S. Greenberg. The fat thumb: Using the thumb's contact size for single-handed mobile interaction. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '12, p. 39-48. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2371574.2371582 + +[6] D. A. Bowman, E. Kruijff, J. J. LaViola, and I. Poupyrev. 3D User Interfaces: Theory and Practice. Addison Wesley Longman Publishing Co., Inc., USA, 2004. + +[7] R. Budhiraja, G. A. Lee, and M. Billinghurst. Using a hhd with a hmd for mobile ar interaction. In 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1-6, 2013. doi: 10. 1109/ISMAR.2013.6671837 + +[8] W. Büschel, A. Mitschick, T. Meyer, and R. Dachselt. 
Investigating smartphone-based pan and zoom in 3d data spaces in augmented reality. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3338286.3340113

[9] B. Campbell, K. O'Brien, M. Byrne, and B. Bachman. Fitts' law predictions with an alternative pointing device (wiimote(r)). Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 52, 09 2008. doi: 10.1177/154193120805201904

[10] S. K. Card, W. K. English, and B. J. Burr. Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys for text selection on a crt. Ergonomics, 21(8):601-613, 1978. doi: 10.1080/00140137808931762

[11] G. Casiez, N. Roussel, and D. Vogel. 1€ Filter: A Simple Speed-based Low-pass Filter for Noisy Input in Interactive Systems. In CHI'12, the 30th Conference on Human Factors in Computing Systems, pp. 2527-2530. ACM, Austin, United States, May 2012. doi: 10.1145/2207676.2208639

[12] G. Casiez, D. Vogel, Q. Pan, and C. Chaillou. Rubberedge: Reducing clutching by combining position and rate control with elastic feedback. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, UIST '07, p. 129-138. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1294211.1294234

[13] L.-W. Chan, H.-S. Kao, M. Y. Chen, M.-S. Lee, J. Hsu, and Y.-P. Hung. Touching the Void: Direct-Touch Interaction for Intangible Displays, p. 2625-2634. Association for Computing Machinery, New York, NY, USA, 2010.

[14] I. Chatterjee, R. Xiao, and C. Harrison. Gaze+gesture: Expressive, precise and targeted free-space interactions. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ICMI '15, p. 131-138. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2818346.2820752

[15] C.
Chen, S. T. Perrault, S. Zhao, and W. T. Ooi. Bezelcopy: An efficient cross-application copy-paste technique for touchscreen smartphones. In Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces, AVI '14, p. 185-192. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2598153. 2598162 + +[16] Y. Chen, K. Katsuragawa, and E. Lank. Understanding viewport-and world-based pointing with everyday smart devices in immersive augmented reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/ 3313831.3376592 + +[17] J. J. Dudley, K. Vertanen, and P. O. Kristensson. Fast and precise touch-based text entry for head-mounted augmented reality with variable occlusion. ACM Trans. Comput.-Hum. Interact., 25(6), Dec. 2018. doi: 10.1145/3232163 + +[18] A. K. Eady and A. Girouard. Caret manipulation using deformable input in mobile devices. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '15, p. 587-591. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2677199.2687916 + +[19] V. Fuccella, P. Isokoski, and B. Martin. Gestures and widgets: Performance in text editing on multi-touch capable mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ' 13, p. 2785-2794. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2481385 + +[20] D. Ghosh, P. S. Foong, S. Zhao, C. Liu, N. Janaka, and V. Erusu. Eyeditor: Towards on-the-go heads-up text editing using voice and manual input. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/ 3313831.3376173 + +[21] A. Goguey, S. Malacria, and C. Gutwin. 
Improving discoverability and expert performance in force-sensitive text selection for touch devices with mode gauges. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, p. 1-12. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/ 3173574.3174051 + +[22] J. Grubert, M. Heinisch, A. Quigley, and D. Schmalstieg. Multifi: Multi fidelity interaction with displays on and around the body. In Proceedings of the 33rd Annual ACM Conference on Human Factors + +in Computing Systems, CHI '15, p. 3933-3942. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123. 2702331 + +[23] S. G. Hart. Nasa-task load index (nasa-tlx); 20 years later. In Proceedings of the human factors and ergonomics society annual meeting, vol. 50, pp. 904-908. Sage publications Sage CA: Los Angeles, CA, 2006. + +[24] C. Hoffman. 42+ text-editing keyboard shortcuts that work almost everywhere. https://www.howtogeek.com/115664/42-text-editing-keyboard-shortcuts-that-work-almost-everywhere/, 2020. Accessed: 2020-11-01. + +[25] Y.-T. Hsieh, A. Jylhä, V. Orso, L. Gamberini, and G. Jacucci. Designing a willing-to-use-in-public hand gestural interaction technique for smart glasses. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 4203-4215. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036. 2858436 + +[26] W. Huan, H. Tu, and Z. Li. Enabling finger pointing based text selection on touchscreen mobile devices. In Proceedings of the Seventh International Symposium of Chinese CHI, Chinese CHI '19, p. 93-96. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3332169.3332172 + +[27] A. Inc. Mac keyboard shortcuts. https://support.apple.com/ en-us/HT201236, 2020. Accessed: 2020-11-01. + +[28] R. J. K. Jacob. What you look at is what you get: Eye movement-based interaction techniques. 
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '90, p. 11-18. Association for Computing Machinery, New York, NY, USA, 1990. doi: 10.1145/97243.97246

[29] M. Jain, A. Cockburn, and S. Madhvanath. Comparison of Phone-Based Distal Pointing Techniques for Point-Select Tasks. In P. Kotzé, G. Marsden, G. Lindgaard, J. Wesson, and M. Winckler, eds., 14th International Conference on Human-Computer Interaction (INTERACT), vol. LNCS-8118 of Human-Computer Interaction - INTERACT 2013, pp. 714-721. Springer, Cape Town, South Africa, Sept. 2013. Part 15: Mobile Interaction Design. doi: 10.1007/978-3-642-40480-1_49

[30] S. Jang, W. Stuerzlinger, S. Ambike, and K. Ramani. Modeling cumulative arm fatigue in mid-air interaction based on perceived exertion and kinetics of arm motion. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 3328-3339. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025523

[31] M. Kytö, B. Ens, T. Piumsomboon, G. A. Lee, and M. Billinghurst. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2018.

[32] H. V. Le, S. Mayer, M. Weiß, J. Vogelsang, H. Weingärtner, and N. Henze. Shortcut gestures for mobile text editing on fully touch sensitive smartphones. ACM Trans. Comput.-Hum. Interact., 27(5), Aug. 2020. doi: 10.1145/3396233

[33] M. Leap. Magic leap handheld controller. https://developer.magicleap.com/en-us/learn/guides/design-magic-leap-one-control, 2020. Accessed: 2020-11-01.

[34] C.-J. Lee and H.-K. Chu. Dual-mr: Interaction with mixed reality using smartphones. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, VRST '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3281505.3281618

[35] H. Lee, D. Kim, and W. Woo.
Graphical menus using a mobile phone for wearable ar systems. In 2011 International Symposium on Ubiquitous Virtual Reality, pp. 55-58. IEEE, 2011.

[36] L. Lee, Y. Zhu, Y. Yau, T. Braud, X. Su, and P. Hui. One-thumb text acquisition on force-assisted miniature interfaces for mobile headsets. In 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 1-10, 2020. doi: 10.1109/PerCom45495.2020.9127378

[37] L. H. Lee, K. Yung Lam, Y. P. Yau, T. Braud, and P. Hui. Hibey: Hide the keyboard in augmented reality. In 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 1-10, 2019. doi: 10.1109/PERCOM.2019.8767420

[38] N. C. Ltd. Nintendo wii. http://wii.com/, 2006. Accessed: 2020-11-03.

[39] P. Lubos, G. Bruder, and F. Steinicke. Analysis of direct selection in head-mounted display environments. In 2014 IEEE Symposium on 3D User Interfaces (3DUI), pp. 11-18, 2014. doi: 10.1109/3DUI.2014.6798834

[40] A. Millette and M. J. McGuffin. Dualcad: Integrating augmented reality with a desktop gui and smartphone interaction. In 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 21-26, 2016. doi: 10.1109/ISMAR-Adjunct.2016.0030

[41] M. R. Mine. Virtual environment interaction techniques. Technical report, USA, 1995.

[42] M. Nancel, O. Chapuis, E. Pietriga, X.-D. Yang, P. P. Irani, and M. Beaudouin-Lafon. High-precision pointing on large wall displays using small handheld devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 831-840. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2470773

[43] E. Normand and M. J. McGuffin. Enlarging a smartphone with ar to create a handheld vesad (virtually extended screen-aligned display). In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 123-133, 2018. doi: 10.1109/ISMAR.2018.00043

[44] Nreal.
https://www.nreal.ai/, 2020. Accessed: 2020-11-03. + +[45] D.-M. Pham and W. Stuerzlinger. Hawkey: Efficient and versatile text entry for virtual reality. In 25th ACM Symposium on Virtual Reality Software and Technology, VRST '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3359996.3364265 + +[46] J. Ren, Y. Weng, C. Zhou, C. Yu, and Y. Shi. Understanding window management interactions in ar headset + smartphone interface. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, p. 1-8. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3334480. 3382812 + +[47] R. Rivu, Y. Abdrabou, K. Pfeuffer, M. Hassib, and F. Alt. Gaze'n'touch: Enhancing text selection on mobile devices using gaze. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, p. 1-8. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3334480.3382802 + +[48] H. Ro, J. Byun, Y. Park, N. Lee, and T. Han. Ar pointer: Advanced ray-casting interface using laser pointer metaphor for object manipulation in 3d augmented reality environment. Applied Sciences (Switzerland), 9(15), Aug. 2019. doi: 10.3390/app9153078 + +[49] S. Siddhpuria, S. Malacria, M. Nancel, and E. Lank. Pointing at a Distance with Everyday Smart Devices, p. 1-11. Association for Computing Machinery, New York, NY, USA, 2018. + +[50] S. Sindhwani, C. Lutteroth, and G. Weber. Retype: Quick text editing with keyboard and gaze. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/ 3290605.3300433 + +[51] R. Technology. iphone air mouse. http://mobilemouse.com/, 2008. Accessed: 2020-11-03. + +[52] D. Vogel and R. Balakrishnan. Distant freehand pointing and clicking on very large, high resolution displays. 
In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, UIST '05, p. 33-42. Association for Computing Machinery, New York, NY, USA, 2005. doi: 10.1145/1095034.1095041

[53] K. Waldow, M. Misiak, U. Derichs, O. Clausen, and A. Fuhrmann. An evaluation of smartphone-based interaction in AR for constrained object manipulation. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, VRST '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3281505.3281608

[54] C. Ware and H. H. Mikaelian. An evaluation of an eye tracker as a device for computer input. In Proceedings of the SIGCHI/GI Conference on Human Factors in Computing Systems and Graphics Interface, CHI '87, p. 183-188. Association for Computing Machinery, New York, NY, USA, 1986. doi: 10.1145/29933.275627

[55] D. Wolf, J. Gugenheimer, M. Combosch, and E. Rukzio. Understanding the heisenberg effect of spatial interaction: A selection induced error for spatially tracked input devices. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-10. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376876

[56] W. Xu, H. Liang, A. He, and Z. Wang. Pointing and selection methods for text entry in augmented reality head mounted displays. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 279-288, 2019. doi: 10.1109/ISMAR.2019.00026

[57] W. Xu, H.-N. Liang, Y. Zhao, D. Yu, and D. Monteiro. DMove: Directional motion-based interaction for augmented reality head-mounted displays. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300674

[58] Y. Yan, Y. Shi, C. Yu, and Y. Shi. Headcross: Exploring head-based crossing selection on head-mounted displays. Proc. ACM Interact.
Mob. Wearable Ubiquitous Technol., 4(1), Mar. 2020. doi: 10.1145/3380983

[59] M. Zhang and J. O. Wobbrock. Gedit: Keyboard gestures for mobile text editing. In Proceedings of Graphics Interface 2020, GI 2020, pp. 470-473. Canadian Human-Computer Communications Society, 2020. doi: 10.20380/GI2020.47

[60] F. Zhu and T. Grossman. BiShare: Exploring bidirectional interactions between smartphones and head-mounted augmented reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376233

§ EXPLORING SMARTPHONE-ENABLED TEXT SELECTION IN AR-HMD

Category: Research

Figure 1: (a) The overall experimental setup consisted of a HoloLens, a smartphone, and an OptiTrack system. (b) In the HoloLens view, a user sees two text windows. The right one is the 'instruction panel' where the subject sees the text to select.
The left is the 'action panel' where the subject performs the actual selection. The cursor is shown inside a green dotted box (for illustration purposes only) on the action panel. For each text selection task, the cursor position always starts from the center of the window.

§ ABSTRACT

Text editing is important and at the core of most complex tasks, like writing an email or browsing the web. Efficient and sophisticated techniques exist on desktops and touch devices, but the topic is still under-explored for Augmented Reality Head-Mounted Displays (AR-HMD). Text selection, a necessary step before text editing, is commonly performed in AR displays using techniques such as hand-tracking, voice commands, and eye/head-gaze, which are cumbersome and lack precision. In this paper, we explore the use of a smartphone as an input device to support text selection in AR-HMD because of its availability, familiarity, and social acceptability. We propose four eyes-free text selection techniques, all using a smartphone: continuous touch, discrete touch, spatial movement, and raycasting. We compare them in a user study where users have to select text at various granularity levels. Our results suggest that continuous touch, in which the smartphone is used as a trackpad, outperforms the other three techniques in terms of task completion time, accuracy, and user preference.

Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods

§ 1 INTRODUCTION

Text input and text editing represent a significant portion of our everyday digital tasks. We need them when we browse the web, write emails, or simply type a password. Because of this ubiquity, they have been the focus of research on most of the platforms we use daily, like desktops, tablets, and mobile phones.
The recent focus of the industry on Augmented Reality Head-Mounted Displays (AR-HMD), with the development of devices like the Microsoft HoloLens¹ and Magic Leap², has made them more and more accessible, and their usage is envisioned in our future everyday life. The lack of a physical keyboard and mouse (i.e., the absence of interactive surfaces) with such devices makes text input difficult and an important challenge in AR research. While text input for AR-HMD has already been well studied [17, 37, 45, 56], very little research has focused on editing text that has already been typed by a user. Text editing is a complex task whose first step is selecting the text to edit; this paper focuses solely on this text selection step. Such tasks have already been studied on desktops [10] with various modalities (like gaze+gesture [14] or gaze with keyboard [50]) as well as on touch interfaces [21]. In contrast, no formal experiments have been conducted in AR-HMD contexts.

In general, text selection in AR-HMD can be performed using various input modalities, notably hand-tracking, eye/head-gaze, voice commands [20], and handheld controllers [33]. However, these techniques have their limitations. For instance, hand-tracking struggles to achieve character-level precision [39], lacks haptic feedback [13], and provokes arm fatigue [30] during prolonged interaction. Eye-gaze and head-gaze suffer from the 'Midas Touch' problem, which causes unintended activation of commands in the absence of a proper selection mechanism [28, 31, 54, 58]. Moreover, frequent head movements in head-gaze interaction increase motion sickness [57]. Voice interaction might not be socially acceptable in public places [25], and it may disturb the communication flow when several users are collaborating. In the case of a dedicated handheld controller, users need to always carry extra specific hardware with them.
Recently, researchers have been exploring the use of a smartphone as an input device for AR-HMD because of its availability (it can even be the processing unit of the HMD [44]), familiarity, social acceptability, and tangibility [8, 22, 60]. Undoubtedly, there is huge potential for designing novel cross-device applications that combine an AR display and a smartphone. In the past, smartphones have been used for interacting with different applications running on AR-HMDs, such as manipulating 3D objects [40], managing windows [46], selecting graphical menus [35], and so on. However, we are unaware of any research that has investigated text selection in an AR display using a commercially available smartphone. In this work, we explored different approaches to selecting text when using a smartphone as an input controller. We propose four eyes-free text selection techniques for AR displays. These techniques, described in Section 3.1, differ with regard to the mapping of smartphone-based inputs: touch or spatial. We then conducted a user study to compare these four techniques in terms of text selection task performance.

The main contributions of this paper are: (1) the design and development of a set of smartphone-enabled text selection techniques for AR-HMD; (2) insights from a 20-person comparative study of these techniques in text selection tasks.

¹ https://www.microsoft.com/en-us/hololens

² https://www.magicleap.com/en-us

§ 2 RELATED WORK

In this section, we review previous work on text selection and editing in AR and on smartphones. We also review research that combines handheld devices with HMDs and large wall displays.

§ 2.1 TEXT SELECTION AND EDITING IN AR

Little research has focused on text editing in AR. Ghosh et al. presented EYEditor to facilitate on-the-go text editing on a smart-glass with a combination of voice and a handheld controller [20].
They used voice to modify the text content, while manual input was used for text navigation and selection. The use of a handheld device is inspiring for our work; however, voice interaction might not be suitable in public places. Lee et al. [36] proposed two force-assisted text acquisition techniques where the user exerts a force on a thumb-sized circular button located on an iPhone 7 and selects text shown on a laptop emulating the Microsoft HoloLens display. They envision that this miniature force-sensitive area (12 mm × 13 mm) could be fitted into a smart-ring. Although their results are promising, a specific force-sensitive device is required.

In this paper, we follow the direction of the two papers presented above and continue to explore the use of a smartphone in combination with an AR-HMD. While its use for text selection is still rare, it has been investigated more broadly for other tasks.

§ 2.2 COMBINING HANDHELD DEVICES AND HMDS/LARGE WALL DISPLAYS

By combining handheld devices and HMDs, researchers try to make the most of the benefits of both [60]. On one hand, the handheld device brings a 2D high-resolution display that provides a multi-touch, tangible, and familiar interactive surface. On the other hand, HMDs provide a spatialized, 3D, and almost infinite workspace. With MultiFi [22], Grubert et al. showed that such a combination is more efficient than a single device for pointing and searching tasks. For a similar setup, Zhu and Grossman proposed a set of techniques and demonstrated how the combination can be used to manipulate 3D objects [60]. Similarly, Ren et al. [46] demonstrated how it can be used to perform window management. Finally, in VESAD [43], Normand et al. used AR to directly extend the smartphone display.
Regarding the type of input provided by the handheld device, it is possible to focus only on touch interactions, as proposed in Input Forager [1] and Dual-MR [34]. Waldow et al. compared the use of touch for 3D object manipulation to gaze and mid-air gestures and showed that touch was more efficient [53]. It is also possible to track the handheld device in space and allow for 3D spatial interactions. This was done in DualCAD, in which Millette and McGuffin used a smartphone tracked in space to create and manipulate shapes using both spatial interactions and touch gestures [40]. With ARPointer [48], Ro et al. proposed a similar system and showed that it led to better performance for object manipulation than a mouse and keyboard or a combination of gaze and mid-air gestures. When comparing the use of touch and spatial interaction with a smartphone, Budhiraja et al. showed that touch was preferred by participants for a pointing task [7], but Büschel et al. showed that spatial interaction was more efficient and preferred for a navigation task in 3D [8]. In both cases, Chen et al. showed that the interaction should be viewport-based and not world-based [16].

Overall, previous research showed that a handheld device provides a good alternative input for augmented reality displays in various tasks. In this paper, we focus on a text selection task, which has not been studied yet. It is not yet clear whether only tactile interactions should be used on the handheld device or if it should also be tracked to provide spatial interactions. Thus, we propose both alternatives in our techniques and compare them.

The use of handheld devices as input was also investigated in combination with large wall displays. This is a use case close to the one presented in this paper, as text is displayed inside a 2D virtual window. Campbell et al. studied the use of a Wiimote as a distant pointing device [9].
With a pointing task, the authors compared its use with an absolute mapping (i.e. raytracing) to a relative mapping, and showed that participants were faster with the absolute mapping. Vogel and Balakrishnan found similar results between the two mappings (with the difference that they directly tracked the hand), but only with large targets and when clutching was necessary [52]. They also found that participants had lower accuracy with an absolute mapping. This lower accuracy of an absolute mapping with spatial interaction was also shown when compared with distant touch interaction using the handheld device as a trackpad, with the same task [4]. Jain et al. also compared touch interaction with spatial interaction, but with a relative mapping, and found that the spatial interaction was faster but less accurate [29]. The accuracy result is confirmed by a recent study from Siddhpuria et al. in which the authors also compared the use of absolute and relative mappings with touch interaction, and found that the relative mapping is faster [49]. These studies were all done for a pointing task, and overall showed that using the handheld device as a trackpad (i.e., with a relative mapping) is more efficient (to avoid clutching, one can adapt the transfer function [42]). In their paper, Siddhpuria et al. highlighted the fact that more studies needed to be done with a more complex task to validate their results. To our knowledge, this has been done only by Baldauf et al. with a drawing task, and they showed that spatial interaction with an absolute mapping was faster than using the handheld device as a trackpad without any impact on accuracy [4]. In this paper, we take a step in this direction and use a text selection task. Considering the result from Baldauf et al., we cannot assume that touch interaction will perform better.
§ 2.3 TEXT SELECTION ON HANDHELD DEVICES

Text selection has not yet been investigated with the combination of a handheld device and an AR-HMD, but it has been studied on handheld devices independently. On a touchscreen, adjustment handles are the primary form of text selection technique. However, due to the fat-finger problem [5], it can be difficult to modify the selection by one character. A first solution is to let users select only the start and the end of the selection, as done in TextPin, where it is shown to be more efficient than the default technique [26]. Fuccella et al. [19] and Zhang et al. [59] proposed to use the keyboard area to let the user control the selection using gestures and showed it was also more efficient than the default technique. Ando et al. adapted the principle of shortcuts and associated different actions with the keys of the virtual keyboard, activated by a modifier action. In their first paper, the modifier was the tilting of the device [2], and in a second one, it was a sliding gesture starting on the key [3]. The latter was more efficient than the former and the default technique. With BezelCopy [15], a gesture on the bezel of the phone allows for a first rough selection that can be refined afterwards. Finally, other solutions used non-traditional smartphones. Le et al. used a fully touch-sensitive device to allow users to perform gestures on the back of the device [32]. Gaze'N'Touch [47] used gaze to define the start and end of the selection. Goguey et al. explored the use of a force-sensitive screen to control the selection [21], and Eady and Girouard used a deformable screen to explore the use of screen bending [18].

In this work, we choose to focus on commercially available smartphones, and we will not explore the use of deformable or fully touch-sensitive ones.
Compared to the use of shortcuts, the use of gestures seems to lead to good performance and can be performed without looking at the screen (i.e., eyes-free), which avoids transitions between the AR virtual display and the handheld device.

Figure 2: Illustrations of our proposed interaction techniques: (a) continuous touch; (b) discrete touch; (c) spatial movement; (d) raycasting.

§ 3 DESIGNING SMARTPHONE-BASED TEXT SELECTION IN AR-HMD

§ 3.1 PROPOSED TECHNIQUES

Previous work used a smartphone as an input device to interact with virtual content in AR-HMD mainly in two ways: using touch input from the smartphone, or tracking the smartphone spatially like an AR/VR controller. Similar work on wall displays suggested that using the smartphone as a trackpad would be the most efficient technique, but this was tested with a pointing task (see Related Work). With a drawing task (which could be closer to a text selection task than a pointing task), spatial interaction was actually better [4].

Inspired by this, we propose four eyes-free text selection techniques for AR-HMD: two are entirely based on mobile touchscreen interaction, whereas the smartphone needs to be tracked in mid-air for the latter two, which use spatial interactions. For spatial interaction, we chose a technique with an absolute mapping (Raycasting) and one with a relative mapping (Spatial Movement). The comparison between the two is not straightforward in our case: previous results suggest that a relative mapping would have better accuracy, but an absolute one would be faster. For touch interaction, we chose not to use an absolute mapping, as its use with a large virtual window could lead to poor accuracy [42], and instead only used techniques with a relative mapping. In addition to the traditional use of the smartphone as a trackpad (Continuous Touch), we propose a technique that allows for a discrete selection of text (Discrete Touch).
Such a discrete selection mechanism has shown good results in a similar context for shape selection [29]. Overall, while we took inspiration from previous work for these techniques, they have never been assessed together for a text selection task.

To select text successfully using any of our proposed techniques, a user needs to follow the same sequence of steps each time. First, she moves the cursor, located on the text window in the AR display, to the beginning of the text to be selected (i.e., the first character). Then, she performs a double tap on the phone to confirm the selection of that first character. She can see on the headset screen that the first character gets highlighted in yellow. At the same time, she enters the text selection mode. Next, she continues moving the cursor to the end position of the text using one of the techniques presented below. While the cursor is moving, the text is highlighted simultaneously up to the current position of the cursor. Finally, she ends the text selection with a second double tap.

§ 3.1.1 CONTINUOUS TOUCH

In continuous touch, the smartphone touchscreen acts as a trackpad (see Figure 2(a)). It is an indirect pointing technique where the user moves her thumb on the touchscreen to change the cursor position on the AR display. For the mapping between display and touchscreen, we used a relative mode with clutching. As clutching may degrade performance [12], a control-display (CD) gain was applied to minimize it (see Section 3.2).

§ 3.1.2 DISCRETE TOUCH

This technique is inspired by the text selection with keyboard shortcuts available in both Mac [27] and Windows [24] OS. In this work, we tried to emulate a few keyboard shortcuts. We particularly considered imitating keyboard shortcuts related to character-, word-, and line-level text selection. For example, in Mac OS, holding down ⇧ (Shift) and pressing → or ← extends the text selection one character to the right or left.
Holding down ⇧ + ⌥ (Shift + Option) and pressing → or ← allows users to select text one word to the right or left. To extend the selection to the nearest character at the same horizontal location on the line above or below, a user holds down ⇧ and presses ↑ or ↓, respectively. In discrete touch interaction, we replicated all these shortcuts using directional swipe gestures (see Figure 2(b)). A left or right swipe can select text at both levels, word as well as character. By default, it works at the word level. The user performs a long-tap, which acts as a toggle button to switch between word- and character-level selection. On the other hand, an up or down swipe selects text one line above or below the current position. The user can only select one character/word/line at a time with the respective swipe gesture.

Note that, to select text using discrete touch, a user first positions the cursor on top of the starting word (not the starting character) of the text to be selected by touch-dragging on the smartphone, as described in the continuous touch technique. From a pilot study, we observed that moving the cursor every time to the starting word using discrete touch makes the overall interaction slow. Then, she selects that first word with the double tap and uses discrete touch to select text up to the end position as described before.

§ 3.1.3 SPATIAL MOVEMENT

This technique emulates the smartphone as an air-mouse [38, 51] for AR-HMD. To control the cursor position on the headset screen, the user holds the phone in front of her torso, places her thumb on the touchscreen, and then moves the phone in the air with small forearm motions in a plane perpendicular to the gaze direction (see Figure 2(c)).
While moving the phone, its tracked positional data in XY-coordinates is translated into cursor movement in XY-coordinates inside the 2D window. When the user wants to stop the cursor movement, she simply lifts her thumb from the touchscreen. Thumb touch-down and touch-release events define the start and stop of the cursor movement on the AR display. The user determines the speed of the cursor by simply moving the phone faster or slower. We applied a CD gain between the phone movement and the cursor displacement on the text window (see Section 3.2).

| Technique | $CD_{Max}$ | $CD_{Min}$ | $\lambda$ | $V_{inf}$ |
|---|---|---|---|---|
| Continuous Touch | 28.34 | 0.0143 | 36.71 | 0.039 |
| Spatial Movement | 23.71 | 0.0221 | 32.83 | 0.051 |

Table 1: Logistic function parameter values for continuous touch and spatial movement interaction.

§ 3.1.4 RAYCASTING

Raycasting is a popular interaction technique in AR/VR environments to select 3D virtual objects [6, 41]. In this work, we developed a smartphone-based raycasting technique for selecting text displayed on a 2D window in AR-HMD (see Figure 2(d)). A 6-DoF tracked smartphone was used to define the origin and orientation of the ray. In the headset display, the user sees the ray as a straight line appearing from the top of the phone. By default, the ray is always visible to the user in the AR-HMD as long as the phone is being tracked properly. In raycasting, the user makes small angular wrist movements to point at the text content using the ray. The cursor appears where the ray hits the text window. Compared to the other proposed methods, raycasting does not require clutching, as it allows direct pointing at the target. The user confirms the target selection on the AR display by providing a touch input (i.e., double tap) from the phone.
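The two-double-tap selection flow shared by all four techniques (anchor the first character, grow the selection while the cursor moves, commit with a second double tap) can be sketched as a small state machine. This is an illustrative sketch in Python, not the study software (which was built in Unity3D); all class and attribute names are hypothetical:

```python
from enum import Enum, auto

class Mode(Enum):
    POINTING = auto()   # cursor moves freely, nothing is selected
    SELECTING = auto()  # first double tap done, highlight grows with the cursor

class SelectionController:
    """Minimal sketch of the shared select-move-confirm flow."""
    def __init__(self):
        self.mode = Mode.POINTING
        self.cursor = 0          # character index under the cursor
        self.anchor = None       # index fixed by the first double tap
        self.committed = None    # (start, end) after the second double tap

    def move_cursor(self, index):
        # Driven by any technique: touch drag, swipe, air-mouse, or ray hit.
        self.cursor = index

    def double_tap(self):
        if self.mode is Mode.POINTING:
            self.anchor = self.cursor      # first character gets highlighted
            self.mode = Mode.SELECTING
        else:
            lo, hi = sorted((self.anchor, self.cursor))
            self.committed = (lo, hi)      # selection ends here
            self.mode = Mode.POINTING

ctrl = SelectionController()
ctrl.move_cursor(10); ctrl.double_tap()    # anchor at character 10
ctrl.move_cursor(42); ctrl.double_tap()    # commit the range 10..42
```

The only technique-specific part is how `move_cursor` is fed; the anchoring and committing logic stays identical, which is what makes the four techniques directly comparable.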
§ 3.2 IMPLEMENTATION

To prototype our proposed interaction techniques, we used a Microsoft HoloLens 2 (42° × 29° screen) as the AR-HMD device and a OnePlus 5 as the smartphone. For spatial movement and raycasting interactions, real-time pose information of the smartphone is needed. An OptiTrack³ system with three Flex-13 cameras was used for accurate tracking with low latency. To bring the HoloLens and the smartphone into a common coordinate system, we attached passive reflective markers to them and performed a calibration between HoloLens space and OptiTrack space.

In our software framework, the AR application running on HoloLens was implemented using Unity3D (2018.4) and the Mixed Reality Toolkit⁴. To render text in HoloLens, we used TextMeshPro. A Windows 10 workstation was used to stream tracking data to HoloLens. All pointing techniques with the phone were also developed using Unity3D. We used the UNet⁵ library for client-server communications between devices over the WiFi network.

For continuous touch and spatial movement interactions, we used a generalized logistic function [42] to define the control-display (CD) gain between the move events (either on the touchscreen or in the air) and the cursor displacement in the AR display:

$$
CD\left( v\right) = \frac{CD_{Max} - CD_{Min}}{1 + e^{-\lambda \times \left( v - V_{inf}\right) }} + CD_{Min} \tag{1}
$$

$CD_{Max}$ and $CD_{Min}$ are the asymptotic maximum and minimum amplitudes of the CD gain, and $\lambda$ is a parameter proportional to the slope of the function at $v = V_{inf}$, with $V_{inf}$ the inflection value of the function. We derived initial values from the parameter definitions of Nancel et al. [42], and then optimized them empirically for each technique. The parameters were not changed during the study for individual participants. The values are summarized in Table 1.
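Eq. (1) with the Table 1 parameters can be sanity-checked by implementing the transfer function directly. Below is a sketch in Python for illustration (the prototype itself ran in Unity3D); the helper names are hypothetical:

```python
import math

# (CD_Max, CD_Min, lambda, V_inf) per technique, from Table 1.
PARAMS = {
    "continuous_touch": (28.34, 0.0143, 36.71, 0.039),
    "spatial_movement": (23.71, 0.0221, 32.83, 0.051),
}

def cd_gain(v, cd_max, cd_min, lam, v_inf):
    """Generalized logistic CD gain of Eq. (1): low gain for slow,
    precise motions; high gain for fast motions to limit clutching."""
    return (cd_max - cd_min) / (1.0 + math.exp(-lam * (v - v_inf))) + cd_min

def cursor_displacement(device_delta, speed, technique):
    # Scale one device move event (touch or in-air) by the speed-dependent gain.
    return device_delta * cd_gain(speed, *PARAMS[technique])

# At the inflection speed V_inf, the exponential term is 1, so the gain
# sits exactly halfway between CD_Min and CD_Max.
cd_max, cd_min, lam, v_inf = PARAMS["continuous_touch"]
mid_gain = cd_gain(v_inf, cd_max, cd_min, lam, v_inf)  # = (CD_Max + CD_Min) / 2
```

Because $\lambda$ is large for both techniques, the gain transitions sharply around $V_{inf}$: well below it the cursor moves almost 1:1 scaled by $CD_{Min}$, and well above it the gain saturates near $CD_{Max}$.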
In discrete touch interaction, we implemented up, down, left, and right swipes by obtaining touch position data from the phone. We considered a 700 msec time-window (found empirically) for detecting a long-tap event. Users get vibration feedback from the phone when they perform a long-tap successfully. They also receive vibration haptics while double tapping to start and end the text selection in all interaction techniques. Note that we did not provide haptic feedback for swipe gestures. With each swipe movement, users can see the text getting highlighted in yellow. This acts as the default visual feedback for touch swipes.
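The discrete-touch event handling described above can be sketched as a simple classifier over one touch sequence: the dominant axis of the finger travel picks the swipe direction, and a tap held for at least 700 msec toggles the word/character granularity. This is a hypothetical sketch; the minimum swipe travel threshold is assumed, not reported in the paper:

```python
LONG_TAP_MS = 700   # empirically found long-tap time-window (Section 3.2)
SWIPE_MIN_PX = 30   # assumed minimum finger travel to count as a swipe

def classify_touch(dx, dy, duration_ms):
    """Map one touch sequence (total travel dx, dy in screen pixels,
    with y growing downward) to a discrete-touch event."""
    if abs(dx) < SWIPE_MIN_PX and abs(dy) < SWIPE_MIN_PX:
        # No meaningful travel: either a long-tap (granularity toggle)
        # or a plain tap (ignored by discrete touch).
        return "toggle_granularity" if duration_ms >= LONG_TAP_MS else "tap"
    if abs(dx) >= abs(dy):                               # horizontal swipe
        return "extend_right" if dx > 0 else "extend_left"
    return "extend_down" if dy > 0 else "extend_up"      # vertical swipe
```

Each `extend_*` event then moves the selection endpoint by one unit (a word or a character for horizontal swipes, depending on the current granularity; a line for vertical swipes), mirroring the keyboard shortcuts of Section 3.1.2.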
Figure 3: Text selection tasks used in the experiments: (1) word (2) sub-word (3) word to a character (4) four words (5) one sentence (6) paragraph to three sentences (7) one paragraph (8) two paragraphs (9) three paragraphs (10) whole text.

In the spatial movement technique, we noticed that the phone moves slightly during the double tap event. This results in a slight unintentional cursor movement. To reduce it, we suspended cursor movement for 300 msec (found empirically) when there is any touch event on the phone screen.

In raycasting, we applied the 1€ Filter [11] with $\beta = 80$ and min-cutoff $= 0.6$ (empirically tested) at the ray source to minimize the jitter and latency which usually occur due to both hand tremor and double tapping [55]. We set the ray length to 8 meters by default. The user sees the full length of the ray when it is not hitting the text panel.

§ 4 EXPERIMENT

To assess the impact of the different characteristics of these four interaction techniques, we performed a comparative study with a text selection task while users were standing up. In particular, we are interested in evaluating the performance of these techniques in terms of task completion time, accuracy, and perceived workload.

§ 4.1 PARTICIPANTS AND APPARATUS

In our experiment, we recruited 20 unpaid participants (P1-P20; 13 males, 7 females) from a local university campus. Their ages ranged from 23 to 46 years (mean = 27.84, SD = 6.16). Four were left-handed. All were daily users of smartphones and desktops. With respect to their experience with AR/VR technology, 7 participants ranked themselves as experts because they study and work in the field, 4 participants were beginners as they had played some games in VR, while the others had no prior experience. They all had either normal or corrected-to-normal vision. We used the apparatus and prototype described in Subsection 3.2.
§ 4.2 TASK

In this study, we asked participants to perform a series of text selections using our proposed techniques. Participants were standing up for the entire duration of the experiment. We reproduced different realistic usages by varying the type of text selection to perform, like the selection of a word, a sentence, a paragraph, etc. Figure 3 shows all the types of text selection that participants were asked to perform. Concretely, the experiment scene in HoloLens consisted of two vertical windows of 102.4 cm × 57.6 cm positioned at a distance of 180 cm from the headset at the start of the application (i.e., their visual size was 31.75° × 18.1806°). The windows were anchored in the world coordinate system. These two panels contain the same text. Participants were asked to select the text in the action panel (left panel in Figure 1(b)) that is highlighted in the instruction panel (right panel in Figure 1(b)). The user controls a cursor (i.e., a small circular red dot, as shown in Figure 1(b)) using one of the techniques on the smartphone. Its position is always bounded by the window size. The text content was generated by Random Text Generator⁶ and was displayed using the Liberation Sans font with a font size of 25 pt (to allow comfortable viewing from a few meters).

³ https://optitrack.com/

⁴ https://github.com/microsoft/MixedRealityToolkit-Unity

⁵ https://docs.unity3d.com/Manual/UNet.html

Figure 4: (a) Mean task completion time for our proposed four interaction techniques. Lower scores are better. (b) Mean error rate of the interaction techniques. Lower scores are better. Error bars show 95% confidence intervals.
Statistical significances are marked with stars (**: $p < .01$ and *: $p < .05$).

§ 4.3 STUDY DESIGN

We used a within-subject design with two factors: 4 INTERACTION TECHNIQUE (Continuous touch, Discrete touch, Spatial movement, and Raycasting) × 10 TEXT SELECTION TYPE (shown in Figure 3) × 20 participants = 800 trials. The order of INTERACTION TECHNIQUE was counterbalanced across participants using a Latin square. The order of TEXT SELECTION TYPE was randomized in each block for each INTERACTION TECHNIQUE (but was the same for each participant).

§ 4.4 PROCEDURE

We welcomed participants upon arrival. They were asked to read and sign the consent form and fill out a pre-study questionnaire to collect demographic information and prior AR/VR experience. Next, we gave them a brief introduction to the experiment background, the hardware, the four interaction techniques, and the task involved in the study. After that, we helped participants put on the HoloLens comfortably and complete the calibration process for their personal interpupillary distance (IPD). For each block of INTERACTION TECHNIQUE, participants completed a practice phase followed by a test session. During the practice, the experimenter explained how the current technique worked, and participants were encouraged to ask questions. Then, they had time to train themselves with the technique until they were fully satisfied, which took around 7 minutes on average. Once they felt confident with the technique, the experimenter launched the application for the test session. They were instructed to do the task as quickly and accurately as possible in the standing condition. To avoid noise due to participants using either one or two hands, we asked them to only use their dominant hand.

At the beginning of each trial in the test session, the text to select was highlighted in the instruction panel.
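The counterbalancing above can be reproduced with a balanced Latin square, in which every technique appears once in every position and each technique immediately precedes every other equally often. A small sketch for an even number of conditions (the function name is ours; the paper does not give its construction):

```python
def balanced_latin_square(conditions):
    """Balanced Latin square for an even number of conditions: row r is
    the canonical sequence 0, 1, n-1, 2, n-2, ... shifted by r (mod n)."""
    n = len(conditions)
    first, lo, hi = [0], 1, n - 1
    for k in range(1, n):
        # Alternate between the next-lowest and next-highest index.
        if k % 2 == 1:
            first.append(lo); lo += 1
        else:
            first.append(hi); hi -= 1
    return [[conditions[(x + r) % n] for x in first] for r in range(n)]

techniques = ["Continuous touch", "Discrete touch",
              "Spatial movement", "Raycasting"]
orders = balanced_latin_square(techniques)
# With 20 participants and 4 orders, each order is assigned to 5 participants.
```

For four conditions this yields four orders, so each of the 20 participants is assigned one of them in rotation.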
Once they were satisfied with their selection, participants had to press a dedicated button on the phone screen to move to the next task. They were allowed to use their non-dominant hand only to press this button. At the end of each block of INTERACTION TECHNIQUE, they answered a NASA-TLX questionnaire [23] on an iPad and moved to the next condition.

At the end of the experiment, we gave participants a questionnaire in which they had to rank the techniques by speed, accuracy, and overall preference, and we performed an informal post-test interview.

The entire experiment took approximately 80 minutes in total. Participants were allowed to take breaks between sessions, during which they could sit, and they were encouraged to comment at any time during the experiment. To respect COVID-19 safety protocols, participants wore FFP2 masks and maintained a 1 meter distance from the experimenter at all times.

§ 4.5 MEASURES

We recorded completion time as the time taken to select the text from its first character to the last character, which is the time difference between the first and second double tap. If participants selected more or fewer characters than expected, the trial was considered wrong. We then calculated the error rate as the percentage of wrong trials for each condition. Finally, as stated above, participants filled in a NASA-TLX questionnaire to measure the subjective workload of each INTERACTION TECHNIQUE, and their preference was measured using a ranking questionnaire at the end of the experiment.

§ 4.6 HYPOTHESES

In our experiment, we hypothesized that:

H1. Continuous touch, Spatial movement, and Raycasting will be faster than Discrete touch because a user needs to spend more time performing multiple swipes and frequent mode switching to select text at the character/word/sentence level.

H2.
Discrete touch will be more mentally demanding compared to all other techniques because the user needs to remember the mapping between swipe gestures and text granularity, as well as the long-tap for mode switching.

H3. The user will perceive Spatial movement as more physically demanding because it involves more forearm movements.

H4. The user will make more errors in Raycasting, and it will be more frustrating, because double tapping for target confirmation while holding the phone in one hand will introduce more jitter [55].

${}^{6}$ http://randomtextgenerator.com/

Figure 5: Mean scores for the ranking questionnaire, which uses a 3-point Likert scale. Higher marks are better. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: $p < .01$ and *: $p < .05$).

H5. Overall, Continuous touch will be the most preferred text selection technique as it works similarly to the trackpad, which is already familiar to users.

§ 5 RESULTS

To test our hypotheses, we conducted a series of analyses using IBM SPSS software. Shapiro-Wilk tests showed that the task completion time, total error, and questionnaire data were not normally distributed. Therefore, we used the Friedman test with the interaction technique as an independent variable to analyze our experimental data. When significant effects were found, we reported post hoc tests using the Wilcoxon signed-rank test and applied Bonferroni corrections for all pair-wise comparisons. We set $\alpha = 0.05$ in all significance tests. Due to a logging issue, we had to discard one participant and did the analysis with 19 instead of 20 participants.
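The analysis pipeline described above (a Friedman omnibus test followed by Wilcoxon signed-rank post hoc tests with a Bonferroni correction) can be sketched with SciPy rather than SPSS; `analyze` and its input layout are our own, and the synthetic data below are illustrative, not the study's data:

```python
from itertools import combinations

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

def analyze(measure_by_technique, alpha=0.05):
    """measure_by_technique maps technique name -> per-participant values
    (equal-length arrays, matched by participant). Runs Friedman; if
    significant, runs pairwise Wilcoxon tests with Bonferroni correction."""
    names = list(measure_by_technique)
    samples = [np.asarray(measure_by_technique[n]) for n in names]
    chi2, p = friedmanchisquare(*samples)
    results = {"friedman": (chi2, p), "posthoc": {}}
    if p < alpha:
        pairs = list(combinations(range(len(names)), 2))
        for i, j in pairs:
            _, p_ij = wilcoxon(samples[i], samples[j])
            # Bonferroni: multiply each pairwise p by the number of comparisons.
            results["posthoc"][(names[i], names[j])] = min(1.0, p_ij * len(pairs))
    return results
```

Each corrected pairwise p-value is then compared against the same $\alpha = 0.05$ threshold.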
§ 5.1 TASK COMPLETION TIME

There was a statistically significant difference in task completion time depending on which interaction technique was used for text selection $[{\chi }^{2}(3) = 33.37, p < .001]$ (see Figure 4(a)). Post hoc tests showed that Continuous touch [M = 5.16, SD = 0.84], Spatial movement [M = 5.73, SD = 1.38], and Raycasting [M = 5.43, SD = 1.66] were faster than Discrete touch [M = 8.78, SD = 2.09].

§ 5.2 ERROR RATE

We found significant effects of the interaction technique on error rate $[{\chi }^{2}(3) = 39.45, p < .001]$ (see Figure 4(b)). Post hoc tests showed that Raycasting [M = 24.21, SD = 13.46] was more error-prone than Continuous touch [M = 1.05, SD = 3.15], Discrete touch [M = 4.73, SD = 9.05], and Spatial movement [M = 8.42, SD = 12.58].

§ 5.3 QUESTIONNAIRES

For NASA-TLX, we found significant differences for mental demand $[{\chi }^{2}(3) = 9.65, p = .022]$, physical demand $[{\chi }^{2}(3) = 29.75, p < .001]$, performance $[{\chi }^{2}(3) = 40.14, p < .001]$, frustration $[{\chi }^{2}(3) = 39.53, p < .001]$, and effort $[{\chi }^{2}(3) = 32.69, p < .001]$. Post hoc tests showed that Raycasting and Discrete touch had significantly higher mental demand compared to Continuous touch and Spatial movement.
On the other hand, physical demand was lowest for Continuous touch, whereas users rated physical demand significantly higher for Raycasting and Spatial movement. In terms of performance, Raycasting was rated significantly lower than the other techniques. Raycasting was also rated significantly more frustrating. Moreover, Continuous touch was the least frustrating and was rated better in performance than Spatial movement. Figure 6 shows a bar chart of the NASA-TLX workload sub-scales for our experiment.

For the ranking questionnaires, there were significant differences for speed $[{\chi }^{2}(3) = 26.40, p < .001]$, accuracy $[{\chi }^{2}(3) = 45.5, p < .001]$, and preference $[{\chi }^{2}(3) = 38.56, p < .001]$. Post hoc tests showed that users ranked Discrete touch as the slowest and Raycasting as the least accurate technique. The most preferred technique was Continuous touch, whereas Raycasting was the least preferred. Users also favored Discrete touch as well as the Spatial movement based text selection approach. Figure 5 summarises participants' responses to the ranking questionnaires.

§ 6 DISCUSSION & DESIGN IMPLICATIONS

Our results suggest that Continuous Touch was the technique preferred by the participants (confirming H5). It was the least physically demanding technique and the least frustrating one. It was also more satisfying regarding performance than the two spatial ones (Raycasting and Spatial Movement). Finally, it was less mentally demanding than Discrete Touch and Raycasting. Participants pointed out that this technique was simple, intuitive, and familiar to them, as they use a trackpad and touchscreen every day. During the training session, we noticed that they took the least time to understand its working principle. In the interview, P8 commented, "I can select text fast and accurately.
Although I noticed a bit of overshooting in the cursor positioning, it can be adjusted by tuning CD gain". P17 said, "I can keep my hands down while selecting text. This gives me more comfort".

Figure 6: Mean scores for the NASA-TLX task load questionnaire, which are in the range of 1 to 10. Lower marks are better, except for performance. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: $p < .01$ and *: $p < .05$).

On the other hand, Raycasting was the least preferred technique and led to the lowest task accuracy (participants were also the least satisfied with their performance using this technique). This can be explained by the fact that it was the most physically demanding and the most frustrating (confirming H4). Finally, it was more mentally demanding than Continuous Touch and Spatial Movement. In their comments, participants reported a lack of stability due to the one-handed phone holding posture. Some participants complained that they felt uncomfortable holding the OnePlus 5 phone in one hand as it was a bit large compared to their hand size. This introduced even more jitter for them in Raycasting while double-tapping for target confirmation. P10 commented, "I am sure I will perform Raycasting with fewer errors if I can use my both hands to hold the phone". Moreover, from the logged data, we noticed that they made more mistakes when the target character was positioned inside a word rather than at the beginning or at the end, which was confirmed in the discussion with participants.

As we expected, Discrete Touch was the slowest technique (confirming H1), but it was not the most mentally demanding, as it was only more demanding than Continuous Touch (rejecting H2). It was also more physically demanding than Continuous Touch, but less so than Spatial Movement and Raycasting.
Several participants mentioned that it is excellent for short word-to-word or sentence-to-sentence selections, but not for long text, as multiple swipes are required. They also pointed out that performing mode switching with a long-tap of 700 msec was a bit tricky and that they lost some time there during text selection. Although they got better with it over time, they remained uncertain about performing it successfully in one attempt.

Finally, contrary to our expectation, Spatial Movement was not the most physically demanding technique, as it was less demanding than Raycasting (but more than Continuous Touch and Discrete Touch). It was also less mentally demanding than Raycasting and led to less frustration. However, it led to more frustration, and participants were less satisfied with their performance with this technique, than with Continuous Touch. According to participants, moving the forearm with this technique undoubtedly requires physical effort, but they only needed to move it a very short distance, which was fine for them. From the user interviews, we learned that they did not use much clutching (less than with Continuous Touch). P13 mentioned, "In Spatial Movement, I completed most of the tasks without using clutching at all".

Overall, our results suggest that between Touch and Spatial interactions, it would be better to use Touch for text selection, which confirms findings from Siddhpuria et al. for pointing tasks [49]. Continuous Touch was overall preferred, faster, and less demanding than Discrete Touch, which goes against results from the work by Jain et al. for shape selection [29]. Such a difference can be explained by the fact that text selection involves at least two levels of discretization (characters and words), which makes it mentally demanding. It can also be explained by the high number of words (and even more characters) in a text, contrary to the number of shapes in Jain et al.'s experiment.
This led to a high number of discrete actions for the selection, and thus a higher physical demand. However, surprisingly, most of the participants appreciated the idea of Discrete Touch. If a tactile interface is not available on the handheld device, our results suggest using a spatial interaction technique with a relative mapping, as we did with Spatial Movement. We could not find any differences in time, contrary to the work by Campbell et al. [9], but it leads to fewer errors, which confirms what was found by Vogel and Balakrishnan [52]. It is also less physically and mentally demanding, and leads to less frustration, than an absolute mapping. On the technical side, a spatial interaction technique with a relative mapping can easily be achieved without an external sensor (as was done, for example, by Siddhpuria et al. [49]).

§ 7 LIMITATIONS

There were two major limitations. First, we used an external tracking system, which limited us to a lab study. As a result, it is difficult to understand the social acceptability of each technique until we consider real-world, on-the-go situations. However, technical progress in inside-out tracking${}^{7}$ means that it will soon be possible to have smartphones that can track themselves accurately in 3D space. Second, some of our participants had difficulties holding the phone in one hand because the phone was a bit large for their hands. They mentioned that although they tried to move their thumb faster in the continuous touch and discrete touch interactions, they were not able to do so comfortably for fear of dropping the phone. The larger phone size also influenced their raycasting performance, particularly when they needed to perform a double tap for target confirmation. Hence, using one phone size for all participants was an important constraint in this experiment.
${}^{7}$ https://developers.google.com/ar

§ 8 CONCLUSION AND FUTURE WORK

In this research, we investigated the use of a smartphone as an eyes-free interactive controller to select text in an augmented reality head-mounted display. We proposed four interaction techniques: two that use the tactile surface of the smartphone, Continuous Touch and Discrete Touch, and two that track the device in space, Spatial Movement and Raycasting. We evaluated these four techniques in a text selection task study. The results suggested that techniques using the tactile surface of the device are more suited for text selection than spatial ones, with Continuous Touch being the most efficient. If a tactile surface is not available, it is better to use a spatial technique (i.e., with the device tracked in space) that uses a relative mapping between the user's gesture and the virtual screen, rather than a classic Raycasting technique that uses an absolute mapping.

In this work, we focused on interaction techniques based on smartphone inputs. This allowed us to better understand which approach should be favored in that context. In the future, it would be interesting to explore a more global usage scenario, such as a text editing interface in an AR-HMD using smartphone-based input where users need to perform other interaction tasks, such as text input and command execution, simultaneously. Another direction for future work is to compare phone-based techniques to other input techniques like hand tracking, head/eye gaze, and voice commands. Furthermore, we only considered the standing condition; it would be interesting to study text selection performance while the user is walking.
# Touching 4D Objects with 3D Tactile Feedback

Category: Research

## Abstract

This paper introduces a novel interactive system for presenting 4D objects in 4D space through tactile sensation. A user is able to experience 4D space by actively touching 4D objects with the hands, where the hand holding the controller is physically stimulated at a three-dimensional hypersurface of the 4D object. When a hand is placed on the 3D projection of a 4D object, the force generated at the interface between the hand and the object is calculated by referring to the distance from the viewpoint to each point on the frontal surface of the object. The calculated force on the hand is converted into vibration patterns displayed by the tactile glove. The system supplements 4D information, such as tilt or unevenness, that is difficult to recognize visually.
Keywords: 4D space, 4D visualization, 4D interaction, tactile visualization

Index Terms: Human-centered computing-Visualization-Visualization application domains-Scientific visualization

## 1 INTRODUCTION

With the development of computer graphics technology, various methods for displaying 4D objects have been proposed. Furthermore, VR technology has allowed users to observe 4D objects with a higher degree of immersion. Some researchers have produced new methods of visualization and interaction [1], [5], [9], which are expected to improve the understanding and cognition of 4D space.

While most of these approaches present 4D space using only visual information, humans also use auditory and tactile perception to recognize 3D representations. However, there are few studies focusing on these additional sensations for 4D spatial representations. In order to present richer four-dimensional information to users, auditory and tactile information is necessary as well as visual information. Therefore, we developed a novel system for displaying 4D objects using tactile sensations.

Needless to say, humans cannot touch objects defined in 4D space, since their bodies are situated in 3D space. Tactile sensations are information mapped onto the skin, which is arranged as a 2D surface. We can therefore infer that beings situated in 4D space, if they existed, would touch 4D objects with their 3D skin, and that their tactile sensation would be information mapped onto a 3D hypersurface. Moreover, in general, humans' tactile perception appears to be mapped into their actual 3D space in combination with the body arrangement. Our idea is to provide an experience as if humans were touching 4D objects, by delivering 3D-mapped tactile-like information about 4D objects through their skin.

In this paper, we propose a novel 4D interaction system with tactile representation.
We focus on the aspect of tactile sensation that can represent the unevenness of an object by displaying a pressure sensation. We convert the tactile information into vibration patterns for ease of handling. For presenting the touch information, a tactile glove equipped with vibration actuators was developed.

The introduced system works as follows. First, a 4D object is projected onto a 3D screen in the VR environment. Second, a hand wearing the tactile glove enters the screen. Third, the system calculates the necessary tactile information. Finally, the hand receives tactile sensations such that each part of the hand corresponds to a part of the 3D hypersurface of the 4D hand. A user is able to perceive information about the projected 3D hyperplane, such as its slope and unevenness, as if he/she were touching the 4D objects, in the sense that he/she receives 4D tactile-like information through the tactile sensation.

![01963ea2-b75f-787c-a529-e3c16c0adab7_0_972_360_636_399_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_0_972_360_636_399_0.jpg)

Figure 1: Overall view of the system.

## 2 RELATED WORKS

Several approaches have been introduced for presenting 4D spatial information through tactile or haptic sensations. Hashimoto et al. [2] developed a method that visualizes multidimensional data with haptic representation. They use a haptic device capable of 6-DoF input and force feedback to control a pointer floating in the virtual environment. When the pointer overlaps the displayed data, the value corresponding to the location is expressed as torque. The system also allows a user to explore a four- or higher-dimensional environment by operating a 3D slice with twisting input. Zhang et al. [7], [9] proposed a method using a haptic device for exploring and manipulating knotted spheres and cloth-like objects in 4D space.
In the exploration of the objects, constraining the movement of the device to the projected objects improves the understanding of complex structures. In the manipulation of the objects, haptic feedback is presented by rendering the reaction of the pulling force.

In the study of tactile presentation, Martinez et al. [3] proposed a haptic display with a vibrotactile glove that enables users to perceive the shape of virtual 3D objects. Ohka et al. [6] developed a multimodal display capable of stimulating muscles and tendons of the forearms and tactile receptors in the fingers to present tactile-haptic reality.

These approaches, however, are not intended for simulating 4D tactile stimulation.

## 3 SYSTEM CONFIGURATION

As shown in Figure 1, we developed a 4D visualization system consisting of a personal computer (HP, Intel Core i7-8700 3.20GHz, 8GB RAM, NVIDIA GeForce GTX 1060), a 6-DoF head-mounted display (HMD) with a motion controller (HTC VIVE), a single-board computer (Raspberry Pi Zero), and a tactile glove. A user wears the HMD and holds the motion controller in the hand wearing the tactile glove. The user observes the 3D-projected 4D space in the virtual environment through the HMD. The position of the hand is recognized by the motion controller. When the user touches a 4D object, the tactile glove presents the corresponding tactile sensation to the user's hand. The software is implemented with Unity 2018.3.3f1 and SteamVR Unity Plugin v2.2.0.

Figure 2 shows the tactile glove, which is equipped with a total of 30 vibration motors. Five motors are arranged on the palm side of the hand, five on the back side, and two on the front and the back of each finger, as shown in Figure 3. The motors are driven by electric current, and the strength of the vibration stimuli is controlled by the single-board computer with pulse-width modulation.
![01963ea2-b75f-787c-a529-e3c16c0adab7_1_194_146_632_718_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_1_194_146_632_718_0.jpg)

Figure 3: Arrangement of vibration motors in the tactile glove.

![01963ea2-b75f-787c-a529-e3c16c0adab7_1_225_945_571_376_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_1_225_945_571_376_0.jpg)

Figure 4: Structure of the 4D polytope compared to the 3D polytope.

## 4 4D VISUALIZATION SYSTEM

In this section, we describe the 4D visualization system, which projects 4D objects into a 3D virtual environment. The system is based on an algorithm introduced in McIntosh's 4D Blocks [4].

### 4.1 Definition of 4D objects

In the system, we define 4D objects as 4D convex polytopes. Non-convex polytopes are constructed by combining convex polytopes. As shown in Figure 4, boundaries are composed of 3D objects called cells, and their intersections are divided into three different features, vertices, edges, and faces, depending on the number of dimensions.

### 4.2 Projection

Figure 5 shows the 4D projection model. 4D objects arranged in 4D space are projected onto a 3D screen. Positions of 4D objects are described by 4D vectors in the 4D coordinate system, and their orientations are described by $4 \times 4$ orthogonal matrices. The center of the 3D screen is located at the origin of the 4D space. The 3D screen has a dimension of $2 \times 2 \times 2$, spreading in the ${x}_{w}{y}_{w}{z}_{w}$ plane. Data defined in the 4D world-coordinate system ${x}_{w}{y}_{w}{z}_{w}{w}_{w}$ are converted to data in the 3D screen-coordinate system ${x}_{s}{y}_{s}{z}_{s}$ by removing the $w$-coordinate component. 4D objects are orthographically projected onto the 3D screen by this transformation. We selected the orthographic projection method for its suitability for tactile presentation.
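Under these conventions (a $4 \times 4$ orthogonal orientation matrix, a 4D position vector, and a screen spanning the $x_w y_w z_w$ hyperplane), the orthographic projection amounts to a rigid transform followed by dropping $w$. A minimal NumPy sketch (function and argument names are ours, assuming vertices stored as rows):

```python
import numpy as np

def project_to_screen(vertices, orientation, position):
    """Orthographic 4D -> 3D projection: transform object-space vertices
    (N, 4) to world space with a 4x4 orthogonal orientation matrix and a
    4D position, then drop the w component to land on the 3D screen."""
    world = vertices @ orientation.T + position
    return world[:, :3]

# Example: a 90-degree rotation in the xw-plane sends the x-axis onto the
# w-axis, so a vertex at (1, 0, 0, 0) collapses to the screen center.
theta = np.pi / 2
R_xw = np.eye(4)
R_xw[0, 0] = R_xw[3, 3] = np.cos(theta)
R_xw[0, 3] = -np.sin(theta)
R_xw[3, 0] = np.sin(theta)
```

The example rotation illustrates why rotating a 4D object changes which cells project onto the screen, even though the projection itself is a simple coordinate drop.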
The system also supports perspective projection, which can optionally be selected by the user.

![01963ea2-b75f-787c-a529-e3c16c0adab7_1_932_152_713_566_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_1_932_152_713_566_0.jpg)

Figure 5: 3D and 4D projection model.

![01963ea2-b75f-787c-a529-e3c16c0adab7_1_952_797_665_340_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_1_952_797_665_340_0.jpg)

Figure 6: Examples of drawings with similar drawings in 3D.

### 4.3 Drawing

In the system, face polygons and edge lines are used for drawing. Polygons and lines do not have normals in 4D space, and are drawn as the contours of cells. In order to make it easier to distinguish cells displayed on the 3D screen, the system draws the centers of cells with semi-transparent color-coded polygons, in addition to edges and faces. Figure 6 shows examples of the drawings.

For accurate drawing, back-cell culling and hidden-hypersurface removal are applied. Occluded cells are determined by taking the dot product of the view direction and the cell normals, and the faces that belong to visible cells are selected for drawing.

As the system does not adopt voxel rendering [1], we cannot use depth buffering for hidden-hypersurface removal. Instead, we use clip units, which represent an area where an object obscures other objects behind it. Figure 7 shows an example scene where clip units work. As shown in Figure 8, clip units are identified by the contact subsurfaces of front-facing surfaces and back-facing surfaces with respect to the viewer. Each clip unit is a halfspace whose boundary includes the subsurface, and the boundary is orthogonal to the screen. Intersections of clip units are used for clipping. Polygons and lines to be drawn are clipped by the clip units of objects closer to the viewer.
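Back-cell culling as described reduces to a sign test on a dot product. The sketch below assumes the viewer looks along the $+w_w$ axis and that each cell stores an outward 4D normal; both conventions are our assumptions, since the paper does not fix them:

```python
import numpy as np

def visible_cell_indices(cell_normals, view_dir=(0.0, 0.0, 0.0, 1.0)):
    """Keep cells whose outward 4D normals point against the viewing
    direction; the faces of these cells are the ones submitted for drawing."""
    normals = np.asarray(cell_normals, dtype=float)
    view = np.asarray(view_dir, dtype=float)
    return np.flatnonzero(normals @ view < 0.0)

# For a unit tesseract, the cell with normal -w faces the viewer and is kept;
# the cell with normal +w faces away and is culled.
```

Cells whose normals are orthogonal to the view direction (the silhouette cells) contribute no interior area under orthographic projection and are culled as well.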
![01963ea2-b75f-787c-a529-e3c16c0adab7_2_297_149_432_300_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_2_297_149_432_300_0.jpg)

Figure 7: An example scene where clip units work. We do not draw the intersection of rectangle $B$ and the area hidden by $A$.

![01963ea2-b75f-787c-a529-e3c16c0adab7_2_183_548_653_289_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_2_183_548_653_289_0.jpg)

Figure 8: Calculating a clip unit in a 3D scene. Note that clip units are dimension-independent concepts. In $n\mathrm{D}$, "surfaces" mean $\left( {n - 1}\right)\mathrm{D}$ components, and "subsurfaces" mean $\left( {n - 2}\right)\mathrm{D}$ components.

### 4.4 Control

4D orientations have six degrees of freedom [8]. A user is able to rotate a 4D object by moving the motion controller while pressing the trigger. To naturally map the controller's 6-DoF operation in 3D space to the object's 6-DoF rotation in 4D space, the 3-DoF translational operation (Figure 9(a)) is mapped to the rotations involving the $w$-axis, and the 3-DoF rotational operation (Figure 9(b)) is mapped to the rotations not involving the $w$-axis.

## 5 TACTILE REPRESENTATION

In this section, we introduce the tactile representation method. To explain the concept, we first describe how to express the sensation of touching a 2D-projected 3D object, and then extend it to the 3D projection of a 4D object.

### 5.1 Analogy of touching 3D objects in 2D space

When humans touch a screen where a 3D object is displayed, they can feel the object as if it were situated there. For example, suppose that we move the right palm straight towards a front wall. Figure 10 depicts three different situations. When the wall faces straight ahead, a uniform pressure is applied to the palm (Figure 10(a)). Alternatively, when the wall faces a little to the right, the thumb is stimulated first.
When the hand is pressed against the wall as it is, the wrist bends and the entire palm touches the wall, but the thumb side receives a stronger force than the little-finger side (Figure 10(b)). When the hand touches a corner of the wall, a stronger sensation is perceived at the center of the hand (Figure 10(c)). These differences can be distinguished by the pressure applied to the palm.

The calculation of tactile stimulus generation proceeds as follows. We consider a 3D space where a 3D object is situated, and the space is displayed on a 2D screen. When a user wearing a tactile glove touches the screen, the touched area of the hand is projected onto the surface of the object. Then the distance between the projected hand and the screen is calculated. Figure 11 shows three different situations of a wall, related to the cases presented in Figure 10. If the distance on the left is shorter than that on the right, the wall faces to the right. In this way, the strength of the pressure is calculated by referring to the relative relationship of the distances. Figure 12 shows the results of the calculation.

![01963ea2-b75f-787c-a529-e3c16c0adab7_2_944_178_687_1012_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_2_944_178_687_1012_0.jpg)

Figure 9: Rotating 4D objects by the motion controller operation.

Let $\Omega$ be the set of all actuators installed in the tactile glove. First, the actuators are projected orthogonally through the screen, and the subset $L \subset \Omega$ of actuators whose projections lie on the object is detected. For each projected actuator point $i \in L$, the distance ${l}_{i}$ from the screen is calculated. Then the normalized relative strength ${s}_{i}$ is calculated:

$$
{s}_{i} = \max \left\{ {\alpha \left( {{l}_{min} - {l}_{i}}\right) + 1,0}\right\} , \tag{1}
$$

where ${l}_{\min } = \mathop{\min }\limits_{{k \in L}}{l}_{k}$ and $\alpha$ is a gradient constant. The derivation of the formula is illustrated in Figure 13.
+
+In general, corners give stronger pressure than a flat wall, and apexes give stronger pressure than corners. In the same way, an uneven wall gives stronger pressure than a flat wall. To account for this, the strength of the stimulus ${h}_{i}$ is adaptively adjusted according to the degree of sharpness and inclination:
+
+$$
+{h}_{i} = \left( {1 - \beta \frac{{\sum }_{k \in L}{s}_{k}}{N}}\right) {s}_{i}, \tag{2}
+$$
+
+where $\beta$ is a reducing constant $\left( {0 \leq \beta \leq 1}\right)$ and $N = \# \left\{ {i \in L \mid {s}_{i} > 0}\right\}$ is the number of active actuators. Figure 14 shows examples of the application of the formula.
+
+Finally, the strength ${h}_{i}$ is converted to a voltage value ${V}_{i}$ :
+
+$$
+{V}_{i} = \gamma {h}_{i}, \tag{3}
+$$
+
+where $\gamma$ is a constant.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_3_191_148_642_337_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_3_191_148_642_337_0.jpg)
+
+Figure 10: Various situations when touching a wall.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_3_186_561_653_284_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_3_186_561_653_284_0.jpg)
+
+Figure 11: Calculation of tactile stimulus generation.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_3_187_922_643_229_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_3_187_922_643_229_0.jpg)
+
+Figure 12: Calculation results.
+
+### 5.2 Applying to 4D system
+
+The idea and the calculation method presented in the previous section can be extended to the 3D projection of 4D objects. In the 3D system, a user receives 2D-mapped stimulus patterns through the palm of the hand (Figure 15(a)). In the 4D system, the user receives 3D-mapped data through all the skin of the hand holding the controller (Figure 15(b)). Touching the 2D screen in the 3D system corresponds to putting a hand into the $3\mathrm{D}$ screen in the $4\mathrm{D}$ system. When the hand enters the screen, the stimulus is calculated in the same way as in the 2D case.
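Equations (2) and (3) can be sketched the same way; the values of $\beta$ and $\gamma$ below are arbitrary illustrative choices, and the helper name `adjusted_voltages` is ours, not the paper's.

```python
def adjusted_voltages(s, beta=0.5, gamma=5.0):
    """Equations (2)-(3): damp each relative strength s_i by the mean
    strength over the N active actuators (flat contact -> weaker
    stimulus, sharp features -> relatively stronger), then scale by
    gamma to a drive voltage."""
    N = sum(1 for s_i in s if s_i > 0)        # active actuators
    if N == 0:
        return [0.0] * len(s)
    damp = 1.0 - beta * sum(s) / N            # factor from Eq. (2)
    return [gamma * damp * s_i for s_i in s]  # Eq. (3) applied to h_i
```

With `beta = 0.5`, a fully flat contact (`s = [1, 1, 1]`) is damped to half strength, while a sloped contact (`s = [1, 0.2, 0]`) is damped less, matching the sharpness-dependent behavior visualized in Figure 14.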
+
+Figure 16 shows the model of tactile calculation. The VR environment of the system consists of the $3\mathrm{D}$ screen and a rendering of the user's hand. The 4D object is kept at a distance from the screen so that the hand is prevented from colliding with the screen. When the hand enters the defined $4\mathrm{D}$ space, the relative position of each actuator from the screen ${d}_{i} = \left( {{x}_{{d}_{i}},{y}_{{d}_{i}},{z}_{{d}_{i}}}\right)$ is detected in the 3D screen coordinate system ${x}_{s}{y}_{s}{z}_{s}$ , and the position is converted to the 4D world coordinate system ${x}_{w}{y}_{w}{z}_{w}{w}_{w}$ as
+
+$$
+{c}_{i} = \left( {{x}_{{c}_{i}},{y}_{{c}_{i}},{z}_{{c}_{i}},{w}_{{c}_{i}}}\right) = \left( {{x}_{{d}_{i}},{y}_{{d}_{i}},{z}_{{d}_{i}},0}\right) . \tag{4}
+$$
+
+Then the line ${\mathbf{L}}_{i}$ extending from ${c}_{i}$ towards the positive direction of the ${w}_{w}$ axis is defined:
+
+$$
+{\mathbf{L}}_{i} = {c}_{i} + \left( {0,0,0, t}\right) \left( {t \geq 0}\right) . \tag{5}
+$$
+
+By clipping ${\mathbf{L}}_{i}$ with the $4\mathrm{D}$ object, the intersection point
+
+$$
+{q}_{i} = \left( {{x}_{{q}_{i}},{y}_{{q}_{i}},{z}_{{q}_{i}},{w}_{{q}_{i}}}\right) = \left( {{x}_{{c}_{i}},{y}_{{c}_{i}},{z}_{{c}_{i}},{t}^{\prime }}\right) \tag{6}
+$$
+
+of ${\mathbf{L}}_{i}$ and the object is detected. If ${\mathbf{L}}_{i}$ is not clipped by the object, the corresponding location of the user's hand is not projected on the object.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_3_995_149_580_909_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_3_995_149_580_909_0.jpg)
+
+Figure 14: Visualization of equation 2. $\left( {\beta = {0.5}}\right)$
+
+Here, ${l}_{i}$ is calculated as the distance between ${c}_{i}$ and ${q}_{i}$ :
+
+$$
+{l}_{i} = \sqrt{{\left( {x}_{{c}_{i}} - {x}_{{q}_{i}}\right) }^{2} + {\left( {y}_{{c}_{i}} - {y}_{{q}_{i}}\right) }^{2} + {\left( {z}_{{c}_{i}} - {z}_{{q}_{i}}\right) }^{2} + {\left( {w}_{{c}_{i}} - {w}_{{q}_{i}}\right) }^{2}}.
\tag{7}
+$$
+
+${l}_{i}$ is calculated for all locations, and converted into the strength of the stimulus by applying Equations (1), (2) and (3).
+
+## 6 RESULTS OF TACTILE DISPLAY
+
+The system is able to display multiple objects from any viewpoint; in this section, however, we deal with a single object situated in the center of the screen as a simple example.
+
+To validate the operation visually, we implemented a function that displays the strength of the stimulus calculated at each location. For precise rendering of tactile stimuli, the calculation is conducted at 242 locations arranged in grids on the user's hand. As shown in Figure 17 (a), (b), and (c), the locations and the amplitude of each tactile stimulus are superimposed on the left side of the 3D screen. The displayed stimuli in a cube show the area to be mapped on a virtual hand, so that the user is able to intuitively recognize the tactile sensation given to the hand. Based on the 242 calculated values in the cube, the stimuli at the 30 points corresponding to the motor locations, colored in cyan, are simultaneously sent to the motors for presenting tactile sensation.
+
+As shown in Figure 17, when a user touches a hypercube with its cell facing the front, a uniform stimulus is generated. The magnitude of this stimulus is the same, even when only part of the hand is touched. Note that in all the following figures, the hand is in the same orientation.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_4_244_169_556_458_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_4_244_169_556_458_0.jpg)
+
+Figure 15: Tactile system model.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_4_290_701_443_516_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_4_290_701_443_516_0.jpg)
+
+Figure 16: Tactile calculation model.
+
+If the cell is tilted slightly to the left, the stimulus becomes greater on the right and smaller on the left (Figure 18(a)).
The gradient increases as the slope increases, and eventually the leftmost stimulus disappears (Figure 18(b)).
+
+By manipulating the hypercube, the user is able to touch and recognize the shape intuitively. When a face is facing the user, a wall-shaped tactile sensation is presented as shown in Figure 19 (a). If an edge is situated in the front, the user feels a line-shaped sensation as presented in Figure 19 (b). A vertex presents an isolated strong stimulus, and the sensation gets gradually weaker in the peripheral area as shown in Figure 19 (c).
+
+The user is also able to recognize differences between objects which look the same. The three different objects depicted in Figure 20 (b), (c), (d) appear identical from certain directions (Figure 20(a)), but can be distinguished by touch. Figure 21(a) presents a 4D cone, where the central area gives strong stimuli, and the stimuli gradually decrease in the peripheral area. When touching a flat surface, the stimuli appear uniform as shown in Figure 21 (b). For the hollow shape, the stimuli in the central area are weaker than in the surrounding area as presented in Figure 21 (c).
+
+The tactile system was experienced and evaluated by one subject. Tactile stimuli rendered by vibration patterns were correctly presented as designed, and the subject could distinguish the above differences as well. He had some difficulty recognizing the exact boundaries of faces from the stimuli alone, but this could be compensated for by moving the hand appropriately.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_4_943_147_684_797_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_4_943_147_684_797_0.jpg)
+
+Figure 17: Putting a hand inside the cube representing the surface of a hypercube.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_4_927_1031_701_502_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_4_927_1031_701_502_0.jpg)
+
+Figure 18: Touching the hypercube tilted slightly to the left.
+
+## 7 CONCLUSION AND FUTURE WORK
+
+In this paper, we proposed a system that displays 4D shapes by rendering $4\mathrm{D}$ tactile sensation. $4\mathrm{D}$ cognition can be supplemented by combining visual information with tactile information. In future work, the system should be verified objectively and quantitatively with more subjects.
+
+Several improvements are possible. Since this system uses simple vibration motors for tactile rendering, the resolution of strength and position is not sufficient for subjects to recognize more detailed shapes. Higher-quality actuators would address this problem. Moreover, finely controlled stimuli may be able to express much richer information such as friction and directional force.
+
+The 4D space can also be enriched by introducing four-dimensional physics. In the actual 3D world, objects move and collide according to the laws of physics. By incorporating 4D physics, the tactile experience will become much more realistic.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_5_166_147_681_843_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_5_166_147_681_843_0.jpg)
+
+Figure 19: Touching face, edge and vertex of the hypercube.
+
+## REFERENCES
+
+[1] A. Chu, C. Fu, A. J. Hanson, and P. Heng. GL4D: A GPU-based architecture for interactive 4D visualization. IEEE Transactions on Visualization and Computer Graphics, 15:1587-1594, Oct. 2009. doi: 10.1109/TVCG.2009.147
+
+[2] W. Hashimoto and H. Iwata. Multi-dimensional data browser with haptic sensation. Transactions of the Virtual Reality Society of Japan, 2(3):9-16, Sept. 1997. doi: 10.18974/tvrsjp.2.3_9
+
+[3] J. Martínez, A. García, M. Oliver, J. P. Molina, and P. González. Identifying virtual 3D geometric shapes with a vibrotactile glove. IEEE Computer Graphics and Applications, 36(1):42-51, 2016. doi: 10.1109/MCG.2014.81
+
+[4] J. McIntosh. The four dimensional blocks, 2014. Retrieved June 2020.
+
+[5] T. Miwa, Y. Sakai, and S. Hashimoto.
Learning 4-D spatial representations through perceptual experience with hypercubes. IEEE Transactions on Cognitive and Developmental Systems, 10(2):250-266, June 2018. doi: 10.1109/TCDS.2017.2710420
+
+[6] M. Ohka, K. Kato, T. Fujiwara, and Y. Mitsuya. Virtual object handling using a tactile-haptic display system. In IEEE International Conference on Mechatronics and Automation, 2005, vol. 1, pp. 292-297, 2005. doi: 10.1109/ICMA.2005.1626562
+
+[7] J. Weng and H. Zhang. Perceptualization of geometry using intelligent haptic and visual sensing. In Proc. SPIE, vol. 8654, Feb. 2013. doi: 10.1117/12.2002536
+
+[8] X. Yan, C. Fu, and A. J. Hanson. Multitouching the fourth dimension. Computer, 45(9):80-88, 2012. doi: 10.1109/MC.2012.77
+
+[9] H. Zhang and A. J. Hanson. Shadow-driven 4D haptic visualization. IEEE Transactions on Visualization and Computer Graphics, 13(6):1688-1695, Nov. 2007. doi: 10.1109/TVCG.2007.70593
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_5_938_244_701_931_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_5_938_244_701_931_0.jpg)
+
+Figure 20: 4D objects and their 3D analogues.
+
+![01963ea2-b75f-787c-a529-e3c16c0adab7_5_943_1422_684_569_0.jpg](images/01963ea2-b75f-787c-a529-e3c16c0adab7_5_943_1422_684_569_0.jpg)
+
+Figure 21: Touching 4D objects that look the same when seen from the front.
+
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/HleC7rJGEkE/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/HleC7rJGEkE/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8a4abb83c1fa374f3f9a5f64d27b5d01d332471e
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/HleC7rJGEkE/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,219 @@
+§ TOUCHING 4D OBJECTS WITH 3D TACTILE FEEDBACK
+
+Category: Research
+
+§ ABSTRACT
+
+This paper introduces a novel interactive system for presenting $4\mathrm{D}$ objects in 4D space through tactile sensation. A user is able to experience 4D space by actively touching $4\mathrm{D}$ objects with the hands, where the hand holding the controller is physically stimulated as if by the three-dimensional hypersurface of the $4\mathrm{D}$ object. When a hand is placed on the $3\mathrm{D}$ projection of a $4\mathrm{D}$ object, the force generated at the interface between the hand and the object is calculated by referring to the distance from the viewpoint to each point on the frontal surface of the object. The calculated force on the hand is converted into vibration patterns displayed by the tactile glove. The system supplements $4\mathrm{D}$ information such as tilt or unevenness, which is difficult to recognize visually.
+
+Keywords: 4D space, 4D visualization, 4D interaction, tactile visualization
+
+Index Terms: Human-centered computing-Visualization-Visualization application domains-Scientific visualization
+
+§ 1 INTRODUCTION
+
+With the development of computer graphics technology, various methods for displaying 4D objects have been proposed. Furthermore, VR technology has allowed users to observe 4D objects with a higher degree of immersion. Some researchers have produced new methods of visualization and interaction [1], [5], [9], which are expected to improve the understanding and cognition of $4\mathrm{D}$ space.
+
+While most of these approaches present 4D space using only visual information, humans also use auditory and tactile perception to recognize $3\mathrm{D}$ representations. However, there are few studies focusing on these additional sensations for $4\mathrm{D}$ spatial representations. In order to present richer four-dimensional information to users, auditory and tactile information is necessary as well as visual information. Therefore, we develop a novel system for displaying 4D objects using tactile sensations.
+
+Needless to say, humans cannot touch objects defined in 4D space, since their bodies are situated in 3D space. Tactile sensations are information mapped onto the skin, which is arranged as a 2D surface. We can therefore infer that beings situated in $4\mathrm{D}$ space, if they existed, would touch 4D objects with their 3D skin, and their tactile sensation would be information mapped onto a 3D hypersurface. Moreover, in general, humans' tactile perception appears to be mapped into their actual 3D space in combination with the arrangement of the body. Our idea is to create the experience of touching $4\mathrm{D}$ objects by delivering $3\mathrm{D}$ -mapped tactile-like information about $4\mathrm{D}$ objects through the skin.
+
+In this paper, we propose a novel $4\mathrm{D}$ interaction system with tactile representation.
We focus on the aspect of tactile sensation that can represent the unevenness of an object through pressure sensation. We convert the tactile information into vibration patterns for ease of handling. To present the touch information, a tactile glove equipped with vibration actuators is developed.
+
+The introduced system works as follows. First, a 4D object is projected onto a 3D screen in the VR environment. Second, a hand wearing the tactile glove enters the screen. Third, the system calculates the necessary tactile information. Finally, the hand receives tactile sensations such that each part of the hand corresponds to a part of the $3\mathrm{D}$ hypersurface of the $4\mathrm{D}$ object. A user is able to perceive properties of the projected 3D hypersurface such as slope and unevenness as if he/she were touching the 4D objects, in the sense that he/she receives 4D tactile-like information through the tactile sensation.
+
+ < g r a p h i c s >
+
+Figure 1: Overall view of the system.
+
+§ 2 RELATED WORKS
+
+Several approaches have been introduced for presenting $4\mathrm{D}$ spatial information through tactile or haptic sensations. Hashimoto et al. [2] developed a method that visualizes multidimensional data with haptic representation. They use a haptic device capable of 6-DoF input and force feedback to control a pointer floating in the virtual environment. When the pointer overlaps the displayed data, the value corresponding to the location is expressed as torque. The system also allows a user to explore a four- or higher-dimensional environment by operating the $3\mathrm{D}$ slice with a twisting input. Zhang et al. [7], [9] proposed a method using a haptic device for exploring and manipulating knotted spheres and cloth-like objects in 4D space. In the exploration of the objects, constraining the movement of the device to the projected objects improves the understanding of complex structure.
In the manipulation of the objects, haptic feedback is presented by rendering the reaction to pulling forces.
+
+In the study of tactile presentation, Martínez et al. [3] proposed a haptic display by introducing a vibrotactile glove that enables users to perceive the shape of virtual 3D objects. Ohka et al. [6] developed a multimodal display capable of stimulating muscles and tendons of the forearms and tactile receptors in the fingers for presenting tactile-haptic reality.
+
+These approaches, however, are not intended for simulating $4\mathrm{D}$ tactile stimulation.
+
+§ 3 SYSTEM CONFIGURATION
+
+As shown in Figure 1, we develop a 4D visualization system consisting of a personal computer (HP, Intel Core i7-8700 3.20GHz, 8GB RAM, NVIDIA GeForce GTX 1060), a 6-DoF head-mounted display (HMD) with a motion controller (HTC VIVE), a single-board computer (Raspberry Pi Zero), and a tactile glove. A user wears the HMD and holds the motion controller in the hand wearing the tactile glove. The user observes 3D-projected 4D space in the virtual environment through the HMD. The position of the hand is recognized by the motion controller. When the user touches a 4D object, the tactile glove presents the corresponding tactile sensation to the user's hand. The software is implemented with Unity 2018.3.3f1 and SteamVR Unity Plugin v2.2.0.
+
+Figure 2 shows the tactile glove, equipped with a total of 30 vibration motors. Five motors are arranged on the palm side of the hand, five on the back side, and two on the front and the back of each finger, as shown in Figure 3. The motors are driven by electric current, and the strength of the vibration stimuli is controlled by the single-board computer with pulse-width modulation.
+
+ < g r a p h i c s >
+
+Figure 3: Arrangement of vibration motors in tactile glove.
+
+ < g r a p h i c s >
+
+Figure 4: Structure of the 4D polytope compared to the 3D polytope.
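The last step, driving a motor's vibration strength by pulse-width modulation, can be sketched as below. The 8-bit duty-cycle range and maximum drive voltage are illustrative assumptions of ours, not specifications from the paper; on the actual Raspberry Pi Zero the resulting value would be handed to a PWM library.

```python
def pwm_duty(V, V_max=3.3, levels=255):
    """Map a computed drive voltage V (see Equation (3) in Section 5.1)
    to an integer PWM duty-cycle value in [0, levels]. V_max and the
    8-bit resolution are illustrative assumptions about the motor
    driver, not values from the paper."""
    V = min(max(V, 0.0), V_max)      # clamp to the drivable range
    return round(V / V_max * levels)
```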
+
+§ 4 4D VISUALIZATION SYSTEM
+
+In this section, we describe the $4\mathrm{D}$ visualization system, which projects 4D objects into the 3D virtual environment. The system is based on an algorithm introduced in McIntosh's 4D Blocks [4].
+
+§ 4.1 DEFINITION OF 4D OBJECTS
+
+In the system, we define 4D objects as 4D convex polytopes. Non-convex polytopes are constructed by combining convex polytopes. As shown in Figure 4, boundaries are composed of 3D objects called cells, and their intersections are divided into three different features: vertices, edges, and faces, depending on the number of dimensions.
+
+§ 4.2 PROJECTION
+
+Figure 5 shows the 4D projection model. 4D objects arranged in 4D space are projected onto a $3\mathrm{D}$ screen. Positions of $4\mathrm{D}$ objects are described by $4\mathrm{D}$ vectors in the $4\mathrm{D}$ coordinate system, and their orientations are described by $4 \times 4$ orthogonal matrices. The center of the 3D screen is located at the origin of the 4D space. The $3\mathrm{D}$ screen has a dimension of $2 \times 2 \times 2$ , spreading in the ${x}_{w}{y}_{w}{z}_{w}$ hyperplane. Data defined in the $4\mathrm{D}$ world-coordinate system ${x}_{w}{y}_{w}{z}_{w}{w}_{w}$ are converted to data in the $3\mathrm{D}$ screen-coordinate system ${x}_{s}{y}_{s}{z}_{s}$ by removing the $w$ -coordinate component. 4D objects are orthogonally projected onto the 3D screen by this transformation. We selected orthographic projection for its suitability for tactile presentation. The system also supports perspective projection, which the user can select when desired.
+
+ < g r a p h i c s >
+
+Figure 5: 3D and 4D projection model.
+
+ < g r a p h i c s >
+
+Figure 6: Examples of drawings with similar drawings in 3D.
+
+§ 4.3 DRAWING
+
+In the system, face polygons and edge lines are used for drawing. Polygons and lines do not have normals in $4\mathrm{D}$ space, and are drawn as contours of cells.
In order to make it easier to distinguish cells displayed on the 3D screen, the system draws the centers of cells with semi-transparent color-coded polygons, in addition to edges and faces. Figure 6 shows examples of drawings.
+
+For accurate drawing, back-cell culling and hidden-hypersurface removal are applied. Back-facing cells are determined by taking the dot product of the view direction and cell normals, and faces that belong to visible cells are selected for drawing.
+
+As the system doesn't adopt voxel rendering [1], we cannot use depth buffering for hidden-hypersurface removal. Instead, we use clip units, each of which represents a region in which an object obscures other objects behind it. Figure 7 shows an example scene where clip units work. As shown in Figure 8, clip units are identified by the contact subsurfaces between front-facing and back-facing surfaces with respect to the viewer. Each clip unit is a halfspace whose boundary includes the subsurface, and the boundary is orthogonal to the screen. Intersections of clip units are used for clipping. Polygons and lines being drawn are clipped by the clip units of objects closer to the viewer.
+
+ < g r a p h i c s >
+
+Figure 7: An example scene where clip units work. We don't draw the intersection of rectangle $B$ and the area hidden by $A$ .
+
+ < g r a p h i c s >
+
+Figure 8: Calculating a clip unit in a 3D scene. Note that clip units are dimension-independent concepts. In $n\mathrm{D}$, "surfaces" mean $\left( {n - 1}\right) \mathrm{D}$ components, and "subsurfaces" mean $\left( {n - 2}\right) \mathrm{D}$ components.
+
+§ 4.4 CONTROL
+
+4D orientations have six degrees of freedom [8]. A user is able to rotate a 4D object by moving the motion controller while pressing the trigger.
To make the controller's 6-DoF operation in 3D space correspond naturally to the object's 6-DoF rotation in 4D space, the 3-DoF translational operation (Figure 9(a)) is mapped to rotations involving the $w$ -axis, and the 3-DoF rotational operation (Figure 9(b)) is mapped to rotations not involving the $w$ -axis.
+
+§ 5 TACTILE REPRESENTATION
+
+In this section, we introduce a tactile representation method. To explain the concept, we first describe how to express the touching sensation of a 2D-projected 3D object, and then extend it to the 3D projection of a 4D object.
+
+§ 5.1 ANALOGY OF TOUCHING 3D OBJECTS IN 2D SPACE
+
+When humans touch a screen where a 3D object is displayed, they feel the object as if it were actually situated there. For example, suppose that we move the right palm straight towards the front wall. Figure 10 depicts three different situations. When the wall faces straight ahead, a uniform pressure will be applied on the palm (Figure 10(a)). Alternatively, when the wall faces a little to the right, the thumb will first be stimulated. When the hand is pressed further against the wall, the wrist will bend and the entire palm will touch the wall, but the thumb side will receive a stronger force than the little-finger side (Figure 10(b)). When the hand touches a corner of the wall, a stronger sensation is perceived at the center of the hand (Figure 10(c)). These differences can be distinguished by the pressure applied to the palm.
+
+The calculation of tactile stimulus generation proceeds as follows. We consider a 3D space in which a 3D object is situated, and the space is displayed on a 2D screen. When a user wearing a tactile glove touches the screen, the touched area of the hand is projected onto the surface of the object. Then the distance between the projected hand and the screen is calculated. Figure 11 shows the three different situations of a wall, corresponding to the cases presented in Figure 10.
If the distance to the left is shorter than that to the right, the wall faces to the right. In this way, the strength of pressure is calculated from the relative relationship of the distances. Figure 12 shows the results of the calculation.
+
+ < g r a p h i c s >
+
+Figure 9: Rotating 4D objects by the motion controller operation.
+
+Let $\Omega$ be the set of all actuators installed in the tactile glove. First, the actuators are projected orthogonally through the screen, and the subset $L \subset \Omega$ of actuators projected onto the object is detected. For each projected actuator $i \in L$ , the distance ${l}_{i}$ from the screen is calculated. Then the normalized relative strength ${s}_{i}$ is calculated:
+
+$$
+{s}_{i} = \max \left\{ {\alpha \left( {{l}_{min} - {l}_{i}}\right) + 1,0}\right\} , \tag{1}
+$$
+
+where ${l}_{\min } = \mathop{\min }\limits_{{k \in L}}{l}_{k}$ and $\alpha$ is a gradient constant. The formula is derived as shown in Figure 13.
+
+In general, corners give stronger pressure than a flat wall, and apexes give stronger pressure than corners. In the same way, an uneven wall gives stronger pressure than a flat wall. To account for this, the strength of the stimulus ${h}_{i}$ is adaptively adjusted according to the degree of sharpness and inclination:
+
+$$
+{h}_{i} = \left( {1 - \beta \frac{{\sum }_{k \in L}{s}_{k}}{N}}\right) {s}_{i}, \tag{2}
+$$
+
+where $\beta$ is a reducing constant $\left( {0 \leq \beta \leq 1}\right)$ and $N = \# \left\{ {i \in L \mid {s}_{i} > 0}\right\}$ is the number of active actuators. Figure 14 shows examples of the application of the formula.
+
+Finally, the strength ${h}_{i}$ is converted to a voltage value ${V}_{i}$ :
+
+$$
+{V}_{i} = \gamma {h}_{i}, \tag{3}
+$$
+
+where $\gamma$ is a constant.
+
+ < g r a p h i c s >
+
+Figure 10: Various situations when touching a wall.
+
+ < g r a p h i c s >
+
+Figure 11: Calculation of tactile stimulus generation.
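The chain of Equations (1)-(3) can be sketched end-to-end as follows; the constants and the helper name `stimulus_voltages` are illustrative placeholders, not values from the paper.

```python
def stimulus_voltages(l, alpha=1.0, beta=0.5, gamma=5.0):
    """Sketch of Equations (1)-(3): distances l_i of the projected
    actuators -> relative strengths s_i -> adaptively damped strengths
    h_i -> drive voltages V_i."""
    l_min = min(l)
    s = [max(alpha * (l_min - l_i) + 1.0, 0.0) for l_i in l]  # Eq. (1)
    N = sum(1 for s_i in s if s_i > 0)    # number of active actuators
    damp = 1.0 - beta * sum(s) / N if N else 0.0
    h = [damp * s_i for s_i in s]         # Eq. (2)
    return [gamma * h_i for h_i in h]     # Eq. (3)
```

A flat wall (all distances equal) yields a uniform, damped stimulus, while a corner (one actuator much closer than the rest) concentrates the stimulus on that actuator.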
+
+ < g r a p h i c s >
+
+Figure 12: Calculation results.
+
+§ 5.2 APPLYING TO 4D SYSTEM
+
+The idea and the calculation method presented in the previous section can be extended to the 3D projection of 4D objects. In the 3D system, a user receives 2D-mapped stimulus patterns through the palm of the hand (Figure 15(a)). In the 4D system, the user receives 3D-mapped data through all the skin of the hand holding the controller (Figure 15(b)). Touching the 2D screen in the 3D system corresponds to putting a hand into the $3\mathrm{D}$ screen in the $4\mathrm{D}$ system. When the hand enters the screen, the stimulus is calculated in the same way as in the 2D case.
+
+Figure 16 shows the model of tactile calculation. The VR environment of the system consists of the $3\mathrm{D}$ screen and a rendering of the user's hand. The 4D object is kept at a distance from the screen so that the hand is prevented from colliding with the screen. When the hand enters the defined $4\mathrm{D}$ space, the relative position of each actuator from the screen ${d}_{i} = \left( {{x}_{{d}_{i}},{y}_{{d}_{i}},{z}_{{d}_{i}}}\right)$ is detected in the 3D screen coordinate system ${x}_{s}{y}_{s}{z}_{s}$ , and the position is converted to the 4D world coordinate system ${x}_{w}{y}_{w}{z}_{w}{w}_{w}$ as
+
+$$
+{c}_{i} = \left( {{x}_{{c}_{i}},{y}_{{c}_{i}},{z}_{{c}_{i}},{w}_{{c}_{i}}}\right) = \left( {{x}_{{d}_{i}},{y}_{{d}_{i}},{z}_{{d}_{i}},0}\right) . \tag{4}
+$$
+
+Then the line ${\mathbf{L}}_{i}$ extending from ${c}_{i}$ towards the positive direction of the ${w}_{w}$ axis is defined:
+
+$$
+{\mathbf{L}}_{i} = {c}_{i} + \left( {0,0,0,t}\right) \left( {t \geq 0}\right) . \tag{5}
+$$
+
+By clipping ${\mathbf{L}}_{i}$ with the $4\mathrm{D}$ object, the intersection point
+
+$$
+{q}_{i} = \left( {{x}_{{q}_{i}},{y}_{{q}_{i}},{z}_{{q}_{i}},{w}_{{q}_{i}}}\right) = \left( {{x}_{{c}_{i}},{y}_{{c}_{i}},{z}_{{c}_{i}},{t}^{\prime }}\right) \tag{6}
+$$
+
+of ${\mathbf{L}}_{i}$ and the object is detected.
If ${\mathbf{L}}_{i}$ is not clipped by the object, the corresponding location of the user's hand is not projected on the object.
+
+ < g r a p h i c s >
+
+Figure 14: Visualization of equation 2. $\left( {\beta = {0.5}}\right)$
+
+Here, ${l}_{i}$ is calculated as the distance between ${c}_{i}$ and ${q}_{i}$ :
+
+$$
+{l}_{i} = \sqrt{{\left( {x}_{{c}_{i}} - {x}_{{q}_{i}}\right) }^{2} + {\left( {y}_{{c}_{i}} - {y}_{{q}_{i}}\right) }^{2} + {\left( {z}_{{c}_{i}} - {z}_{{q}_{i}}\right) }^{2} + {\left( {w}_{{c}_{i}} - {w}_{{q}_{i}}\right) }^{2}}. \tag{7}
+$$
+
+${l}_{i}$ is calculated for all locations, and converted into the strength of the stimulus by applying Equations (1), (2) and (3).
+
+§ 6 RESULTS OF TACTILE DISPLAY
+
+The system is able to display multiple objects from any viewpoint; in this section, however, we deal with a single object situated in the center of the screen as a simple example.
+
+To validate the operation visually, we implemented a function that displays the strength of the stimulus calculated at each location. For precise rendering of tactile stimuli, the calculation is conducted at 242 locations arranged in grids on the user's hand. As shown in Figure 17 (a), (b), and (c), the locations and the amplitude of each tactile stimulus are superimposed on the left side of the 3D screen. The displayed stimuli in a cube show the area to be mapped on a virtual hand, so that the user is able to intuitively recognize the tactile sensation given to the hand. Based on the 242 calculated values in the cube, the stimuli at the 30 points corresponding to the motor locations, colored in cyan, are simultaneously sent to the motors for presenting tactile sensation.
+
+As shown in Figure 17, when a user touches a hypercube with its cell facing the front, a uniform stimulus is generated. The magnitude of this stimulus is the same, even when only part of the hand is touched.
Note that in all the following figures, the hand is in the same orientation.
+
+ < g r a p h i c s >
+
+Figure 15: Tactile system model.
+
+ < g r a p h i c s >
+
+Figure 16: Tactile calculation model.
+
+If the cell is tilted slightly to the left, the stimulus becomes greater on the right and smaller on the left (Figure 18(a)). The gradient increases as the slope increases, and eventually the leftmost stimulus disappears (Figure 18(b)).
+
+By manipulating the hypercube, the user is able to touch and recognize the shape intuitively. When a face is facing the user, a wall-shaped tactile sensation is presented as shown in Figure 19 (a). If an edge is situated in the front, the user feels a line-shaped sensation as presented in Figure 19 (b). A vertex presents an isolated strong stimulus, and the sensation gets gradually weaker in the peripheral area as shown in Figure 19 (c).
+
+The user is also able to recognize differences between objects which look the same. The three different objects depicted in Figure 20 (b), (c), (d) appear identical from certain directions (Figure 20(a)), but can be distinguished by touch. Figure 21(a) presents a 4D cone, where the central area gives strong stimuli, and the stimuli gradually decrease in the peripheral area. When touching a flat surface, the stimuli appear uniform as shown in Figure 21 (b). For the hollow shape, the stimuli in the central area are weaker than in the surrounding area as presented in Figure 21 (c).
+
+The tactile system was experienced and evaluated by one subject. Tactile stimuli rendered by vibration patterns were correctly presented as designed, and the subject could distinguish the above differences as well. He had some difficulty recognizing the exact boundaries of faces from the stimuli alone, but this could be compensated for by moving the hand appropriately.
+
+ < g r a p h i c s >
+
+Figure 17: Putting a hand inside the cube representing the surface of a hypercube.
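As a concrete illustration of the ray-casting step behind Equations (4)-(7) in Section 5.2: for an axis-aligned convex object the clipping reduces to a simple extent test. The tesseract placement below (`w_lo`, `w_hi`, `half`) is an illustrative assumption of ours, not geometry taken from the paper.

```python
def hypercube_distance(c_xyz, w_lo=1.0, w_hi=3.0, half=1.0):
    """Equations (4)-(7) for an axis-aligned tesseract spanning
    [-half, half]^3 in x, y, z and [w_lo, w_hi] in w. The actuator
    position c_i = (x, y, z, 0) is extended along the +w axis
    (Eq. (5)); because x, y and z are unchanged along the ray, the
    distance of Eq. (7) is simply the w-coordinate t' of the first
    intersection. Returns None when the ray L_i misses the object."""
    x, y, z = c_xyz
    if max(abs(x), abs(y), abs(z)) <= half:  # ray passes through the x,y,z extent
        return w_lo  # first hit lands on the near cell: l_i = t' = w_lo
    return None      # L_i is not clipped; this hand location misses the object
```

This reproduces the behavior described for Figure 17: every actuator projected onto the front cell receives the same distance, hence a uniform stimulus.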
+ + < g r a p h i c s > + +Figure 18: Touching the hypercube tilted slightly to the left. + +§ 7 CONCLUSION AND FUTURE WORK + +In this paper, we proposed a system that displays 4D shapes by rendering $4\mathrm{D}$ tactile sensations. $4\mathrm{D}$ cognition can be supplemented by combining visual information with tactile information. In future work, the system should be verified objectively and quantitatively with more subjects. + +Several improvements are possible. Since this system uses simple vibration motors for tactile rendering, the resolution of strength and position is not sufficient for subjects to recognize more detailed shapes. Higher-quality actuators would address this problem. Moreover, finely controlled stimuli may be able to express much richer information, such as friction and directional force. + +The 4D space can be enriched by introducing four-dimensional physics. In the actual 3D world, an object moves on collision according to the laws of physics. By combining the system with 4D physics, the tactile experience will become much more realistic. + + < g r a p h i c s > + +Figure 19: Touching face, edge and vertex of the hypercube. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/IW70F9A__z/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/IW70F9A__z/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..668f826f1b6476d43758c69b7e0fd9283b0aedf2 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/IW70F9A__z/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,447 @@ +# How Tall is that Bar Chart?
Virtual Reality, Distance Compression and Visualizations + +1st Author Name + +Affiliation + +City, Country + +e-mail address + +2nd Author Name + +Affiliation + +City, Country + +e-mail address + +3rd Author Name + +Affiliation + +City, Country + +e-mail address + +![01963e9d-24a5-7959-931f-69dfc7f29684_0_277_529_1264_551_0.jpg](images/01963e9d-24a5-7959-931f-69dfc7f29684_0_277_529_1264_551_0.jpg) + +Figure 1. The virtual environments used in Study 1, each with differing levels of depth cues. Participants could look around with the HMD in VR and used the mouse to look around in the screen virtual environment conditions. + +## ABSTRACT + +As VR technology becomes more available, VR applications will be increasingly used to present information visualizations. While data visualization in VR is an interesting topic, there remain questions about how effective or accurate such visualizations can be. One known phenomenon in VR environments is that people tend to unconsciously compress or underestimate distances. However, it is unknown if or how this effect alters the perception of data visualizations in VR. To this end, we replicate portions of Cleveland and McGill's foundational perceptual visualization studies in VR. Through a series of three studies we find that distance compression does negatively affect estimations of actual lengths (heights of bars), but does not appear to impact relative comparisons. Additionally, by replicating the position-angle experiments, we find that (as with traditional 2D visualizations) people are better at relative length evaluations than relative angles. Finally, based on these findings, we develop a series of best practices for presenting data visualizations in a VR environment. + +## Author Keywords + +Visualization; VR. + +## ACM Classification Keywords + +H.5.m.
Information interfaces and presentation (e.g., HCI): Miscellaneous + +## INTRODUCTION + +As Virtual Reality (VR) technology continues to be developed and expanded, workplace tasks, such as viewing information visualizations, are becoming more likely to be executed in a VR environment. While much of the research around traditional screen-based visualizations likely applies to VR, it is unclear how specific VR-related phenomena might alter how effective or accurate these visualizations are. Of particular note, it has been shown that in VR environments people tend to unconsciously underestimate distances, in a phenomenon called distance compression $\left\lbrack {1,9,{10},{22},{26},{28},{33},{34},{39},{40},{45}}\right\rbrack$ . However, to this point, designers of VR visualizations have not had any guidance about how distance compression will alter visualization effectiveness or user accuracy. For example, even a simple bar chart uses the heights/lengths of the bars to represent data, and it is unclear how distance compression will alter one's ability to measure or compare the lengths of the bars. + +To address this problem, we looked to foundational work by Cleveland and McGill, which examined graphical perception of paper-based visualizations [6] and has also been replicated in a digital context [14]. We performed three studies, replicating the position-length and the position-angle experiments in a VR environment. Our first study, using the bar chart position-length experiment, provided bar charts in virtual environments both on the screen and in VR and asked participants to measure actual distances (i.e., the bar is $1\mathrm{\;m}$ tall) and relative distances (i.e., bar $A$ is ${80}\%$ as tall as bar $B$ ). We explored the suggestion that varying degrees of depth cues could reduce distance compression $\left\lbrack {{10},{12}}\right\rbrack$ , as well as bar charts of varying scales.
In a second study, rather than depth cues, we looked at perspective, providing participants different ways to move around and look at the variously scaled charts in VR. Finally, in the last experiment, we implemented the position-angle experiment, looking at how actual lengths/angles and relative lengths/angles were measured in bar charts, scatter plots, and pie charts. + +Through these studies, we make five contributions. First, we confirm the existence of distance compression in VR visualizations, but find that it also applies similarly to screen-based environments. We show that distance compression does negatively affect actual length/distance measurements, but may only have a small or negligible effect on relative comparisons. Depth cues had no discernible effect on the accuracy of measurements, but do appear to affect one's perception of one's ability to be accurate. As in traditional 2D visualizations, people are better at relative length evaluations than relative angles. Finally, we provide a set of design guidelines to inform the implementation and creation of effective and useful visualizations in VR. + +## RELATED WORK + +## VR and Visualization + +VR visualizations fall under the field of 'Immersive Analytics', though this also refers to augmented reality (AR) and Mixed Reality (MR) visualizations [7]. In the late 90s people were beginning to talk about VR and visualization, even though systems of the day (mostly VR Caves) did not quite provide adequate capabilities [5,21,31,32,36]. More recent work provides concrete examples in the environmental $\left\lbrack {{15},{16},{27},{30}}\right\rbrack$ , medical $\left\lbrack {{11},{19},{47}}\right\rbrack$ , and archeology $\left\lbrack {4,{24},{38}}\right\rbrack$ domains. A notable example is ImAxes [8], a dynamic system where users draw and connect axes in midair in VR, allowing for multiple dynamic chart types.
+ +When interacting with a VR visualization, one should keep in mind that multiple views and input modalities may not transfer from the screen to VR [23]. Furthermore, the affordances of an interaction may be different in VR [2]. For example, Simpson et al. found that walking around the dataset in VR was not better than using a controller to rotate it [37]. Cybersickness must also be considered, as certain design choices, such as using a controller to move around, may work on a screen but quickly induce nausea in VR $\left\lbrack {{48},{49}}\right\rbrack$ . There may be many ways to combat this; for example, Cliquet et al. suggest allowing the user to sit [7]. + +## Visualization and Perception + +Cleveland and McGill's foundational work [6] showed that people are better at estimating lengths than areas, and better at estimating areas than volumes. Furthermore, people are better at position estimations (i.e., a scatter plot) than angle estimations (i.e., a pie chart). These results have been confirmed and replicated more recently by Heer and Bostock with a Mechanical Turk based study [14]. + +## 2D vs 3D + +The usefulness and effectiveness of 2D visualizations have often been compared to 3D. 2D is generally considered the best approach $\left\lbrack {{35},{43}}\right\rbrack$ , especially for tasks that require precision [18] and tasks that suffer from perspective distortion (e.g., distance estimation) [17]. However, 3D visualizations can still be useful, particularly when the data has a high level of detail, structure, and/or complexity (e.g., 3+ dimensions) $\left\lbrack {{17},{18},{25},{44}}\right\rbrack$ or when the task involves exploring 3D representations of the real world (e.g., terrain or other real world objects) $\left\lbrack {{17},{18}}\right\rbrack$ . 3D may also prove useful by providing ways to explore overlap in network graphs $\left\lbrack {{13},{41},{42}}\right\rbrack$ .
+ +In VR, most visualizations can now be considered 3D visualizations - even though they might be mapped onto a plane, the ability to look at them from multiple angles might cause occlusion or other problems of perspective. However, we bring up the debate between 2D and 3D graphs because there is some indication that the binocular depth cues provided by modern VR tip the equation in favor of $3\mathrm{D}$ in some situations. When only considering scatterplots, for example, providing binocular cues has shown that $3\mathrm{D}$ visualizations tend to win over $2\mathrm{D}\left\lbrack {{25},{29},{44}}\right\rbrack$ , although this is not always the case $\left\lbrack {{35},{43}}\right\rbrack$ . + +## VR and Perception + +People tend to underestimate distances in VR when using a Head Mounted Display (HMD) [1,9,10,26,28,33,34,39,40,45]. This effect exists even when the VR environment is very similar to, or a recording of, the real world [28,34,40], and may be partially caused by the limited field of vision of the HMD [9,22]. Physical factors, such as the weight of the HMD, may also be important [45], especially as the effort perceived to be necessary to walk a particular distance (e.g., if one were wearing a heavy backpack) can affect the perception of that distance [46]. The parameters which affect distance compression have been investigated but are not fully understood. Furthermore, it is unclear how distance compression may affect visualization tasks like comparing two bars in a bar chart. + +One possible cause might be a lack of realistic depth cues $\left\lbrack {{10},{12}}\right\rbrack$ in VR. Some of these cues, such as light, texture, shape, luminance, linearity of light, object occlusion, motion, etc., can be manipulated to be more or less available in a virtual environment.
Unlike a 2D screen, VR headsets provide binocular cues because they render a separate image for each eye (although at least one study has suggested that monocular/binocular cues alone are not responsible for distance compression [9]). In fact, current VR headsets, such as the HTC Vive, allow most depth cues that are available in the real world to be implemented in VR [10]. Our first study looks at how the fidelity of depth cues might change distance compression when looking at a visualization. + +It has also been suggested that distance compression might be affected by perceptually different distance zones. Armbrüster et al. suggest that distance compression is smaller for objects in peripersonal space $\left( { < 1\mathrm{\;m}}\right)$ where an object is within arm's reach [1]. Cutting calls this zone personal space $\left( { < {1.5}\mathrm{\;m}}\right)$ , splitting up larger distances into action space ( $< {30}\mathrm{\;m}$ , interaction of some sort is feasible) and vista space (${30}\mathrm{\;m}+$ , further than one would expect to be able to act) [10]. To further investigate these categories of distance, the first two studies use three different sizes of charts, roughly corresponding to personal, action, and vista bar heights. + +## STUDY 1 - DEPTH CUES AND SCALE + +The chronic underestimation of distances is troublesome for VR, particularly considering that visualizations tend to encode data using absolute or relative distances. Even a simple bar chart uses length to communicate data to the viewer. + +Our first study aimed to confirm that visualizations in VR are compromised by distance underestimation. Since it has been suggested that more depth cues $\left\lbrack {{10},{12}}\right\rbrack$ could lessen underestimation, we designed three virtual environments with differing levels of depth cues. + +Few Depth Cues: Objects had a consistent luminance and no texture. Shadows were disabled and the sky was a medium gray.
A simple textured floor was provided, as floating over a void was nauseating in VR. This condition represented a simple chart without embellishments. + +Some Depth Cues: Bars now had a slightly crumpled paper texture and responded to the lights in the scene, casting shadows. The floor contained an arbitrary grid, and was also textured slightly. Aerial perspective was applied, allowing distant objects to fade somewhat into the sunset sky. We consider this a 'best practices' chart, with minimal added embellishments, all of which directly contribute depth cues to the environment. + +Rich Depth Cues: In addition to texture, luminance, and aerial perspective, the scene was augmented with objects that could be used to determine relative sizes. Trees, a light post, and some bushes provided general cues about scale. A house, car, and park bench were also in the scene, as these have relatively standard sizes. Similarly, a skyscraper was in the scene, as the floors of a building also have a relatively standard height. These objects were not immediately in view; the participant would have to look at them directly. Grass and flowers on the ground provided cues of relative density. This condition, while being very rich with depth cues, was also a bit extreme; one can imagine that not every visualization has a place for trees, cars, and buildings (Figure 1). However, Bateman et al. [3] showed that embellishments that add context to the visualization can improve memorability, and given the prevalence of infographics, it is not impossible that some visualizations might provide relevant, contextual objects (e.g., a visualization about deforestation could contain trees). + +We were also interested in measuring this effect at multiple scales. The corresponding chart heights and task-specific bar lengths can be seen in Table 1. In all conditions, the participant viewed the visualization from $4\mathrm{\;m}$ back.
+ +Personal Scale: The entire visualization could be seen at one time when looking straight ahead, without looking significantly up or down. + +House Scale: The larger visualization required the participant to look up somewhat to see the entire chart. + +Skyscraper Scale: The visualization was extremely tall, requiring the viewer to tilt their head back and look way up. While a bar chart as high as a skyscraper is unlikely to be very useful, we included this scale because for very large or complex visualizations it is possible that a user could navigate to a view where some of the data is very far away. + +Finally, we compared VR with an on-screen condition which featured virtual environments with the same scales and depth cues (without, of course, the binocular depth cues provided by the HMD). This gave us a scale (3) by depth-cues (3) by screen/VR (2) factorial study. We also added a real-world condition, with a simple bar chart presented on a monitor. This provided a baseline, as it was similar to the foundational work done by Cleveland and McGill [6]. + +## Task + +Cleveland and McGill [6] provide several tasks for evaluating perception of lengths in a visualization. We chose to mimic their position-length experiment, specifically using their Type-1 task (as this had the lowest error). This task provides a 5-value bar chart, with two side-by-side bars marked with a dot, whose percentage differences range from 18% to 83%. The participant is asked to evaluate, without explicitly measuring, what percentage the smaller bar is of the larger. Our task mimics theirs, down to the way they chose relevant values for the bars in the task, except that we used a single bar chart instead of two side-by-side bar charts. We also colored the bars of interest because a dot at the bottom would be insufficient for differentiation when looking way up in the skyscraper scale.
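This style of task construction can be sketched in a few lines. The following is an illustrative sketch only: the distractor-height ranges and the function name are our assumptions, not the exact value-selection procedure of Cleveland and McGill.

```python
import random

def make_type1_task(percent, n_bars=5, max_height=1.0):
    """Heights for a Type-1 position-length task: two adjacent marked
    bars whose smaller-to-larger ratio equals `percent`%, plus random
    distractor bars. Returns the heights and the marked indices."""
    larger = random.uniform(0.5, 1.0) * max_height
    pair = [larger, larger * percent / 100.0]
    random.shuffle(pair)                      # random left/right order
    others = [random.uniform(0.1, 1.0) * max_height
              for _ in range(n_bars - 2)]
    i = random.randrange(n_bars - 1)          # slot for the adjacent pair
    return others[:i] + pair + others[i:], (i, i + 1)
```

For example, `make_type1_task(80)` yields five bar heights in which the two marked, consecutive bars stand in an 80% ratio.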
+ +Participants completed 7 blocks (depth-cues (3) x screen/VR (2) + 1 real-world), counterbalanced with a Latin square design. In each block they viewed 18 bar charts (126 total), 6 of each scale, in random order, except in the real-world condition where all 18 bar charts fit on the screen at ${30}\mathrm{\;{cm}}$ tall. Using [6]'s template as a guide, we randomly generated sets of 6 tasks; each set contained two tasks with a percentage difference between 10% and 40%, two between 40% and 60%, and two between 60% and 90%. For every bar chart, participants were asked to specify the percentage the smaller bar was of the larger, and the absolute height of the smaller bar in the virtual environment (or the real-world height on the screen, in the real-world block). While relative comparisons (bar compared to axis) are indeed the more common visualization task, we also asked about actual heights, as this was relevant to the distance compression literature and is a relevant visualization task in some situations (e.g., a terrain map where $1\mathrm{\;m} = 1\mathrm{\;{km}}$ in the real world). Participants were told not to walk around in VR, but could look around in any direction. In the on-screen virtual environment, participants could not move, but could look around using the mouse. Their 'virtual head' was placed at the same height as their real head had been in VR. + +Like in the original position-length experiment [6], participants were instructed not to explicitly measure (e.g., using a finger) or explicitly calculate distances. Also, because all participants in the pilot study expressed that the task was too hard and that they had no confidence in their estimations, we included an instruction that the task was supposed to be hard and not to feel discouraged. + +## Measures + +Before the study participants filled out a demographics questionnaire asking about VR and game experience. Participants responded verbally when asked for percentages and heights.
Answers were recorded and later merged with the logged study data. After the study, they filled out a simulator sickness questionnaire (SSQ) [20] and a questionnaire asking them whether they thought they performed better when estimating percentages or heights, which depth-cues virtual environment they thought they performed best in, whether they were better on the screen or in VR, and finally asking them to rate each of the 7 blocks in terms of how well they thought they performed. + +Like in the original position-length experiment [6], we took the log error of the percentage estimations + +$$ +\text{ LogErrorP } = {\log }_{2}\left( {\left| {{\text{ percent }}_{\text{guessed }} - {\text{ percent }}_{\text{actual }}}\right| + \frac{1}{8}}\right) +$$ + +For the height estimations, we calculated the error as a percentage of the actual height being estimated. This meant that if the bar was ${20}\mathrm{\;m}$ tall, and the participant guessed either ${18}\mathrm{\;m}$ or ${22}\mathrm{\;m}$ , they had a height error of ${10}\%$ . + +$$ +\text{ ErrorH } = \frac{\left| {\text{ height }}_{\text{guessed }} - {\text{ height }}_{\text{actual }}\right| }{{\text{ height }}_{\text{actual }}} +$$ + +## Participants + +Study 1 had 18 participants, with one removed for not following the instructions consistently, and one removed as an outlier with results more than 2 standard deviations from the mean, resulting in data for 16 participants. Details about the participants can be found in Table 1. Participants were remunerated with a 25 CAD gift card. + +## Results + +Results were analyzed with two (depth-cues (3) x screen/VR (2) x scale (3)) RM-ANOVAs (the real-world condition was not part of these RM-ANOVAs, instead providing a sanity check that our participants performed similarly to [6]). Overall measurement means and standard deviations can be found in Table 1 and charts are in Figure 2.
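For reference, the two error measures defined in the Measures section translate directly into code; a minimal sketch:

```python
import math

def log_error_percent(guessed, actual):
    """LogErrorP: Cleveland & McGill's log-2 error for percentage
    estimations, with the 1/8 offset so a perfect guess is defined."""
    return math.log2(abs(guessed - actual) + 1 / 8)

def error_height(guessed, actual):
    """ErrorH: height estimation error as a fraction of actual height."""
    return abs(guessed - actual) / actual
```

So a guess of 18 m for a 20 m bar gives `error_height` of 0.1, matching the 10% example in the text.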
+ +![01963e9d-24a5-7959-931f-69dfc7f29684_3_937_427_704_493_0.jpg](images/01963e9d-24a5-7959-931f-69dfc7f29684_3_937_427_704_493_0.jpg) + +Figure 2. Log percent error (left) and height estimation error (right) for screen/VR (top row), scale (middle row) and level of depth cues (bottom row) used in Study 1. (Note: Error bars show standard error.) + +## Percentage Log Error + +There was a main effect of screen/VR (F(1,15) = 27.63, p < 0.01). When given the same virtual environments on the screen and in VR, participants had less error in VR (Figure 2). There was no main effect of depth-cues (p = .53). There was a main effect of scale (F(2,30) = 185, p < 0.01) (Figure 2). People were significantly worse at larger distances. + +There was an interaction effect of screen/VR x scale (F(2,30) = 32.49, p < 0.01). At the personal scale, screen-based virtual environments had less error than VR, but this reversed at the larger scales. + +The real-world condition had a mean log percent error of 1.7 (SD: 0.94), which is slightly higher than, but very similar to, the value of 1.5 achieved by the original paper-based task [6]. + +## Height Error + +As expected, 77.5% of all height evaluations were underestimations. There was a main effect of screen/VR (F(1,15) = 5.80, p < 0.05), with participants having less error in VR. There was a main effect of scale (F(2,30) = 5.403, p < 0.05), with higher error at larger scales. There was no main effect of depth-cues (p = .15) and no interaction effects. + +The real-world condition had a mean height error of 20% (SD: 16%). + +## Subjective Rankings + +Participants indicated whether they were better in VR (62%), the screen-based virtual environment (19%), or performed equally well in both (19%).
Most participants indicated that they performed best in the rich virtual environment (81%), while a few indicated they were best in the 'some depth cues' virtual environment (19%). The 7 blocks (depth cues (3) x screen/VR (2) + 1 real-world), sorted by mean participant rank, are: rich/VR (1.6), some/VR (3.0), rich/screen (3.1), some/screen (4.5), real-world (4.7), few/VR (4.9), few/screen (6.1). Although we did not formally record participants with audio or video, we took notes when they commented on the helpfulness (or lack of helpfulness) of a virtual environment during or after the study. Most participants commented that the task was very hard, particularly with heights (e.g., "I don't think I am very good at this", "I have no idea how tall that is"). Every single participant commented at least once that some facet of the rich depth cue conditions was helpful (e.g., "The trees help", "I like the building, I can count the floors", "How big is a house, that chart is as big as a house"). + +## Summary of Results + +In general, people were quite good at evaluating percentages, but poor at evaluating heights. They were better in VR, perhaps due to binocular depth cues, or due to physical sensations such as tilting one's head back to look up. Scale was, as expected, important, with larger distances resulting in more error in both measurements. Depth cues did not seem to be influential in either height or percentage evaluations. This first study confirmed that distance underestimation occurs in a visualization context, in this case bar charts, in VR. However, the results suggest that percentage estimations are not nearly as negatively affected as height estimations. Participants were off by 9.7% on average, which is very small when considering that most answers were given as a multiple of five, introducing an expected error of 2.5%.
Furthermore, VR had less error than the equivalent screen-based virtual environments, meaning that VR might be a better option than a screen-based visualization where one needs to look around, at least in some situations. + +Subjectively, people felt they were more accurate at percentages and better in VR. However, even though we did not find a significant difference between the different depth-cue conditions, people collectively felt that they were more accurate in the rich depth cue conditions. This is interesting because it means that while depth cues might be less impactful on task performance, they do seem to be impactful on user comfort and users' perception of their own competency. + +## STUDY 2 - MULTIPLE PERSPECTIVES + +In this study we were interested in how perspective and motion would change perception of distances. In particular, the fixed position near the base of the bar chart used in Study 1 meant that for the larger scales of charts, users would experience significant perspective issues such as foreshortening. + +Other than providing movement and the ability to take a new perspective, we also made a few changes from Study 1. Since we found no effect of depth cues on task performance, we fixed this factor at our sunset-like, some depth cues virtual environment, which we consider a reasonable best practice. Despite the preference of participants for the park-like rich depth cue environment, we acknowledge that complex context-rich settings full of trees and buildings may not be universally suitable for all visualizations. The scales we used in this study did not change; however, since we were interested in letting participants take perspectives that were possibly far away, we made our bar charts ${1.5}\mathrm{x}$ as wide to be more visible from a distance. This meant that we moved the front and center starting position back ${1.5}\mathrm{\;m}$ such that at the personal scale the participant would still see the whole chart.
We then calculated, using this front location, two other fixed positions (${15}\mathrm{\;m}$ to the left & right), and a variable back position such that the view was elevated and far enough away that both colored bars could be seen in their entirety. This back position was different for every bar chart and provided the participant with a perspective that did not require them to look up or down, removing foreshortening effects. + +![01963e9d-24a5-7959-931f-69dfc7f29684_4_925_769_711_432_0.jpg](images/01963e9d-24a5-7959-931f-69dfc7f29684_4_925_769_711_432_0.jpg) + +Figure 3. Front platform near personal scale bar chart. Participants would view the world from the center of the platform. In some conditions platforms functioned as elevators. + +Unlike the first study, where participants stood on the ground, here participants stood on virtual platforms (Figure 3). These hexagonal platforms featured transparent glass railings, interaction instructions, and lights that would light up when the participant stood on a centrally located pressure plate. (Participants were asked to return to the center between tasks.) The width of the platform (about $2\mathrm{\;m}$ ) corresponded to the maximum walkable space as calibrated by the HTC Vive. This meant that participants could always walk around on a platform and even lean over the railings safely. Besides being functional in terms of movement, the consistently sized platforms could be used for relative sizing, like the objects in the rich depth cues virtual environment. + +Participants were always on a platform; however, we provided the following movement modes. + +Front Platform Only: Like the naïve perspective chosen in the first study, the participant was stuck at the front and bottom of the chart. The participant could not teleport to a new location or move the platform up or down. However, unlike the first study, participants could walk around on the platform. + +Table 1: Study Details
| | Study 1 | Study 2 | Study 3 |
| --- | --- | --- | --- |
| Task Type | Position-Length [4] | Position-Length [4] | Position-Angle [4] |
| Personal Scale: Height | 3 m | 3 m | bar/scatter: 13 m; pie: 5 m |
| Personal Scale: Task Bar Height | < 1.7 m | < 1.7 m | bar/scatter: < 5 m |
| House Scale: Height | 30 m | 30 m | bar/scatter: 30 m; pie: 12 m |
| House Scale: Task Bar Height | < 17 m | < 17 m | bar/scatter: < 12 m |
| Skyscraper Scale: Height | 180 m | 180 m | - |
| Skyscraper Scale: Task Bar Height | < 100 m | < 100 m | - |
| Participants: Age | M: 35.0, SD: 10.7 | M: 34.5, SD: 11.0 | M: 28.5, SD: 4.3 |
| Participants: N | 16 (4 female) | 10 (2 female) | 10 (3 female) |
| Participants: Measurement | 4 used metric | 5 used metric | 6 used metric |
| Participants: VR Experience | 11 tried before, 3 very familiar, 2 experts | 3 no experience, 2 tried before, 5 very familiar | 7 tried before, 2 very familiar, 1 expert |
| Measures: Log Percent Error | M: 2.67, SD: 1.37 | M: 1.96, SD: 1.18 | M: 1.85, SD: 1.67 |
| Measures: Height Error | M: 32%, SD: 23% | M: 34%, SD: 22% | M: 34%, SD: 27% |
| Measures: Angle Error | - | - | M: 20%, SD: 16% |
| Measures: Simulator Sickness | M: 5.1, SD: 4.5 | M: 10.1, SD: 6.2 | M: 6.2, SD: 4.8 |
| Measures: Performed Better At | percents (88%), heights (0%), both (12%) | percents (90%), heights (0%), both (10%) | percents (50%), heights (10%), both (40%) |
+ +Back Platform Only: The participant started on the variable-location back platform and could walk around but not teleport or use the platform elevator. + +Teleport Anywhere: Participants started at the front location and could teleport/move the platform they were on by pressing the trigger, aiming a visible arc pointer at a valid location (marked by a repeating blue pattern) and then releasing the trigger. Participants could teleport anywhere in a ${100}\mathrm{\;m}$ square centered on the chart. Participants could not move so close to the chart that they intersected it (the invalid area was marked in red). The elevator platform could be moved up and down by using a diegetic interface on the touchpad. + +Teleport 4 Platforms: Participants could teleport to the fixed-location front, left, and right elevator platforms as well as the variable-location back platform. Additional platforms are only seen when teleporting so they do not block the chart. Platforms were selected by aiming the arc pointer directly at, near, or in the general direction of a platform. + +Teleport Front/Back: Participants could teleport like in the Four Platform condition, but could only access the front and back elevator platforms. + +![01963e9d-24a5-7959-931f-69dfc7f29684_5_152_1615_718_255_0.jpg](images/01963e9d-24a5-7959-931f-69dfc7f29684_5_152_1615_718_255_0.jpg) + +Figure 4. Types of movement allowed in Study 2. + +## Task + +Tasks were generated in the same way as in Study 1. Participants completed 5 counterbalanced blocks, one for each movement type. Each block was introduced in a training mode where participants could try out the relevant movement interactions. During the study tasks, participants were asked to teleport at least once before giving their answers (if applicable). They were asked to return to the center of the platform in between each of the 90 tasks.
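The variable back-platform placement described for this study can be derived with basic view-frustum geometry. A hedged sketch follows; the vertical field-of-view value and the margin factor are our assumptions (the paper does not report them), and the function name is hypothetical:

```python
import math

def back_platform_position(chart_height, vertical_fov_deg=90.0, margin=1.2):
    """Return (distance, eye_height) so an entire chart of the given
    height fits vertically in view, with the eye at mid-chart height so
    no looking up or down is needed (removing foreshortening)."""
    half_angle = math.radians(vertical_fov_deg) / 2.0
    distance = margin * (chart_height / 2.0) / math.tan(half_angle)
    return distance, chart_height / 2.0
```

With a 90-degree vertical FOV and no margin, a 30 m chart is fully visible from 15 m back at a 15 m eye height.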
+ +## Measures + +The measures employed were similar to Study 1, except that the final questionnaire asked participants to rank their performance with the 5 movement types. + +## Participants + +Study 2 had 11 participants; one was excluded as a clear outlier (more than two standard deviations from the mean). All participant details can be found in Table 1. The study took one hour and participants were remunerated with a 25 CAD gift card. + +## Results + +We performed two (movement-type (5) x scale (3)) RM-ANOVAs. Overall measurement means and standard deviations are in Table 1, and charts are in Figure 5. + +![01963e9d-24a5-7959-931f-69dfc7f29684_5_936_1432_700_324_0.jpg](images/01963e9d-24a5-7959-931f-69dfc7f29684_5_936_1432_700_324_0.jpg) + +Figure 5. Log percent error (left) and height estimation error (right) for each movement type (top row) and scale (bottom row) used in Study 2. (Note: error bars show standard error.) + +## Percentage Log Error + +There was a main effect of movement-type (F(4,36) = 51.6, p < 0.001). Post-hoc tests showed that Front Platform Only was significantly worse than all other conditions. There was no effect of scale (p = 0.12) and no interaction effects. Overall measure averages can be found in Table 1. + +## Height Error + +There was no main effect of movement-type (p = .31) but there was an effect of scale (F(2,18) = 6.7, p < 0.01). Post-hoc tests showed that the largest scale led to significantly higher error than the smallest and medium scales (p < 0.05). + +## Subjective Rankings + +The movement-types, sorted by mean rank, are: continuous-teleport (1.8), teleport-front/back (2.0), teleport-4-platforms (2.1), back-platform-only (4.1), front-platform-only (4.7).
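Within-subject comparisons like these rest on repeated-measures ANOVA, which removes per-subject offsets before testing the condition effect. A pure-Python sketch of the one-way case (illustrative only; the analyses reported here were two-way, movement-type x scale):

```python
def rm_anova_oneway(scores):
    """One-way repeated-measures ANOVA. `scores[s][c]` holds subject
    s's error in condition c. Returns (F, df_condition, df_error)."""
    n, k = len(scores), len(scores[0])          # subjects, conditions
    grand = sum(sum(row) for row in scores) / (n * k)
    cond_means = [sum(row[c] for row in scores) / n for c in range(k)]
    subj_means = [sum(row) / k for row in scores]
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_cond - ss_subj       # condition x subject residual
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err
```

Because the subject sum of squares is partialled out, adding a constant to all of one subject's scores leaves the F statistic unchanged, which is exactly why this design is appropriate for participants with very different baseline accuracy.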
+ +## Summary of Results + +Adding movement and different perspectives to the viewpoint used in the first study always resulted in improvements to percentage estimations. The log percentage error of these improved conditions is about 1.7, which is very close to the 1.5 log percentage error achieved in Cleveland and McGill's position-length type 1 task, on which we modelled our study. Furthermore, the effect of scale on percentage estimations was essentially eliminated when participants had the opportunity to view the visualization from far away and view the entire set of bars at once. Thus, it appears that relative distance tasks, like the percentage estimation we used here, are robust to the perceptual effects of VR if one can view the chart from different perspectives. + +On the other hand, directly estimating heights was still problematic when movement and perspective were added. No condition improved people's height estimations, and larger scales were still more difficult. Thus movement/perspective was not successful at improving people's ability to estimate heights. + +## STUDY 3 - OTHER CHART TYPES + +Now that we have established that, at least when it comes to bar charts, estimating relative distances may be robust to the effects of distance compression in VR, we were interested in looking at this effect in other chart types. To this end we ran a third study using bar charts, pie charts and scatter plots (Figure 6). We used the sunset environment from Study 1 and the continuous teleport movement from Study 2, as it was ranked the highest. Also, because the first two studies had confirmed that people are bad at estimating the size of skyscrapers, we made 9 of the 18 charts in each condition fit all data directly ahead without needing to look up, and the other 9 about twice as tall. + +![01963e9d-24a5-7959-931f-69dfc7f29684_6_154_1673_712_157_0.jpg](images/01963e9d-24a5-7959-931f-69dfc7f29684_6_154_1673_712_157_0.jpg) + +Figure 6.
Chart types used in Study 3. + +Bar Chart: As with the bar chart in the previous studies, participants were asked to make percentage estimations between two colored bars and to estimate the height of the smaller colored bar. Colored bars were always consecutive but their order was randomized. + +Scatter Plot: Instead of bars, this chart used spherical markers. The markers were spread out horizontally on the x-axis such that they were all between 0.5 m and 2.5 m apart; however, the colored markers were always consecutive and 1.5 m apart. Participants were asked to make percentage estimations between the colored markers with respect to height (y-axis) and to estimate the height of the shorter colored marker. + +Pie Chart: This five-section pie chart had two colored segments and three distinct white sections. Colored segments were always in a random consecutive position and were assigned either dark or light purple randomly. Like Cleveland and McGill's [6] position-angle experiment, participants were asked to make percentage estimations between the colored segments. However, as a pie chart uses angles rather than heights to encode data, participants were instructed to estimate the angle of the smaller segment. + +## Task + +Tasks were generated similarly to Cleveland and McGill's [6] position-angle experiment. This meant that 5 numbers summing to 100 were generated, with percentage differences ranging from 10% to 97%. Participants completed 3 counterbalanced blocks, one for each chart type. Each block was introduced in a training mode where the researcher walked them through the exact questions used in this task. During the study tasks, participants were asked to teleport, walk around, or use the elevator at least once before giving their answers. They were asked to return to the center of the platform between each of the 54 tasks.
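The exact sampling procedure is not spelled out here, so the following is only a minimal rejection-sampling sketch consistent with the description (5 positive values summing to 100), interpreting "percentage difference" as the smaller compared value expressed as a percentage of the larger; all names are our own.

```python
import random

def generate_task(rng: random.Random, lo: float = 10.0, hi: float = 97.0):
    """Draw 5 positive values summing to 100 until two consecutive
    values have a smaller/larger percentage within [lo, hi].
    (Illustrative sketch; the original procedure may differ.)"""
    while True:
        # Cut [0, 100] at four random points to get 5 positive segments.
        cuts = sorted(rng.uniform(0, 100) for _ in range(4))
        points = [0.0] + cuts + [100.0]
        values = [b - a for a, b in zip(points, points[1:])]
        i = rng.randrange(len(values) - 1)   # pick two consecutive entries
        a, b = values[i], values[i + 1]
        pct = 100.0 * min(a, b) / max(a, b)
        if lo <= pct <= hi:
            return values, (i, i + 1), pct

values, pair, pct = generate_task(random.Random(1))
print(len(values), round(sum(values), 6))
```

The seeded `random.Random(1)` only makes the sketch reproducible; any generator would do.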
+ +## Measures + +The measures employed were similar to Studies 1 and 2, except that the final questionnaire asked participants to rank their performance with each chart. + +Additionally, to compare participants' angle estimations to height estimations, we used a very similar formula to calculate the angle estimation error as a percentage of the actual angle being estimated. + +$$\text{Error}_A = \frac{\left|\text{angle}_{\text{guessed}} - \text{angle}_{\text{actual}}\right|}{\text{angle}_{\text{actual}}}$$ + +## Participants + +Study 3 had 10 participants. All participant details can be found in Table 1. The study took 40 minutes and participants were remunerated with a 25 CAD gift card. + +## Results + +We performed a (chart-type (3) x scale (2)) RM-ANOVA. Overall measurement means and standard deviations can be found in Table 1 and charts are in Figure 7. + +## Percentage Log Error + +There was a main effect of chart-type (F(2, 18) = 11.06, p < 0.001). Post-hoc tests showed that Pie Charts were significantly worse than all other conditions (p < 0.05). There was no effect of scale (p = 0.30) and no interaction effects. + +## Height and Angle Error + +There was a main effect of chart-type (F(2, 18) = 11.39, p < 0.01). Post-hoc tests showed that angle estimations had significantly less error than heights (p < 0.01). There was no effect of scale (p = 0.65) and no interaction effects. + +## Subjective Rankings + +The chart-types, sorted by mean rank, are: bar (1.4), pie (2.0), scatter (2.7). + +![01963e9d-24a5-7959-931f-69dfc7f29684_7_162_479_698_157_0.jpg](images/01963e9d-24a5-7959-931f-69dfc7f29684_7_162_479_698_157_0.jpg) + +Figure 7. Log percent error (left) and height estimation error (right) for each chart type in Study 3. (Note: error bars show standard error.)
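The angle error formula above (and, per the text, its very similar height counterpart) is a plain relative error; a direct transcription, with hypothetical argument names:

```python
def relative_error(guessed: float, actual: float) -> float:
    # ErrorA = |guessed - actual| / actual, as defined for angles;
    # the height error measure takes the same form with heights in metres.
    if actual == 0:
        raise ValueError("actual must be non-zero")
    return abs(guessed - actual) / actual

# A participant judging a 100-degree segment to be 80 degrees:
print(relative_error(80, 100))  # → 0.2
```

Because the error is normalized by the actual value, over- and underestimates of the same magnitude score identically, and errors are comparable across differently sized angles or bars.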
+ +## Summary of Results + +When it comes to percentage estimations, our results mirror Cleveland and McGill's [6] results: people are better at lengths than they are at angles, by a factor of 2.1 (1.96 in the original study). This result suggests that for percentage estimations, other perceptual tasks (e.g., area) should follow the same patterns in VR as in the original work. + +However, when looking at angle/height estimations, participants were better at angles. This could be because one only needs to look at the innermost pie chart point to do this estimation (as opposed to looking at the bottom and top of a large bar chart), because all angles are bounded by 360 (the maximum angle was 100 degrees in our study), or because the angles were relatively small. Future work should investigate this more closely, especially at larger and smaller scales. + +## DISCUSSION + +This work suggests that the distance compression problem that occurs in VR does alter one's perception of data visualizations. However, relative distance tasks, like the percentage estimation tasks in these three studies, appear to be robust to this distance compression, particularly when participants can reach a perspective where they can view the chart from far back. + +## Design Guidelines + +## VR is Good for Virtual Environments + +In Study 1, VR had less error than the equivalent screen-based virtual environment for both percentage and height estimations. While VR may not be better in all circumstances (e.g., a flat screen image), when the visualization exists inside a virtual environment, VR can be a good choice for immersive analytics. + +## Use Movement Modes that Avoid Nausea + +Unity's practitioner guidelines [49] recommend avoiding simulator sickness by designing to avoid vection. Vection-related sickness occurs in VR when the user's vestibular system receives signals that conflict with what their eyes see.
This occurs most often in VR when the movement modality causes the user to experience motion in VR that their body does not (e.g., mapping movement in VR to a thumbstick on a controller), or when experienced bodily motion has no effect in VR (e.g., not updating the VR environment when the user turns their head). + +Although we were not specifically investigating or avoiding nausea, we did find some of the techniques we used to be successful. We used a fade-in/out teleport mechanism and platforms that matched the safely walkable space in the real world in Study 2 and Study 3 to avoid vection. The elevator feature was unfortunately vection inducing because it was activated with the touchpad instead of equivalent bodily motion. We combated vection-related nausea by severely limiting the elevator speed and by using an easing function to prevent sudden stops. A future implementation of an elevator might have 'floors' that can be accessed with a teleport-like fade effect. + +## Encode Data with Relative Distances + +One should not have any requirements or expectations that a user can estimate a distance in VR. In all three studies, participants underestimated heights on average by 33% (i.e., to 2/3 of their actual value). Moreover, across all three studies, participants provided extremely low-ball heights 15% of the time, underestimating by more than 50%. Conversely, participants were much more accurate in the percentage estimation task, off only by 6-10% on average across all three studies. + +Therefore, designers should encode data in ways that allow users to compare two distances/lengths rather than expecting them to measure a distance directly. In a simple graph like a bar chart or scatter plot, this can be as simple as providing a labelled axis immediately beside the data. Avoid, say, providing a scale for a map that requires one to estimate a distance (e.g., 1 inch = 1 mile).
We also recommend that where a measurement of distance is necessary, one should provide a tool that can be used for comparison, like a ruler or a measuring tape. + +## Consider Maximum Scale + +It is important to consider the most extreme perspective that the user can navigate into, or rather, the maximum distance from themselves that a user might be asked to evaluate. In Study 1, participants were reasonably accurate at the personal and house scales, and were predictably bad at the skyscraper scale. Therefore, we recommend creating situations in which users evaluate distances of less than 15 m, though this is just a rough estimate based on the particular distances we used in the study. Future work is needed to provide a better guideline. + +## Provide an Overview Perspective + +In Study 1, larger scales meant higher error in both percentage and height estimations. However, when the ability to view the data from an overview perspective was added in Study 2, this effect disappeared for percentage estimations. If one does use large distances, or data that is far away from the user, providing a way for the user to take a quick overview of the relevant data can mitigate the negative effects of large distances by eliminating problems like foreshortening. It should be noted that in Study 2, the teleport-anywhere condition, in which one could move to an overview perspective, was not significantly better than the back-platform-only condition, which automatically provided an overview perspective. Therefore it may be enough to simply provide a generated overview of the visualization, or one could provide freeform movement options like teleport-anywhere, depending on one's needs.
+ +## Users Appreciate Additional Depth Cues + +Given that every participant mentioned how helpful additional depth cues were in Study 1, despite no change in task performance, we recommend providing as many depth cues as possible to improve users' comfort and perceived competency with the task. This could mean simple things like adding texture to an object, or more complicated, fully developed, contextually relevant environments with relatively sized objects. + +## CONCLUSION + +In this paper we investigated how the phenomenon of distance compression alters perception of visualizations in VR. Through three studies that replicate foundational work on the perception of visualizations, we found that estimations of actual lengths, in this case the heights of bars in a bar chart, are negatively impacted by distance compression, but relative distances are not. Furthermore, as with traditional visualizations, people can better estimate relative lengths than relative angles, suggesting that much of the existing perceptual research on visualizations may still apply. Finally, we provide a set of design guidelines for designers wishing to develop VR visualizations that limit the negative effects of distance compression. + +## REFERENCES + +1. C. Armbrüster, M. Wolter, T. Kuhlen, W. Spijkers, and B. Fimm. 2008. Depth Perception in Virtual Reality: Distance Estimations in Peri- and Extrapersonal Space. CyberPsychology & Behavior 11, 1: 9-15. https://doi.org/10.1089/cpb.2007.9935 + +2. Sriram Karthik Badam, Arjun Srinivasan, Niklas Elmqvist, and John Stasko. Affordances of Input Modalities for Visual Data Exploration in Immersive Environments. + +3. Scott Bateman, Regan L Mandryk, Carl Gutwin, Aaron Genest, David McDine, and Christopher Brooks. 2010. Useful junk?: the effects of visual embellishment on comprehension and memorability of charts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2573-2582. + +4. R. Bennett, D. J.
Zielinski, and R. Kopper. 2014. Comparison of interactive environments for the archaeological exploration of 3D landscape data. In 2014 IEEE VIS International Workshop on 3DVis (3DVis), 67-71. + +https://doi.org/10.1109/3DVis.2014.7160103 + +5. Steve Bryson. 1996. Virtual Reality in Scientific Visualization. Commun. ACM 39, 5: 62-71. https://doi.org/10.1145/229459.229467 + +6. William S. Cleveland and Robert McGill. 1984. Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. Journal of the American Statistical Association 79, 387: 531-554. https://doi.org/10.1080/01621459.1984.10478080 + +7. Grégoire Cliquet, Matthieu Perreira, Fabien Picarougne, Yannick Prié, and Toinon Vigier. 2017. Towards HMD-based Immersive Analytics. In Immersive analytics Workshop, IEEE VIS 2017. Retrieved April 6, 2018 from https://hal.archives-ouvertes.fr/hal-01631306 + +8. Maxime Cordeil, Andrew Cunningham, Tim Dwyer, Bruce H. Thomas, and Kim Marriott. 2017. ImAxes: Immersive Axes As Embodied Affordances for Interactive Multivariate Data Visualisation. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17), 71-83. https://doi.org/10.1145/3126594.3126613 + +9. Sarah H Creem-Regehr, Peter Willemsen, Amy A Gooch, and William B Thompson. 2005. The Influence of Restricted Viewing Conditions on Egocentric Distance Perception: Implications for Real and Virtual Indoor Environments. Perception 34, 2: 191-204. https://doi.org/10.1068/p5144 + +10. James E. Cutting. 1997. How the eye measures reality and virtual reality. Behavior Research Methods, Instruments, & Computers 29, 1: 27-36. https://doi.org/10.3758/BF03200563 + +11. Henry Fuchs, Mark A. Livingston, Ramesh Raskar, D'nardo Colucci, Kurtis Keller, Andrei State, Jessica R. Crawford, Paul Rademacher, Samuel H. Drake, and Anthony A. Meyer. 1998. Augmented reality visualization for laparoscopic surgery. 
In Medical Image Computing and Computer-Assisted Intervention - MICCAI'98 (Lecture Notes in Computer Science), 934-943. https://doi.org/10.1007/BFb0056282 + +12. Michael Glueck and Azam Khan. 2011. Considering multiscale scenes to elucidate problems encumbering three-dimensional intellection and navigation. AI EDAM 25, 4: 393-407. + +https://doi.org/10.1017/S0890060411000230 + +13. Nicolas Greffard, Fabien Picarougne, and Pascale Kuntz. 2011. Visual Community Detection: An Evaluation of 2D, 3D Perspective and 3D Stereoscopic Displays. In Graph Drawing (Lecture Notes in Computer Science), 215-225. https://doi.org/10.1007/978-3-642-25878-7_21 + +14. Jeffrey Heer and Michael Bostock. 2010. Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10), 203-212. https://doi.org/10.1145/1753326.1753357 + +15. Carolin Helbig, Hans-Stefan Bauer, Karsten Rink, Volker Wulfmeyer, Michael Frank, and Olaf Kolditz. 2014. Concept and workflow for 3D visualization of atmospheric data in a virtual reality environment for analytical approaches. Environmental Earth Sciences 72, 10: 3767-3780. https://doi.org/10.1007/s12665-014-3136-6 + +16. Tung-Ju Hsieh, Yang-Lang Chang, and Bormin Huang. 2011. Visual analytics of terrestrial lidar data for cliff erosion assessment on large displays. In Satellite Data Compression, Communications, and Processing VII, 81570D. https://doi.org/10.1117/12.895437 + +17. Mark St John, Michael B. Cowen, Harvey S. Smallman, and Heather M. Oonk. 2001. The Use of 2D and 3D Displays for Shape-Understanding versus Relative-Position Tasks. Human Factors 43, 1: 79-98. https://doi.org/10.1518/001872001775992534 + +18. Mark St John, Harvey S. Smallman, Timothy E. Bank, and Michael B. Cowen. 2001. Tactical Routing Using Two-Dimensional and Three-Dimensional Views of Terrain.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting 45, 18: 1409-1413. https://doi.org/10.1177/154193120104501820 + +19. Johnson JG Keiriz, Olusola Ajilore, Alex D Leow, and Angus G Forbes. Immersive Analytics for Clinical Neuroscience. + +20. Robert S. Kennedy, Norman E. Lane, Kevin S. Berbaum, and Michael G. Lilienthal. 1993. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. The International Journal of Aviation Psychology 3, 3: 203-220. https://doi.org/10.1207/s15327108ijap0303_3 + +21. Tereza G. Kirner and Valéria F. Martins. 2000. Development of an Information Visualization Tool Using Virtual Reality. In Proceedings of the 2000 ACM Symposium on Applied Computing - Volume 2 (SAC '00), 604-606. https://doi.org/10.1145/338407.338515 + +22. Joshua M. Knapp and Jack M. Loomis. 2004. Limited Field of View of Head-Mounted Displays Is Not the Cause of Distance Underestimation in Virtual Environments. Presence: Teleoperators and Virtual Environments 13, 5: 572-577. https://doi.org/10.1162/1054746042545238 + +23. Søren Knudsen and Sheelagh Carpendale. Multiple Views in Immersive Analytics. + +24. Gregorij Kurillo and Maurizio Forte. 2012. Telearch-Integrated Visual Simulation Environment for Collaborative Virtual Archaeology. Mediterranean Archaeology and Archaeometry 12, 1: 11-20. + +25. Jong Min Lee, James MacLachlan, and William A. Wallace. 1986. The Effects of 3D Imagery on Managerial Data Interpretation. MIS Quarterly 10, 3: 257-269. https://doi.org/10.2307/249259 + +26. Jack M. Loomis and Joshua M. Knapp. 2003. Visual Perception of Egocentric Distance in Real and Virtual Environments. In Virtual and Adaptive Environments: Applications, Implications, and Human Performance Issues. CRC Press, 21-46. + +27. Arif Masrur, Jiayan Zhao, Jan Oliver Wallgrün, Peter LaFemina, and Alexander Klippel. Immersive Applications for Informal and Interactive Learning for Earth Sciences. + +28. Ross Messing and Frank H.
Durgin. 2005. Distance Perception and the Visual Horizon in Head-Mounted Displays. ACM Trans. Appl. Percept. 2, 3: 234-250. https://doi.org/10.1145/1077399.1077403 + +29. Laura Nelson, Dianne Cook, and Carolina Cruz-Neira. 1999. Xgobi vs the c2: Results of an experiment comparing data visualization in a 3-d immersive virtual reality environment with a 2-d workstation display. Computational Statistics 14, 1: 39-52. + +30. Matthew Ready, Tim Dwyer, and Jason H Haga. Immersive Visualisation of Big Data for River Disaster Management. + +31. Jun Rekimoto and Mark Green. 1993. The information cube: Using transparency in 3D information visualization. In Proceedings of the Third Annual Workshop on Information Technologies & Systems (WITS'93), 125-132. + +32. W. Ribarsky, J. Bolter, A. Op den Bosch, and R. van Teylingen. 1994. Visualization and analysis using virtual reality. IEEE Computer Graphics and Applications 14, 1: 10-12. https://doi.org/10.1109/38.250911 + +33. Adam R. Richardson and David Waller. 2005. The effect of feedback training on distance estimation in virtual environments. Applied Cognitive Psychology 19, 8: 1089-1108. https://doi.org/10.1002/acp.1140 + +34. Cynthia S. Sahm, Sarah H. Creem-Regehr, William B. Thompson, and Peter Willemsen. 2005. Throwing Versus Walking As Indicators of Distance Perception in Similar Real and Virtual Environments. ACM Trans. Appl. Percept. 2, 1: 35-45. https://doi.org/10.1145/1048687.1048690 + +35. M. Sedlmair, T. Munzner, and M. Tory. 2013. Empirical Guidance on Scatterplot and Dimension Reduction Technique Choices. IEEE Transactions on Visualization and Computer Graphics 19, 12: 2634-2643. https://doi.org/10.1109/TVCG.2013.153 + +36. B. Shneiderman. 2003. Why not make interfaces better than 3D reality? IEEE Computer Graphics and Applications 23, 6: 12-15. + +37. Mark Simpson, Jiayan Zhao, and Alexander Klippel. Take a Walk: Evaluating Movement Types for Data Visualization in Immersive Virtual Reality.
+ +38. N. G. Smith, K. Knabb, C. DeFanti, P. Weber, J. Schulze, A. Prudhomme, F. Kuester, T. E. Levy, and T. A. DeFanti. 2013. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments. In 2013 Digital Heritage International Congress (DigitalHeritage), 363-370. + +https://doi.org/10.1109/DigitalHeritage.2013.6743761 + +39. J. E. Swan, A. Jones, E. Kolstad, M. A. Livingston, and H. S. Smallman. 2007. Egocentric depth judgments in optical, see-through augmented reality. IEEE Transactions on Visualization and Computer Graphics 13, 3: 429-442. + +https://doi.org/10.1109/TVCG.2007.1035 + +40. William B. Thompson, Peter Willemsen, Amy A. Gooch, Sarah H. Creem-Regehr, Jack M. Loomis, and Andrew C. Beall. 2004. Does the Quality of the Computer Graphics Matter when Judging Distances in Visually Immersive Environments? Presence: Teleoperators and Virtual Environments 13, 5: 560-571. https://doi.org/10.1162/1054746042545292 + +41. C. Ware and G. Franck. 1994. Viewing a graph in a virtual reality display is three times as good as a 2D diagram. In Proceedings of 1994 IEEE Symposium on Visual Languages, 182-183. https://doi.org/10.1109/VL.1994.363621 + +42. Colin Ware and Peter Mitchell. 2005. Reevaluating Stereo and Motion Cues for Visualizing Graphs in Three Dimensions. In Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization (APGV '05), 51-58. + +https://doi.org/10.1145/1080402.1080411 + +43. S. J Westerman and T Cribbin. 2000. Mapping semantic information in virtual space: dimensions, variance and individual differences. International Journal of Human-Computer Studies 53, 5: 765-787. https://doi.org/10.1006/ijhc.2000.0417 + +44. Christopher D. Wickens, David H. Merwin, and Emilie L. Lin. 1994. Implications of Graphics Enhancements for the Visualization of Scientific Data: Dimensional Integrality, Stereopsis, Motion, and Mesh. Human Factors 36, 1: 44-61.
+ +https://doi.org/10.1177/001872089403600103 + +45. Peter Willemsen, Mark B. Colton, Sarah H. Creem-Regehr, and William B. Thompson. 2004. The effects of head-mounted display mechanics on distance judgments in virtual environments. In 1st Symposium on Applied Perception in Graphics and Visualization, APGV 2004, 35-38. Retrieved September 20, 2018 from https://utah.pure.elsevier.com/en/publications/the-effects-of-head-mounted-display-mechanics-on-distance-judgmen + +46. Jessica K Witt, Dennis R Proffitt, and William Epstein. 2004. Perceiving Distance: A Role of Effort and Intent. Perception 33, 5: 577-590. + +https://doi.org/10.1068/p5090 + +47. S. Zhang, C. Demiralp, D. F. Keefe, M. DaSilva, D. H. Laidlaw, B. D. Greenberg, P. J. Basser, C. Pierpaoli, E. A. Chiocca, and T. S. Deisboeck. 2001. An immersive virtual environment for DT-MRI volume visualization applications: a case study. In Visualization, 2001. VIS '01. Proceedings, 437-584. + +https://doi.org/10.1109/VISUAL.2001.964545 + +48. Daniel Zielasko, Martin Bellgardt, Alexander Meißner, Maliheh Haghgoo, Bernd Hentschel, Benjamin Weyers, and Torsten W Kuhlen. buenoSDIAs: Supporting Desktop Immersive Analytics While Actively Preventing Cybersickness. + +49. Unity Tutorial: Movement in VR. Unity.
Retrieved September 20, 2018 from + +https://unity3d.com/learn/tutorials/topics/virtual-reality/movement-vr \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/IW70F9A__z/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/IW70F9A__z/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..95402e9fb58636eeb4b89881011a178cc778a592 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/IW70F9A__z/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,380 @@ +§ HOW TALL IS THAT BAR CHART? VIRTUAL REALITY, DISTANCE COMPRESSION AND VISUALIZATIONS + +1st Author Name + +Affiliation + +City, Country + +e-mail address + +2nd Author Name + +Affiliation + +City, Country + +e-mail address + +3rd Author Name + +Affiliation + +City, Country + +e-mail address + +<graphics> + +Figure 1. The virtual environments used in Study 1, each with differing levels of depth cues. Participants could look around with the HMD in VR and used the mouse to look around in the screen virtual environment conditions. + +§ ABSTRACT + +As VR technology becomes more available, VR applications will be increasingly used to present information visualizations. While data visualization in VR is an interesting topic, there remain questions about how effective or accurate such visualizations can be. One known phenomenon of VR environments is that people tend to unconsciously compress or underestimate distances. However, it is unknown if or how this effect will alter the perception of data visualizations in VR.
To this end, we replicate portions of Cleveland and McGill's foundational perceptual visualization studies in VR. Through a series of three studies we find that distance compression does negatively affect estimations of actual lengths (heights of bars), but does not appear to impact relative comparisons. Additionally, by replicating the position-angle experiments, we find that (as with traditional 2D visualizations) people are better at relative length evaluations than relative angles. Finally, by looking at these open questions, we develop a series of best practices for performing data visualization in a VR environment. + +§ AUTHOR KEYWORDS + +Visualization; VR. + +§ ACM CLASSIFICATION KEYWORDS + +H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous + +§ INTRODUCTION + +As Virtual Reality (VR) technology continues to be developed and expanded, workplace tasks, such as viewing information visualizations, are becoming more likely to be executed in a VR environment. While much of the research around traditional screen-based visualizations likely applies to VR, it is unclear how specific VR-related phenomena might alter how effective or accurate these visualizations are. Of particular note, it has been shown that in VR environments people tend to unconsciously underestimate distances, a phenomenon called distance compression [1,9,10,22,26,28,33,34,39,40,45]. However, to this point, designers of VR visualizations have not had any guidance about how distance compression will alter visualization effectiveness or user accuracy. For example, even a simple bar chart uses the heights/lengths of the bars to represent data, and it is unclear how distance compression will alter one's ability to measure or compare the lengths of the bars.
+ +To solve this problem, we looked to foundational work by Cleveland and McGill on graphical perception of paper-based visualizations [6], which has also been replicated in a digital context [14]. We performed 3 studies, replicating the position-length and the position-angle experiments in a VR environment. Our first study, using the bar chart position-length experiment, provided bar charts in virtual environments both on the screen and in VR and asked participants to measure actual distances (e.g., the bar is 1 m tall) and relative distances (e.g., bar A is 80% as tall as bar B). We explored the suggestion that varying degrees of depth cues could reduce distance compression [10,12], as well as bar charts of varying scales. In a second study, rather than depth cues, we looked at perspective, providing participants different ways to move around and look at the various scaled charts in VR. Finally, in the last experiment, we implemented the position-angle experiment, looking at how actual lengths/angles and relative lengths/angles were measured in bar charts, scatter plots, and pie charts. + +Through these studies, we make 5 contributions. First, we confirm the existence of distance compression in VR visualizations, but find that it also applies similarly to screen-based environments. Second, we show that distance compression does negatively affect actual length/distance measurements, but may only have a small or negligible effect on relative comparisons. Third, depth cues had no discernible effect on the accuracy of measurements, but do appear to affect one's perception of their ability to be accurate. Fourth, as in traditional 2D visualizations, people are better at relative length evaluations than relative angles. Finally, we provide a set of design guidelines to inform the implementation and creation of effective and useful visualizations in VR.
+ +§ RELATED WORK + +§ VR AND VISUALIZATION + +VR visualizations fall under the field of 'Immersive Analytics', though this term also covers augmented reality (AR) and mixed reality (MR) visualizations [7]. In the late 90s people were beginning to talk about VR and visualization, even though systems of the day (mostly VR caves) did not quite provide adequate capabilities [5,21,31,32,36]. More recent work provides concrete examples in the environmental [15,16,27,30], medical [11,19,47], and archaeology [4,24,38] domains. A notable example is ImAxes [8], a dynamic system where users draw and connect axes in midair in VR, allowing for multiple dynamic chart types. + +When interacting with a VR visualization, one should keep in mind that multiple views and input modalities may not transfer from the screen to VR [23]. Furthermore, the affordances of an interaction may be different in VR [2]. For example, Simpson et al. found that walking around the dataset in VR was not better than using a controller to rotate it [37]. Cybersickness must also be considered, as certain design choices, such as using a controller to move around, may work on a screen but quickly induce nausea in VR [48,49]. There may be many ways to combat this; for example, Cliquet et al. suggest allowing the user to sit [7]. + +§ VISUALIZATION AND PERCEPTION + +Cleveland and McGill's foundational work [6] showed that people are better at estimating lengths than areas, and better at estimating areas than volumes. Furthermore, people are better at position estimations (e.g., a scatter plot) than angle estimations (e.g., a pie chart). These results have been confirmed and replicated more recently by Heer and Bostock in a Mechanical Turk based study [14].
§ 2D VS 3D

The usefulness and effectiveness of 2D visualizations have often been compared to 3D. 2D is generally considered the best approach [35, 43], especially for tasks that require precision [18] and tasks that suffer from perspective distortion (e.g., distance estimation) [17]. However, 3D visualizations can still be useful, particularly when the data has a high level of detail, structure, and/or complexity (e.g., 3+ dimensions) [17, 18, 25, 44] or when the task involves exploring 3D representations of the real world (e.g., terrain or other real-world objects) [17, 18]. 3D may also prove useful by providing ways to explore overlap in network graphs [13, 41, 42].

In VR, most visualizations can now be considered 3D visualizations: even though they might be mapped onto a plane, the ability to look at them from multiple angles may cause occlusion or other problems of perspective. However, we bring up the debate between 2D and 3D graphs because there is some indication that the binocular depth cues provided by modern VR tip the equation in favor of 3D in some situations. When only considering scatter plots, for example, studies providing binocular cues have shown that 3D visualizations tend to win over 2D [25, 29, 44], although this is not always the case [35, 43].

§ VR AND PERCEPTION

People tend to underestimate distances in VR when using a Head-Mounted Display (HMD) [1, 9, 10, 26, 28, 33, 34, 39, 40, 45]. This effect exists even when the VR environment is very similar to, or a recording of, the real world [28, 34, 40], and may be partially caused by the limited field of view of the HMD [9, 22].
Physical factors, such as the weight of the HMD, may also be important [45], especially as the effort perceived to be necessary to walk a particular distance (e.g., if one were wearing a heavy backpack) can affect perception of that distance [46]. The parameters which affect distance compression have been investigated but are not fully understood. Furthermore, it is unclear how distance compression may affect visualization tasks like comparing two bars in a bar chart.

One possible explanation might be a lack of realistic depth cues [10, 12] in VR. Some of these cues, such as light, texture, shape, luminance, linearity of light, object occlusion, motion, etc., can be manipulated to be more or less available in a virtual environment. Unlike a 2D screen, VR headsets provide binocular cues because they render a separate image for each eye (although at least one study has suggested that monocular/binocular cues alone are not responsible for distance compression [9]). In fact, current VR headsets, such as the HTC Vive, allow most depth cues that are available in the real world to be implemented in VR [10]. Our first study looks at how the fidelity of depth cues might change distance compression when looking at a visualization.

It has also been suggested that distance compression might be affected by perceptually different distance zones. Armbrüster et al. suggest that distance compression is smaller for objects in peripersonal space (< 1 m), where an object is within arm's reach [1]. Cutting calls this zone personal space (< 1.5 m), splitting larger distances into action space (< 30 m, where interaction of some sort is feasible) and vista space (30 m+, further than one would expect to be able to act) [10].
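The distance zones described above can be summarized in a small helper (an illustrative sketch using the thresholds attributed to Cutting [10]; the function name is ours, not from the paper):

```python
def distance_zone(distance_m: float) -> str:
    """Classify a viewing distance (in metres) into Cutting's perceptual
    zones: personal (< 1.5 m), action (< 30 m), or vista (30 m and beyond)."""
    if distance_m < 1.5:
        return "personal"
    if distance_m < 30:
        return "action"
    return "vista"
```

For example, a 10 m chart sits in action space, while a 45 m skyscraper-scale bar falls into vista space, where one would not expect to act.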
To further investigate these categories of distance, the first two studies use three different sizes of charts, roughly corresponding to personal, action, and vista bar heights.

§ STUDY 1 - DEPTH CUES AND SCALE

The chronic underestimation of distances is troublesome for VR, particularly considering that visualizations tend to encode data using absolute or relative distances. Even a simple bar chart uses length to communicate data to the viewer.

Our first study aimed to confirm that visualizations in VR are compromised by distance underestimation. Since it has been suggested that more depth cues [10, 12] could lessen underestimation, we designed three virtual environments with differing levels of depth cues.

Few Depth Cues: Objects had a consistent luminance and no texture. Shadows were disabled and the sky was a medium gray. A simple textured floor was provided, as floating over a void was nauseating in VR. This condition represented a simple chart without embellishments.

Some Depth Cues: Bars now had a slightly crumpled paper texture and responded to the lights in the scene, casting shadows. The floor contained an arbitrary grid and was also textured slightly. Aerial perspective was applied, allowing distant objects to fade somewhat into the sunset sky. We consider this a 'best practices' chart, with minimal added embellishments, all of which directly contribute depth cues to the environment.

Rich Depth Cues: In addition to texture, luminance, and aerial perspective, the scene was augmented with objects that could be used to determine relative sizes. Trees, a light post, and some bushes provided general cues about scale. A house, car, and park bench were also in the scene, as these have relatively standard sizes. Similarly, a skyscraper was in the scene, as a floor of a building is also about a standard size. These objects were not immediately in view; the participant would have to look at them directly.
Grass and flowers on the ground provided cues of relative density. This condition, while being very rich with depth cues, was also a bit extreme; one can imagine that not every visualization has a place for trees, cars, and buildings (Figure 1). However, Bateman et al. [3] showed that embellishments that add context to the visualization can improve memorability, and given the prevalence of infographics, it is not impossible that some visualizations might provide relevant, contextual objects (e.g., a visualization about deforestation could contain trees).

We were also interested in measuring this effect at multiple scales. The corresponding chart heights and task-specific bar lengths can be seen in Table 1. In all conditions, the participant viewed the visualization from 4 m back.

Personal Scale: The entire visualization could be seen at one time when looking straight ahead, without looking significantly up or down.

House Scale: The larger visualization required the participant to look up somewhat to see the entire chart.

Skyscraper Scale: The visualization was extremely tall, requiring the viewer to tilt their head back and look far up. While a bar chart as high as a skyscraper is unlikely to be very useful, we included this scale because, for very large or complex visualizations, it is possible that a user could navigate to a view where some of the data is very far away.

Finally, we compared VR with an on-screen condition which featured virtual environments with the same scales and depth cues (without, of course, the binocular depth cues provided by the HMD). This gave us a scale (3) by depth-cues (3) by screen/VR (2) factorial study. We also added a real-world condition, with a simple bar chart displayed on a monitor. This provided a baseline, as it was similar to the foundational work done by Cleveland and McGill [6].
§ TASK

Cleveland and McGill [6] provide several tasks for evaluating perception of lengths in a visualization. We chose to mimic their position-length experiment, specifically using their Type-1 task (as this had the lowest error). This task presents a 5-value bar chart in which two side-by-side bars, marked with a dot, have percentage differences ranging from 18% to 83%. The participant is asked to evaluate, without explicitly measuring, what percentage the smaller bar is of the larger. Our task mimics theirs, down to the way they chose relevant values for the bars, except that we used a single bar chart instead of two side-by-side bar charts. We also colored the bars of interest, because a dot at the bottom would be insufficient for differentiation when looking far up at the skyscraper scale.

Participants completed 7 blocks (depth-cues (3) x screen/VR (2) + 1 real-world), counterbalanced with a Latin square design. In each block they viewed 18 bar charts (126 total), 6 of each scale, in random order, except in the real-world condition, where all 18 bar charts fit on the screen at 30 cm tall. Using [6]'s template as a guide, we randomly generated sets of 6 tasks, each set containing two tasks with a percentage difference between 10% and 40%, two between 40% and 60%, and two between 60% and 90%. For every bar chart, participants were asked to specify the percentage the smaller bar was of the larger, and the absolute height of the smaller bar in the virtual environment (or its real-world height on the screen, in the real-world block). While relative comparisons (bar compared to axis) are indeed the more common visualization task, we also asked about actual heights, as this is relevant to the distance compression literature and is a relevant visualization task in some situations (e.g., a terrain map where 1 m = 1 km in the real world).
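The banded task-generation scheme described above can be sketched as follows (an illustrative reconstruction, not the authors' actual code; the function name is ours):

```python
import random

def generate_task_set(rng: random.Random) -> list:
    """Generate one set of 6 percentage-comparison tasks: two ratios
    (smaller bar as a percentage of the larger) drawn from each of the
    bands 10-40%, 40-60%, and 60-90%, shuffled into a random order."""
    bands = [(10, 40), (40, 60), (60, 90)]
    ratios = [rng.uniform(lo, hi) for lo, hi in bands for _ in range(2)]
    rng.shuffle(ratios)
    return ratios
```

Each study block would then draw three such sets to cover the 18 charts (6 per scale), with block order counterbalanced separately via the Latin square.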
Participants were told not to walk around in VR, but could look around in any direction. In the on-screen virtual environment, participants could not move, but could look around using the mouse. Their 'virtual head' was placed at the same height as their real head had been in VR.

As in the original position-length experiment [6], participants were instructed not to explicitly measure (e.g., using a finger) or explicitly calculate distances. Also, because all participants in the pilot study expressed that the task was too hard and that they had no confidence in their estimations, we included an instruction that the task was supposed to be hard and that they should not feel discouraged.

§ MEASURES

Before the study, participants filled out a demographics questionnaire asking about VR and game experience. Participants responded verbally when asked for percentages and heights. Answers were recorded and later merged with the logged study data. After the study, they filled out a simulator sickness questionnaire (SSQ) [20] and a questionnaire asking whether they thought they performed better when estimating percentages or heights, in which depth-cues virtual environment they thought they performed best, and whether they were better on the screen or in VR; finally, they were asked to rate each of the 7 blocks in terms of how well they thought they performed.

As in the original position-length experiment [6], we took the log error of the percentage estimations:

$$
\text{LogErrorP} = \log_2\left(\left|\text{percent}_{\text{guessed}} - \text{percent}_{\text{actual}}\right| + \frac{1}{8}\right)
$$

For the height estimations, we calculated the error as a percentage of the actual height being estimated. This meant that if the bar was 20 m tall and the participant guessed either 18 m or 22 m, they had a height error of 10%.
$$
\text{ErrorH} = \frac{\left|\text{height}_{\text{guessed}} - \text{height}_{\text{actual}}\right|}{\text{height}_{\text{actual}}}
$$

§ PARTICIPANTS

Study 1 had 18 recruited participants; one was removed for not following the instructions consistently, and one was removed as an outlier with results more than 2 standard deviations from the mean, leaving data for 16 participants. Details about the participants can be found in Table 1. Participants were remunerated with a 25 CAD gift card.

§ RESULTS

Results were analyzed with two (depth-cues (3) x screen/VR (2) x scale (3)) RM-ANOVAs. (The real-world condition was not part of these RM-ANOVAs, instead providing a sanity check that our participants performed similarly to [6].) Overall measurement means and standard deviations can be found in Table 1, and charts are in Figure 2.

Figure 2. Log percent error (left) and height estimation error (right) for screen/VR (top row), scale (middle row), and level of depth cues (bottom row) used in Study 1. (Note: error bars show standard error.)

§ PERCENTAGE LOG ERROR

There was a main effect of screen/VR (F(1,15) = 27.63, p < 0.01). When given the same virtual environments on the screen and in VR, participants had less error in VR (Figure 2). There was no main effect of depth-cues (p = .53). There was a main effect of scale (F(2,30) = 185, p < 0.01) (Figure 2): people were significantly worse at larger distances.

There was an interaction effect of screen/VR x scale (F(2,30) = 32.49, p < 0.01). At the personal scale, screen-based virtual environments had less error than VR, but this reversed at the larger scales.

The real-world condition had a mean log percent error of 1.7 (SD: 0.94), which is slightly higher than, but very similar to, the value of 1.5 achieved in the original paper-based task [6].
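For concreteness, the two error measures defined in the Measures section (LogErrorP and ErrorH) can be written as a minimal sketch; the function names are ours:

```python
import math

def log_error_percent(guessed: float, actual: float) -> float:
    """Cleveland-McGill style log-2 error for percentage estimates; the 1/8
    offset keeps a perfect answer from producing log(0)."""
    return math.log2(abs(guessed - actual) + 1 / 8)

def height_error(guessed: float, actual: float) -> float:
    """Height-estimation error as a fraction of the true height."""
    return abs(guessed - actual) / actual
```

Guessing either 18 m or 22 m for a 20 m bar gives a height error of 0.10 (10%), matching the worked example above, while a perfect percentage answer yields log2(1/8) = -3.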
§ HEIGHT ERROR

As expected, 77.5% of all height evaluations were underestimations. There was a main effect of screen/VR (F(1,15) = 5.80, p < 0.05), with participants having less error in VR. There was a main effect of scale (F(2,30) = 5.403, p < 0.05), with higher error at larger scales. There was no main effect of depth-cues (p = .15) and no interaction effects.

The real-world condition had a mean height error of 20% (SD: 16%).

§ SUBJECTIVE RANKINGS

Participants indicated whether they were better in VR (62%), better in the screen-based virtual environment (19%), or performed equally well in both (19%). Most participants indicated that they performed best in the rich virtual environment (81%), while a few indicated they were best in the 'some depth cues' virtual environment (19%). The 7 blocks (depth-cues (3) x screen/VR (2) + 1 real-world), sorted by mean participant rank, are: rich/VR (1.6), some/VR (3.0), rich/screen (3.1), some/screen (4.5), real-world (4.7), few/VR (4.9), few/screen (6.1). Although we did not formally record participants with audio or video, we tried to take notes when they commented on the helpfulness (or lack thereof) of a virtual environment during or after the study. Most participants commented that the task was very hard, particularly with heights (e.g., "I don't think I am very good at this", "I have no idea how tall that is"). Every single participant commented at least once that some facet of the rich depth cue conditions was helpful (e.g., "The trees help", "I like the building, I can count the floors", "How big is a house, that chart is as big as a house").

§ SUMMARY OF RESULTS

In general, people were quite good at evaluating percentages, but poor at evaluating heights. They were better in VR, perhaps due to binocular depth cues, or due to physical sensations such as tilting one's head back to look up.
Scale was, as expected, important, with larger distances resulting in more error in both measurements. Depth cues did not seem to be influential in either height or percentage evaluations. This first study confirmed that distance underestimation occurs in a visualization context, in this case bar charts, in VR. However, the results suggest that percentage estimations are not nearly as negatively affected as height estimations. Participants were off by 9.7% on average, which is very small considering that most answers were given as a multiple of five, introducing an expected error of 2.5%. Furthermore, VR had less error than the equivalent screen-based virtual environments, meaning that VR might be a better option than a screen-based visualization where one needs to look around, at least in some situations.

Subjectively, people felt they were more accurate at percentages and better in VR. Even though we did not find a significant difference between the depth-cue conditions, people collectively felt that they were more accurate in the rich depth cue conditions. This is interesting because it means that while depth cues might have little impact on task performance, they do seem to affect user comfort and one's own perception of competency.

§ STUDY 2 - MULTIPLE PERSPECTIVES

In this study we were interested in how perspective and motion would change perception of distances. In particular, the fixed position near the base of the bar chart used in Study 1 meant that, for the larger chart scales, users would experience significant perspective issues such as foreshortening.

Beyond providing movement and the ability to take a new perspective, we also made a few changes from Study 1. Since we found no effect of depth cues on task performance, we fixed this factor at our sunset-like 'some depth cues' virtual environment, which we consider a reasonable best practice.
Despite the preference of participants for the park-like rich depth cue environment, we acknowledge that complex, context-rich settings full of trees and buildings may not be universally suitable for all visualizations. The scales we used in this study did not change; however, since we were interested in letting participants take perspectives that were possibly far away, we made our bar charts 1.5× as wide, to be more visible from a distance. This meant that we moved the front-and-center starting position back 1.5 m, such that at the personal scale the participant would still see the whole chart. Using this front location, we then calculated two other fixed positions (15 m to the left and right) and a variable back position such that the view was elevated and far enough away that both colored bars could be seen in their entirety. This back position was different for every bar chart and provided the participant with a perspective that did not require them to look up or down, removing foreshortening effects.

Figure 3. Front platform near a personal scale bar chart. Participants viewed the world from the center of the platform. In some conditions, platforms functioned as elevators.

Unlike the first study, where participants stood on the ground, here participants stood on virtual platforms (Figure 3). These hexagonal platforms featured transparent glass railings, interaction instructions, and lights that would light up when the participant stood on a centrally located pressure plate. (Participants were asked to return to the center between tasks.) The width of the platform (about 2 m) corresponded to the maximum walkable space as calibrated by the HTC Vive. This meant that participants could always walk around on a platform and even lean over the railings safely.
Beyond being functional in terms of movement, the consistently sized platforms could be used for relative sizing, like the objects in the rich depth cues virtual environment.

Participants were always on a platform; we provided the following movement modes.

Front Platform Only: Like the naïve perspective chosen in the first study, the participant was fixed at the front and bottom of the chart. The participant could not teleport to a new location or move the platform up or down. However, unlike the first study, participants could walk around on the platform.

Table 1: Study Details

| | Study 1 | Study 2 | Study 3 |
| --- | --- | --- | --- |
| Task Type | Position-Length [4] | Position-Length [4] | Position-Angle [4] |
| Personal scale: chart height | 3 m | 3 m | Bar/Scatter: 13 m; Pie: 5 m |
| Personal scale: task bar height | < 1.7 m | < 1.7 m | Bar/Scatter: < 5 m; Pie: - |
| House scale: chart height | 30 m | 30 m | Bar/Scatter: 30 m; Pie: 12 m |
| House scale: task bar height | < 17 m | < 17 m | Bar/Scatter: < 12 m; Pie: - |
| Skyscraper scale: chart height | 180 m | 180 m | - |
| Skyscraper scale: task bar height | < 100 m | < 100 m | - |
| Participants: age | M: 35.0, SD: 10.7 | M: 34.5, SD: 11.0 | M: 28.5, SD: 4.3 |
| Participants: N | 16 (4 female) | 10 (2 female) | 10 (3 female) |
| Participants: measurement system | 4 used metric | 5 used metric | 6 used metric |
| Participants: VR experience | 11 tried before, 3 very familiar, 2 experts | 3 no experience, 2 tried before, 5 very familiar | 7 tried before, 2 very familiar, 1 expert |
| Log percent error | M: 2.67, SD: 1.37 | M: 1.96, SD: 1.18 | M: 1.85, SD: 1.67 |
| Height error | M: 32%, SD: 23% | M: 34%, SD: 22% | M: 34%, SD: 27% |
| Angle error | - | - | M: 20%, SD: 16% |
| Simulator sickness (SSQ) | M: 5.1, SD: 4.5 | M: 10.1, SD: 6.2 | M: 6.2, SD: 4.8 |
| Performed better at | percents (88%), heights (0%), both (12%) | percents (90%), heights (0%), both (10%) | percents (50%), heights (10%), both (40%) |

Back Platform Only: The participant started on the variable-location back platform and could walk around, but could not teleport or use the platform elevator.

Teleport Anywhere: Participants started at the front location and could teleport/move the platform they were on by pressing the trigger, aiming a visible arc pointer at a valid location (marked by a repeating blue pattern), and then releasing the trigger. Participants could teleport anywhere in a 100 m square centered on the chart. Participants could not move so close to the chart that they intersected it (the invalid area was marked in red). The elevator platform could be moved up and down using a diegetic interface on the touchpad.

Teleport 4 Platforms: Participants could teleport to the fixed-location front, left, and right elevator platforms, as well as the variable-location back platform. Additional platforms were only shown while teleporting, so they did not block the chart. Platforms were selected by aiming the arc pointer directly at, near, or in the general direction of a platform.

Teleport Front/Back: Participants could teleport as in the Teleport 4 Platforms condition, but could only access the front and back elevator platforms.

Figure 4. Types of movement allowed in Study 2.

§ TASK

Tasks were generated the same way as in Study 1. Participants completed 5 counterbalanced blocks, one for each movement type. Each block was introduced in a training mode where participants could try out the relevant movement interactions. During the study tasks, participants were asked to teleport at least once before giving their answers (if applicable). They were asked to return to the center of the platform between each of the 90 tasks.
§ MEASURES

The measures employed were similar to Study 1, except that the final questionnaire asked participants to rank their performance with the 5 movement types.

§ PARTICIPANTS

Study 2 had 11 participants; one was excluded as a clear outlier (more than two standard deviations from the mean). All participant details can be found in Table 1. The study took one hour, and participants were remunerated with a 25 CAD gift card.

§ RESULTS

We performed two (movement-type (5) x scale (3)) RM-ANOVAs. Overall measurement means and standard deviations are in Table 1, and charts are in Figure 5.

Figure 5. Log percent error (left) and height estimation error (right) for each movement type (top row) and scale (bottom row) used in Study 2. (Note: error bars show standard error.)

§ PERCENTAGE LOG ERROR

There was a main effect of movement-type (F(4,36) = 51.6, p < 0.001). Post-hoc tests showed that Front Platform Only was significantly worse than all other conditions. There was no effect of scale (p = 0.12) and no interaction effects. Overall measure averages can be found in Table 1.

§ HEIGHT ERROR

There was no main effect of movement-type (p = .31), but there was an effect of scale (F(2,18) = 6.7, p < 0.01). Post-hoc tests showed that the largest scale led to significantly higher error than the smallest and medium scales (p < 0.05).

§ SUBJECTIVE RANKINGS

The movement-types, sorted by mean rank, are: continuous-teleport (1.8), teleport-front/back (2.0), teleport-4-platforms (2.1), back-platform-only (4.1), front-platform-only (4.7).

§ SUMMARY OF RESULTS

Adding movement/different perspectives to the viewpoint used in the first study consistently improved percentage estimations.
The log percentage error of these improved conditions is about 1.7, which is very close to the 1.5 log percentage error achieved by Cleveland and McGill's position-length Type-1 task, on which we modelled our study. Furthermore, the effect of scale on percentage estimations was essentially eliminated when participants had the opportunity to view the visualization from far away and see the entire set of bars at once. Thus, it appears that relative distance tasks, like the percentage estimation we used here, are robust to the perceptual effects of VR if one can view the chart from different perspectives.

On the other hand, directly estimating height was still problematic when movement and perspective were added. No condition improved people's height estimations, and larger scales were still more difficult. Thus, movement/perspective was not successful at improving people's ability to estimate heights.

§ STUDY 3 - OTHER CHART TYPES

Having established that, at least for bar charts, estimating relative distances may be robust to the effects of distance compression in VR, we were interested in examining this effect in other chart types. To this end, we ran a third study using bar charts, pie charts, and scatter plots (Figure 6). We used the sunset environment from Study 1 and the continuous teleport movement from Study 2, as it was ranked the highest. Also, because the first two studies had confirmed that people are bad at estimating the size of skyscrapers, we had 9 of the 18 charts in each condition fit all data directly ahead without needing to look up, and the other 9 were about twice as tall.

Figure 6. Chart types used in Study 3.

Bar Chart: As with the bar charts in the previous studies, participants were asked to make percentage estimations between two colored bars and to estimate the height of the smaller colored bar. Colored bars were always consecutive, but their order was randomized.
Scatter Plot: Instead of bars, this chart used spherical markers. The markers were spread out horizontally on the x-axis such that they were all between 0.5 m and 2.5 m apart; however, the colored markers were always consecutive and 1.5 m apart. Participants were asked to make percentage estimations between the colored markers with respect to height (y-axis) and to estimate the height of the shorter colored marker.

Pie Chart: This five-section pie chart had two colored segments and three distinct white sections. Colored segments were always in a random consecutive position and were assigned either dark or light purple randomly. As in Cleveland and McGill's [6] position-angle experiment, participants were asked to make percentage estimations between the colored segments. However, as a pie chart uses angles rather than heights to encode data, participants were instructed to estimate the angle of the smaller segment.

§ TASK

Tasks were generated similarly to Cleveland and McGill's [6] position-angle experiment. This meant that 5 numbers summing to 100 were generated, with percentage differences ranging from 10% to 97%. Participants completed 3 counterbalanced blocks, one for each chart type. Each block was introduced in a training mode where the researcher walked participants through the exact questions used in the task. During the study tasks, participants were asked to teleport, walk around, or use the elevator at least once before giving their answers. They were asked to return to the center of the platform between each of the 54 tasks.

§ MEASURES

The measures employed were similar to Studies 1 and 2, except that the final questionnaire asked participants to rank their performance with each chart type.

Additionally, to compare participants' angle estimations to height estimations, we used a very similar formula to calculate the angle estimation error as a percentage of the actual angle being estimated.
$$
\text{ErrorA} = \frac{\left|\text{angle}_{\text{guessed}} - \text{angle}_{\text{actual}}\right|}{\text{angle}_{\text{actual}}}
$$

§ PARTICIPANTS

Study 3 had 10 participants. All participant details can be found in Table 1. The study took 40 minutes, and participants were remunerated with a 25 CAD gift card.

§ RESULTS

We performed a (chart-type (3) x scale (2)) RM-ANOVA. Overall measurement means and standard deviations can be found in Table 1, and charts are in Figure 7.

§ PERCENTAGE LOG ERROR

There was a main effect of chart-type (F(2,18) = 11.06, p < 0.001). Post-hoc tests showed that pie charts were significantly worse than all other conditions (p < 0.05). There was no effect of scale (p = 0.30) and no interaction effects.

§ HEIGHT AND ANGLE ERROR

There was a main effect of chart-type (F(2,18) = 11.39, p < 0.01). Post-hoc tests showed that angle estimations had significantly less error than height estimations (p < 0.01). There was no effect of scale (p = 0.65) and no interaction effects.

§ SUBJECTIVE RANKINGS

The chart-types, sorted by mean rank, are: bar (1.4), pie (2.0), scatter (2.7).

Figure 7. Log percent error (left) and height estimation error (right) for each chart type in Study 3. (Note: error bars show standard error.)

§ SUMMARY OF RESULTS

When it comes to percentage estimations, our results mirror Cleveland and McGill's [6]: people are better at lengths than at angles, by a factor of 2.1 (1.96 in the original study). This result suggests that, for percentage estimations, other perceptual tasks (e.g., area) should follow the same patterns in VR as in the original work.

However, when looking at angle/height estimations, participants were better at angles.
This could be because one only needs to look at the innermost point of the pie chart to make this estimation (as opposed to looking at both the bottom and top of a large bar chart), because all angles are bounded by 360° (the maximum angle was 100° in our study), or because the angles were relatively small. Future work should investigate this more closely, especially at larger and smaller scales.

§ DISCUSSION

This work suggests that the distance compression problem that occurs in VR does alter one's perception of data visualizations. However, relative distance tasks, like the percentage estimation tasks in these three studies, appear to be robust to this distance compression, particularly when participants can reach a perspective from which they can view the chart from far back.

§ DESIGN GUIDELINES

§ VR IS GOOD FOR VIRTUAL ENVIRONMENTS

In Study 1, VR had less error than the equivalent screen-based virtual environment for both percentage and height estimations. While VR may not be better in all circumstances (e.g., for a flat screen image), when the visualization exists inside a virtual environment, VR can be a good choice for immersive analytics.

§ USE MOVEMENT MODES THAT AVOID NAUSEA

Unity's practitioner guidelines [49] recommend avoiding simulator sickness by designing to avoid vection. Vection occurs in VR when the user's vestibular system receives different signals than their eyes. This occurs most often when the movement modality causes the user to experience motion in VR that their body does not (e.g., mapping movement in VR to a thumbstick on a controller), or when experienced bodily motion has no effect in VR (e.g., not updating the VR environment when the user turns their head).

Although we were not specifically investigating or trying to avoid nausea, we did find some of the techniques we used successful.
We used a fade-in/out teleport mechanism, and platforms that matched the safely walkable space in the real world, in Study 2 and Study 3 to avoid vection. The elevator feature was unfortunately vection-inducing because it was activated with the touchpad instead of an equivalent bodily motion. We combated vection-related nausea by severely limiting the elevator speed and using an easing function to prevent sudden stops. A future implementation of an elevator might have 'floors' that can be accessed with a teleport-like fade effect.
+
+§ ENCODE DATA WITH RELATIVE DISTANCES
+
+One should not have any requirements or expectations that a user can estimate a distance in VR. In all three studies, participants underestimated heights on average by 33% (i.e., estimates were 2/3 of the actual value). Moreover, across all three studies, participants provided extremely low estimates 15% of the time, underestimating by more than 50%. Conversely, participants were much more accurate in the percentage estimation task, off by only 6-10% on average across all three studies.
+
+Therefore, designers should encode data in ways that allow users to compare two distances/lengths rather than expecting them to measure a distance directly. In a simple graph like a bar chart or scatter plot, this can be as simple as providing a labelled axis immediately beside the data. Avoid, say, providing a map scale that requires one to estimate a distance (e.g., 1 inch = 1 mile). We also recommend that where a measurement of distance is necessary, one should provide a tool which can be used for comparison, like a ruler or a measuring tape.
+
+§ CONSIDER MAXIMUM SCALE
+
+It is important to consider the most extreme perspective that the user can navigate into, or rather, the maximum distance from themselves that a user might be asked to evaluate. In Study 1, participants were quite accurate at the personal and house scales, and were predictably inaccurate at the skyscraper scale. 
Therefore, we recommend designing situations in which users are expected to evaluate distances of less than 15 m, though this is just a rough estimate based on the particular distances we used in the study. Future work is needed to provide a better guideline.
+
+§ PROVIDE AN OVERVIEW PERSPECTIVE
+
+In Study 1, larger scales meant higher error in both percentage and height estimations. However, when the ability to view the data from an overview perspective was added in Study 2, this effect disappeared for percentage estimations. If one does use large distances, or data that is far away from the user, providing a way for the user to get a quick overview of the relevant data can remove the negative effects of large distances by removing problems like foreshortening. It should be noted that in Study 2, the teleport-anywhere condition, where one could move their view to an overview perspective, was not significantly better than the back-platform-only condition, which automatically provided an overview perspective. Therefore, it may be enough to simply provide a generated overview of your visualization, or one could provide freeform movement options like teleport-anywhere, depending on your needs.
+
+§ USERS APPRECIATE ADDITIONAL DEPTH CUES
+
+Given that every participant mentioned how helpful additional depth cues were in Study 1, despite no change in task performance, we recommend providing as many depth cues as possible to improve users' comfort and perceived competency with the task. This could mean simple additions like texturing an object, or more elaborate, fully developed, contextually relevant environments with relatively sized objects.
+
+§ CONCLUSION
+
+In this paper we investigated how the phenomenon of distance compression alters perception of visualizations in VR. 
Through three studies that replicate foundational work on the perception of visualizations, we found that estimations of actual lengths, in this case the heights of bars in a bar chart, are negatively impacted by distance compression, but relative distances are not. Furthermore, as with traditional visualizations, people can better estimate relative lengths than relative angles, suggesting that much of the existing perceptual research on visualizations may still apply. Finally, we provide a set of design guidelines for designers wishing to develop VR visualizations that limit the negative effects of distance compression. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/JpX53OXtp1r/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/JpX53OXtp1r/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..8e2232df7f4963536366bf0ebfd3ecd08141823b --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/JpX53OXtp1r/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,381 @@ +# Real-Time Cinematic Tracking of Targets in Dynamic Environments
+
+Category: Research
+
+## Abstract
+
+Tracking a moving target in a 3D dynamic environment in a cinematic way remains a challenging problem, due to the need to simultaneously ensure a low computational cost, a good degree of reactivity to changes, and a high cinematic quality. In this paper, we draw on the idea of Motion-Predictive Control to propose an efficient real-time camera tracking technique which ensures these properties. 
Our approach relies on the predicted motion of a target to create and evaluate a very large number of camera motions using hardware ray casting. Our evaluation of camera motions includes a range of cinematic properties such as distance to target, visibility, collision, smoothness and jitter. Experiments are conducted to demonstrate the benefits of the approach in relation to prior work.
+
+## 1 INTRODUCTION
+
+The automated generation of cinematic camera motions in 3D virtual environments is a key problem for a number of computer graphics applications (computer games, automated generation of virtual tours, virtual storytelling). The first and foremost problem is to identify the intrinsic characteristics of a good camera motion. While film literature provides a thorough and in-depth analysis of what makes a viewpoint effective in terms of framing, angle to target, aesthetic composition, depth-of-field, and lighting, the characterisation of camera motions has been far less addressed. This is partly due to the specifics of real camera rigs (dollies, cranes) that reduce the set of motion possibilities, and to the limited use of long camera sequences in movies (with the exception of steadicam sequence shots). In addition, the characteristics of camera motions in movies are strongly guided by the narrative intentions which need to be conveyed (e.g. rhythm, excitation, or a soothing atmosphere).
+
+In transposing this knowledge to the tracking of targets in virtual environments, one can however derive a number of desirable cinematic characteristics such as visibility (avoiding occlusion of the tracked target, and obviously collisions with the environment), smoothness (avoiding jerkiness in trajectories) and continuity (avoiding large changes in viewing angles and distances to target). In practice, these characteristics are often contradictory (avoiding a sudden occlusion requires a sudden acceleration, or an abrupt change in angle). 
Furthermore, the computational cost of evaluating visibility, collision, continuity and smoothness limits the number of alternative camera motions that can be evaluated.
+
+Existing works have either addressed the problem using global motion planning techniques, typically based on precomputed roadmaps [5, 10, 11], or local planning techniques using ray casting [12] and shadow maps for efficient visibility computations [1, 2, 4]. While global motion planning techniques excel at ensuring visibility given their full prior knowledge of the scene, local planning techniques excel at handling strong dynamic changes in the environment. The main bottleneck of both approaches remains their limited capacity to evaluate expensive cinematic properties such as target visibility along a camera motion or in the local neighborhood of a camera.
+
+Our approach builds on the idea of performing a mixed local+global approach by exploiting a finite time horizon large enough to perform global planning, yet short enough to react in real-time to sudden changes. This sliding window enables the real-time evaluation of thousands of camera motions by exploiting recent hardware ray casting techniques. As such, our approach draws inspiration from motion-predictive control techniques [13]: optimizing over a finite time horizon, applying only the current time slot, and then repeating the process on the following time slots.
+
+A strong hypothesis we make is that the target object is controlled by the user through interactive inputs, hence its motions and actions can be predicted within a short time horizon $H$. Our system comprises two main stages, illustrated in Figure 1. In the first stage, we predict the motion of the target over a given time horizon $H^i$ by using the target's current position (at time $t$) and the user inputs. 
We then select an ideal camera position at time $t + H^i$ and propose to define a camera animation space as a collection of smooth camera animations which link the current camera position (at time $t$) to the ideal camera location (at time $t + H^i$). In the second stage, we evaluate the quality of the camera animations in the animation space by relying on hardware ray casting techniques, and select the best camera animation. In a way similar to motion-predictive control [13], we then apply part of the camera animation and restart the process at a low frequency (4 Hz) or when a change in the user inputs is detected. Finally, to adapt the camera animation space to the scene configuration, we dynamically adapt a scaling factor on the animation space. As a whole, this process generates a continuous and smooth camera animation which enables the real-time tracking of a target object's motions in fully dynamic and complex environments.
+
+Our contributions are:
+
+- the design of a camera animation space as a dedicated space in which to express a range of camera trajectories;
+
+- an efficient evaluation technique using hardware ray casting;
+
+- a motion-predictive control approach that exploits the camera animation space to generate real-time cinematic camera motions.
+
+## 2 RELATED WORK
+
+We narrow the scope of related work to real-time camera planning techniques. For a broader view of camera control techniques in computer graphics, we refer the reader to [3].
+
+## Global camera path planning
+
+Global camera path-planning techniques build on well-known results from robotics such as probabilistic roadmaps, regular cell decompositions, or Delaunay triangulations. All have in common the prior computation of a roadmap as a graph where nodes represent regions of the configuration-free space (points, regular cells or other primitives), and edges represent collision-free links between the nodes. 
Nieuwenhuisen and Overmars exploited probabilistic roadmaps (PRM) to automatically perform queries within the graph structure, linking given starting and ending configurations [10]. Heuristics were required to smooth the camera trajectories and avoid sudden changes in position and camera angles. Later, Oskam et al. [11] proposed a visibility-aware roadmap, by using sphere-sampling of the configuration-free space and precomputing the combinatorial sphere-to-sphere visibility using stochastic ray casting.
+
+Lino et al. [7] exploited spatial partitioning techniques as dynamically evolving volumes around targets. Connectivity between volumes allowed a roadmap to be created dynamically, through which camera paths were computed while accounting for visibility and viewpoint semantics along the path.
+
+More recently, Jovane et al. exploited the 3D environments to create topology-aware camera roadmaps that lower the roadmap complexity (compared to probabilistic roadmaps) and enable the exploitation of different cinematic styles.
+
+Yet, in all cases, the cost of precomputing the roadmap, and the difficulty of dynamically updating it to account for changes in the 3D environment, limit the practical applicability of such techniques in strongly dynamic environments, such as those met in computer games or storytelling applications.
+
+## Local camera planning
+
+The other class of real-time camera planning techniques relies on a local knowledge of the environment. Mostly by sampling and evaluating the local neighborhood around the current camera location, such systems are able to decide where to move at the next iteration, while evaluating classical cinematic properties such as visibility, smoothness and continuity. To address the computational cost of evaluating the visibility of targets, Halper et al. [4] exploited shadow maps to compute potential visible sets, coupled with a hierarchical solver. 
Normand and Christie exploited slanted rendering frustums to compute spatial and temporal visibility for two targets over a small temporal window (10 frames) [2]. Additional criteria were added in order to select the best move to perform at each frame, and to balance between camera smoothness and camera reactivity. Litteneker et al. [8] proposed a local planning technique based on an active contour algorithm.
+
+Burg et al. [1] performed shadow map projections from the targets to the surface of the Toric manifold (a specific manifold space dedicated to camera control [6]). The visibility information provided by the shadow maps was then exploited to move the camera on the surface of the Toric manifold while ensuring secondary visual properties.
+
+Recently, for the specific case of drone cinematography, Nageli et al. [9] built a non-linear model predictive contouring controller to jointly optimize 3D motion paths, the associated velocities, and the control inputs for a drone.
+
+Our approach partly builds on the work of Nageli et al., by borrowing the idea of a receding-horizon process in which motion planning is performed over a large enough time horizon (a few seconds), and the process is repeated at a higher frequency to account for dynamic changes in the environment. Rather than addressing the problem using a non-linear solver, we propose in this paper to exploit the hardware ray casting capacities of recent graphics cards to efficiently detect collisions and occlusions, and to evaluate thousands of camera trajectories for each time slot.
+
+## 3 OVERVIEW
+
+Our system aims at tracking, in real-time, a target object traveling through a 3D animated scene, by generating a series of smooth cinematic-like camera motions.
+
+In the following, we will present the construction of our camera animation space (Section 4), before detailing the evaluation of camera animations in this space (Section 5). 
Finally, we will show how we dynamically adapt and recompute our graph to fit the scene geometry and improve the results (Section 6).
+
+## 4 CAMERA ANIMATION SPACE
+
+We propose the design of a camera animation space as a relative local frame, defined by an initial camera configuration $\mathbf{q}_{\text{start}}$ at time $t$ and a final camera configuration $\mathbf{q}_{\text{goal}}$ at time $t + H^i$ (see Figure 2). This local space defines all the possible camera animations that link $\mathbf{q}_{\text{start}}$ at time $t$ to $\mathbf{q}_{\text{goal}}$ at time $t + H^i$. Our goal is to compute the optimal camera motion within this space considering a number of desired features of the trajectory (smoothness, collision and occlusion avoidance along the camera animation, ...).
+
+| Symbol | Meaning |
+| --- | --- |
+| $H^i$ | Time horizon for iteration $i$ (between times $t_i$ and $t_i + h$) |
+| $B^i(t)$ | Target behavior (predicted position) at time $t \in H^i$ |
+| $\mathbf{V}^i$ | Set of preferred viewpoints at time $t_i + h$ |
+| $\mathbb{Q}^i$ | Camera animation space for horizon $H^i$ |
+| $M^i$ | Transform matrix of the camera animation space, for $H^i$ |
+| $\mathbf{q}_j^i(t)$ | 3D position in camera animation $\mathbf{q}_j^i \in \mathbb{Q}^i$, at time $t \in H^i$ |
+| $\mathbf{q}_{\text{start}}^i$ | Starting camera position: $\mathbf{q}_j^i(t_i) = \mathbf{q}_{\text{start}}^i, \forall (i,j)$ |
+| $\mathbf{q}_{\text{goal}}^i$ | Goal camera position: $\mathbf{q}_j^i(t_i + h) = \mathbf{q}_{\text{goal}}^i \in \mathbf{V}^i, \forall (i,j)$ |
+| $\dot{\mathbf{q}}_j^i(t)$ | Tangent vector of a camera track at time $t$ |
+| $D_j^i(t)$ | The camera view vector at time $t$ |
+| $(\mathbf{x}, \mathbf{y})$ | Angle between two vectors $\mathbf{x}$ and $\mathbf{y}$ |
+
+Table 1: Notations used in the paper
+
+To this end, we propose to follow a 3-step process: (i) anticipate the target object's behavior (i.e. its next positions) within a given time horizon, (ii) choose a goal camera viewpoint from which to view the target at the end of the time horizon, and (iii) given this goal viewpoint and the current one, build and evaluate the space of possible camera animations between them.
+
+### 4.1 Anticipating the target behavior
+
+We here make the strong assumption that we can anticipate, with good approximation, the next positions of the tracked target within a time horizon $H^i$. This is a classical technique in character animation engines to select the best animation to play (e.g. motion matching). We consider that $H^i$ begins at time $t_i$ and has a constant user-defined duration of $h$ seconds. Moreover, we consider that the target behavior will be consistent over the whole horizon $H^i$. In our implementation, we consider the target as a rigid body (e.g. a capsule) with a current speed and acceleration, launch a physical simulation over $H^i$, then store all simulated positions of the rigid body over time. With this anticipation, we account for the scene geometry which might influence future user inputs, e.g. to make the target avoid collisions. We then refer to the anticipated positions as the target behavior, output in the form of a 3D animation curve $B^i(t)$ with $t \in H^i$ (see Figure 3). Note that one may use another technique to anticipate the target behavior; provided it outputs a 3D animation curve $B^i(t)$ over time, it will not change the overall workflow of our camera system.
+
+### 4.2 Selecting a goal viewpoint
+
+We now make a second assumption: that the user defines a set of viewpoints to portray the target object. By default, one might use a list of stereotypical viewpoints from movies such as three-quarter front and back views, side views, or bird's-eye views. These viewpoints are sorted by order of preference (fixed by the user) in a priority queue $\mathbf{V}$. 
Each preferred viewpoint is defined as a 3D position in spherical coordinates $(d, \phi, \theta)$ in the local frame of the target's configuration, where $(\phi, \theta)$ defines the vertical and horizontal viewing angles, and $d$ the viewing distance.
+
+Given this set of viewpoints, we propose, each time we update the target behavior, to select a good viewpoint where the camera should be at the end of the time horizon $H^i$, i.e. at time $t_i + h$. Considering all viewpoints in $\mathbf{V}$, we pop viewpoints by order of priority. We stop as soon as a viewpoint is promising enough, i.e. at time $t_i + h$ neither will the target be occluded from this viewpoint, nor will the camera be in collision with the scene geometry. We then refer to this selected viewpoint as the goal viewpoint $\mathbf{q}_{\text{goal}}$.
+
+![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_2_155_156_1479_479_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_2_155_156_1479_479_0.jpg)
+
+Figure 1: System overview: the orange box represents the CPU part of the system; the green box represents the GPU part of the system
+
+![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_2_298_829_565_561_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_2_298_829_565_561_0.jpg)
+
+Figure 2: Representation of the animation space and its local transform
+
+![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_2_215_1662_590_280_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_2_215_1662_590_280_0.jpg)
+
+Figure 3: Representation of the target's behaviour curve at the ${i}^{th}$ iteration
+
+![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_2_981_746_601_396_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_2_981_746_601_396_0.jpg)
+
+Figure 4: Ray launched from the camera toward the target's sampling area at time $t$
+
+Knowing the current camera viewpoint $\mathbf{q}_{\text{start}}$ (at time $t_i$) and this goal viewpoint $\mathbf{q}_{\text{goal}}$ (at time $t_i + h$) provides a good basis to build camera animations that can follow the target behavior. We now need to categorize the full range of possible camera animations.
+
+### 4.3 Sampling camera animations
+
+Given the target behavior to track, we have computed two key viewpoints, $\mathbf{q}_{\text{start}}$ and $\mathbf{q}_{\text{goal}}$, where the camera should be at the start and end times of horizon $H^i$. We now propose to categorize the space of possible camera animations between these two viewpoints. It is worth noting that this space is infinite, which makes it difficult to explore and evaluate. Our idea is to instead create a compact representation of this space, by sampling a large set of animation curves with a good coverage of the range of animations. We will hereafter denote this stochastic set of camera animations as $\mathbb{Q}^i$, and a sampled camera animation as $\mathbf{q}_j^i$, where $j$ is the sample index.
+
+Two requirements should be considered for this sampled space: (i) sampled camera animations should be as smooth as possible, i.e. with low jerk, and (ii) the sampled animation space should make it possible to enforce continuity between successive horizons. To do so, we propose to encode each sampled camera animation as a cubic spline curve on all 3 camera position parameters, as such curves offer $C^2$ continuity between key-viewpoints. In practice, we make use of Hermite curves. They provide an easy means to sample the space of possible animations, by simply sampling a set of tangent vectors to the spline curve at the start and end positions. Then, $C^1$ continuity between successive Hermite curve portions is commonly enforced by aligning both positions and tangents at connecting positions. We hence propose to rely on the same idea. 
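As a concrete illustration, the tangent-sampled Hermite construction can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: the function names, the uniform-in-ball sampling routine, the seed, and the default radius are our own illustrative choices.

```python
import numpy as np

def hermite(p0, p1, m0, m1, t):
    # Cubic Hermite basis: interpolates p0 -> p1 with end tangents m0, m1.
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

def sample_ball(rng, r):
    # Uniform sample inside a sphere of radius r (random direction,
    # cube-root-distributed radius).
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    return v * r * rng.random() ** (1.0 / 3.0)

def sample_animation_space(q_start, q_goal, n_curves, n_steps, r=5.0, seed=0):
    # Each sampled track is a Hermite curve from q_start to q_goal whose
    # start/end tangents are drawn uniformly in a ball of radius r.
    rng = np.random.default_rng(seed)
    ts = np.linspace(0.0, 1.0, n_steps)
    tracks = np.empty((n_curves, n_steps, 3))
    for j in range(n_curves):
        m0, m1 = sample_ball(rng, r), sample_ball(rng, r)
        for k, t in enumerate(ts):
            tracks[j, k] = hermite(q_start, q_goal, m0, m1, t)
    return tracks
```

By construction, every sampled track passes exactly through both key viewpoints; continuity across successive horizons then follows from reusing the played track's end position and tangent as the next start position and tangent.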
+
+![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_3_148_146_726_360_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_3_148_146_726_360_0.jpg)
+
+Figure 5: Example of part of a visibility-data encoding texture; black = the target is visible from the camera, red = the target is occluded or partially occluded from the camera, blue = the camera is inside the scene geometry
+
+![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_3_183_674_653_406_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_3_183_674_653_406_0.jpg)
+
+Figure 6: Representation of the positioned animation space being asymmetrically collided by the scene geometry
+
+In practice, we propose, for each camera animation, to complement the starting and goal camera positions $\mathbf{q}_{\text{start}}$ and $\mathbf{q}_{\text{goal}}$ with two tangents, i.e. the camera velocities $\dot{\mathbf{q}}_{\text{start}}$ and $\dot{\mathbf{q}}_{\text{goal}}$ (Figure 2). To offer a good categorization of the whole animation space, we use a uniform sampling of these tangents in a sphere of radius $r$ (in our tests, we used $r = 5$). The number of sampled animations is left as a user-defined parameter, though providing a sufficient categorization might require sampling several hundred animations (an evaluation of the results for different values is provided in Section 7.2).
+
+Recomputing such a stochastic graph at each horizon might be costly and/or lack stability over time. Our last proposition is then to precompute a graph of uniformly sampled camera animations in an orthonormal coordinate system (as illustrated in Figure 2). In this system, $\mathbf{q}_{\text{start}}$ and $\mathbf{q}_{\text{goal}}$ have coordinates (0,0,0) and (0,0,1), respectively. Then, for any horizon $H^i$, we apply a proper $4 \times 4$ transform matrix $M^i$ to align the graph onto the computed viewpoints $\mathbf{q}_{\text{start}}^i$ and $\mathbf{q}_{\text{goal}}^i$. 
It is worth noting that in $M^i$ the 3D translation, the 3D rotation and the scaling on the $z$ axis lead this axis to match the vector $(\mathbf{q}_{\text{goal}}^i - \mathbf{q}_{\text{start}}^i)$. Two parameters remain free: the scaling factors for the other two axes ($x$ and $y$). As a first assumption, we could use the same scaling as for $z$. However, we will explain in Section 6 how to choose a better scaling, to take collisions and occlusions into consideration.
+
+## 5 EVALUATING CAMERA ANIMATIONS
+
+In the first stage, we have computed a set of camera animations $\mathbb{Q}^i$ that can portray the target object's behavior within time horizon $H^i$. We now need to select one of these animations as the one to apply to the camera. Our second stage is devoted to evaluating the quality of all animations, and selecting the most promising one, in an efficient way. In the following, we will first detail our evaluation criteria, before focusing on how we perform this evaluation.
+
+![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_3_926_148_722_288_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_3_926_148_722_288_0.jpg)
+
+Figure 7: Projection of the successes and failures of one track on the four axes of resolution $R = 4$. a) Collision and occlusion detection b) Enumeration and projection of the success and fail samples on the axes
+
+### 5.1 Camera animation quality
+
+A proper camera animation to portray the motions of a target object should meet a set of desirable properties, among which the most important are: avoiding collisions with the scene and enforcing visibility of the target object, while offering a smooth series of intermediate viewpoints to the viewer. 
To evaluate how much these properties are enforced along a camera animation $\mathbf{q}_j^i$, we propose to rely on a set of costs $C_k(t) \in [0,1]$:
+
+Occlusions and Collisions To evaluate how much the target object is occluded from a camera position $\mathbf{q}_j^i(t)$, we rely on ray tracing. We first approximate the target object's geometry with a simple abstraction (e.g. a sphere). We second sample a set of points $s \in [0, N]$ on this abstraction, which we position at the object's anticipated position $B^i(t)$. We third launch a ray from the camera to each point $s$ (see Figure 4). We lastly denote $R_s(t)$ the result of this ray launch. In the meantime, we use the same ray to also evaluate whether the camera is in collision (i.e. inside another object of the scene), by setting its value as:
+
+$$
+R_s(t) = \begin{cases} 0 & \text{if visible} \\ 1 & \text{if occluded} \\ 2 & \text{if collided} \end{cases}
+$$
+
+We distinguish a collision from a simple occlusion as follows. By looking at the normal of the hit geometry, we know whether the ray has hit a back face or a front face. When the ray hits a back face, $\mathbf{q}_j^i(t)$ must be inside a geometry, hence we consider it a camera collision. Conversely, when the ray hits a front face, $\mathbf{q}_j^i(t)$ must be outside a geometry. If the ray does not reach $s$, we consider $s$ as occluded; otherwise we consider it as visible.
+
+Knowing $R_s(t)$, we define our occlusion and collision costs as:
+
+$$
+C_o(t) = \frac{1}{N}\sum_{s = 0}^{N} \begin{cases} 1 & \text{if } R_s(t) = 1 \\ 0 & \text{otherwise} \end{cases}
+$$
+
+and
+
+$$
+C_c(t) = \frac{1}{N}\sum_{s = 0}^{N} \begin{cases} 1 & \text{if } R_s(t) = 2 \\ 0 & \text{otherwise} \end{cases}
+$$
+
+In our tests, we used $N = 20$.
+
+Viewpoint variations Providing a smooth series of intermediate camera viewpoints requires regulating the changes between successive viewpoints. We hence propose to evaluate how much the viewpoint changes over time. We split this evaluation into two distinct costs: one on the camera view angle, and one on the distance to the target object. Further, we define them for time steps $\delta t$.
+
+Beforehand, let us introduce the view vector connecting the target object to the camera, on which we will rely. It is computed as:
+
+$$
+D_j^i(t) = B^i(t) - \mathbf{q}_j^i(t)
+$$
+
+From this view vector, we define the view angle change as:
+
+$$
+C_{\Delta_{\phi,\theta}}(t) = \frac{\left( D_j^i(t), D_j^i(t + \delta t) \right)}{\pi}
+$$
+
+In a similar way, we propose to rely on a squared distance variation, defined as:
+
+$$
+\Delta d(t) = \left( \left\| D_j^i(t) \right\| - \left\| D_j^i(t + \delta t) \right\| \right)^2
+$$
+
+We then define a cost on this distance change, which we further normalize as:
+
+$$
+C_{\Delta d}(t) = 1 - E\left( \Delta d(t), \lambda \right)
+$$
+
+where $E$ is an exponential decay function, for which we set the parameter $\lambda$ to $10^{-4}$.
+
+Preferred range of distances One side effect of the above costs is that at large distances, changes in the view angle and distance are less penalized. In turn, this will favor large camera animations. It is worth noting that, in the same way, placing the camera too close to the target object is also not desired in general. 
We should then penalize both behaviors. To do so, we propose to introduce a last cost, aimed at favoring camera animations where the camera remains within a prescribed distance range $\left\lbrack {{d}_{\min },{d}_{\max }}\right\rbrack$ . We formulate this costs as: + +$$ +{C}_{d}\left( t\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\begin{Vmatrix}{{D}_{j}^{i}\left( t\right) }\end{Vmatrix} \notin \left\lbrack {{d}_{\min },{d}_{\max }}\right\rbrack \\ 0 & \text{ otherwise } \end{array}\right. +$$ + +### 5.2 Selecting a camera animation + +Given the previously introduced costs, we now target to compute the cost of an entire animation, and select the most promising one. + +In a first step, we define the total cost of a camera animation as a weighted sum of single-criteria costs integrated over time: + +$$ +C = \mathop{\sum }\limits_{k}{w}_{k} \cdot \left\lbrack {{\int }_{{t}_{i}}^{{t}_{i} + h}{C}_{k}\left( t\right) G\left( {t - {t}_{i},\sigma }\right) {dt}}\right\rbrack +$$ + +where ${w}_{k} \in \left\lbrack {0,1}\right\rbrack$ is the weight of criterion $k.G$ is a Gaussian decay function, where we set standard deviation $\sigma$ to the value of $h/4$ . We also slightly tune the decay to converge toward 0.25 (instead of 0 ). This way, we give a higher importance to the costs of the beginning of the animation, yet considering the end. Indeed, our assumption is that the camera will only play the first part of it (10% in our tests), while the remaining part still brings a longer term information on what could be a good camera path. We compute this total cost for any camera animation ${q}_{j}^{i} \in {\mathbb{Q}}^{i}$ , and refer to it as ${C}_{j}^{i}$ . + +In a second step, we propose to choose the most promising camera animation for time horizon ${H}^{i}$ , denoted as ${\mathbf{q}}^{i}$ , as the one with minimum total cost, i.e. 
: 

$$
{\mathbf{q}}^{i} = \underset{j}{\arg \min }{C}_{j}^{i}
$$

### 5.3 GPU-based evaluation

We have presented our evaluation metric for camera animations. However, some costs may be expensive to compute. In particular, computing occlusion and collision requires tracing many rays (i.e. $N$ rays, for many time steps, for hundreds of camera animations). We now detail how we perform these computations very efficiently.

It is worth noting that many computations can be performed in parallel. The evaluations of different camera animations are independent from each other. Similarly, the evaluations of a cost at different time steps along a given animation are also independent. Hence, we cast our evaluation of single costs into a massively parallel computation on the GPU.

Firstly, note that we only need to send the animation space (in its orthonormal coordinate system) to the GPU once, when starting the system. Then, when we need to reposition the camera animation space for horizon ${H}^{i}$, we simply update the $4 \times 4$ transform matrix ${M}^{i}$. From this data, one can straightforwardly compute any camera position ${\mathbf{q}}_{j}^{i}\left( t\right)$ for any time $t$.

Secondly, for occlusion and collision computations, we rely on the recent RTX technology, which allows performing real-time raytracing requests on the GPU. We run $\frac{{H}^{i}}{\delta t}$ kernels per track, where each kernel launches $N$ rays, one per sample $s$ picked on the target object. The results of these computations are stored in a 2D texture (as shown in figure 5), where the texture coordinates $u$ and $v$ map to one time step $t$ and one animation of index $j$, respectively. Occlusion and collision costs are stored in two different channels.

Thirdly, we rely on a compute shader to compute all other costs and combine them with the occlusion and collision costs. This shader uses one kernel per camera animation. 
It stores the total costs of all animations into a GPU buffer, which is finally sent back to the CPU, where we perform the selection step.

## 6 Dynamic Trajectory Adaptation

Until now, we have considered a simplified configuration, where we evaluate the animation space and select one camera animation for one given time horizon ${H}^{i}$. We now need to consider two other requirements. First, the camera should be animated to track the target object for an unknown duration, larger than $h$; changes in the target behavior may also occur, due to interactive user inputs. Second, for any horizon ${H}^{i}$, some camera animations could be in collision with the scene, or the target could be occluded, which would prevent finding a proper animation to apply. In other words, the space of potential camera animations should be influenced by the surrounding scene geometry. Hereafter, we explain how we account for these requirements.

### 6.1 User inputs and interactive update

We here assume the camera is currently animated along curve ${\mathbf{q}}^{i}$. We then need to compute a new animation, for a new time horizon ${H}^{i + 1}$, in two cases. First, when the target's behavior has changed, which invalidates the currently played animation for future time steps. Second, when the camera animation has reached a specific duration. Similarly to motion-predictive control, we indeed consider the target behavior, as well as the collision and occlusion computations, less reliable after a certain anticipation duration; this in particular allows handling dynamic collisions and occlusions. This update point is specified by the user as a ratio of progress along animation ${\mathbf{q}}^{i}$. In our tests, the horizon duration is $h = 5$ seconds and the update ratio is 0.1. In turn, the new horizon generally starts at ${t}_{i + 1} = {t}_{i} + {0.1h}$, while we set ${\mathbf{q}}_{\text{start }}^{i + 1} = {\mathbf{q}}^{i}\left( {t}_{i + 1}\right)$. 
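As an illustration, this update rule can be sketched in a few lines of Python (names and defaults are hypothetical, except $h = 5$ s and the 0.1 update ratio, which follow our test settings; this is not the system's actual code):

```python
# Illustrative sketch of the interactive update rule of section 6.1.
# A new horizon H^{i+1} is planned either when the target's predicted
# behavior changes, or when a user-specified ratio of the current
# camera animation has been played.

def needs_update(progress_ratio, behavior_changed, update_ratio=0.1):
    """Return True when a new camera animation must be planned."""
    return behavior_changed or progress_ratio >= update_ratio

def next_horizon_start(t_i, h=5.0, update_ratio=0.1):
    """Start of the next horizon: t_{i+1} = t_i + update_ratio * h.
    The new animation then starts at q_start^{i+1} = q^i(t_{i+1})."""
    return t_i + update_ratio * h
```

With these settings, a new animation is typically planned every 0.5 s, unless a change in the user inputs triggers an earlier update.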
+

Knowing that an update is required, we iterate on the overall process explained earlier, for the next horizon ${H}^{i + 1}$. We select a new goal viewpoint (i.e. ${\mathbf{q}}_{\text{goal }}^{i + 1}$), and update the transform matrix (i.e. ${M}^{i + 1}$) to position the camera animation space ${\mathbb{Q}}^{i + 1}$. We then need to evaluate all camera animations in ${\mathbb{Q}}^{i + 1}$. To do so, we compute all the costs presented in section 5. However, this is not enough, as we also need to enforce continuity between animation ${\mathbf{q}}^{i}$ and the animation ${\mathbf{q}}^{i + 1}$ that is to be selected. To do so, we rely on an additional criterion designed to favor a smooth transition between consecutive animations:

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_5_141_148_737_1373_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_5_141_148_737_1373_0.jpg)

Figure 8: Comparison of our system with an adaptive scale, or with a naïve scale, applied on the camera animation space.

**Animation transitions.** This cost penalizes abrupt changes when transitioning between two camera animation curves. Our idea is to penalize a wide angle between the tangent vector to camera animation ${\mathbf{q}}^{i}$ and the tangent vector to animation ${\mathbf{q}}_{j}^{i + 1} \in {\mathbb{Q}}^{i + 1}$, at connection time ${t}_{i + 1}$. We write this cost as:

$$
{C}_{i, i + 1}\left( j\right) = \frac{\left( {\dot{\mathbf{q}}}^{i}\left( {t}_{i + 1}\right) ,{\dot{\mathbf{q}}}_{j}^{i + 1}\left( {t}_{i + 1}\right) \right) }{\pi }
$$

We then rewrite the selection of camera animation ${\mathbf{q}}^{i + 1}$ as:

$$
{\mathbf{q}}^{i + 1} = \underset{j}{\arg \min }\left\lbrack {{C}_{j}^{i + 1} + {w}_{i, i + 1}{C}_{i, i + 1}\left( j\right) }\right\rbrack
$$

where ${w}_{i, i + 1}$ is the relative weight of the transition cost with regard to the other costs.

### 6.2 Adapt to scene geometry

We would also like to adapt our camera animation space to the scene geometry. 
To this aim, on the one hand, we seek a camera animation space which exhibits as few collisions and occlusions as possible. On the other hand, we also seek a space which is not too restricted, i.e. which covers as much as possible of the free space between the target behavior and the scene geometry.

To do so, while we evaluate the quality of camera animations for a horizon ${H}^{i}$, we also take the opportunity to analyse how many collisions and occlusions occur. In turn, this indicates whether the free space is well covered or not. We then dynamically rescale the camera animation space to make it grow or shrink in the next time horizon ${H}^{i + 1}$. This rescaling applies when we update the transform matrix ${M}^{i + 1}$, and on the $x$ and $y$ axes only. It is worth noting that the free space might not be symmetrical around the target behavior (as illustrated in figure 6). Indeed, this free space might for instance be larger (or smaller) on the left of the target than on its right; the same applies to the free space above or below the target. Consequently, our idea is to compute four scale values, one for each of the four directions $\{ - x, + x, - y, + y\}$. For any camera position along a camera animation, we then apply two of them, depending on the signs of the position’s $x$ and $y$ coordinates in the non-transformed animation space.

We proceed in the following way. We firstly leverage the occlusion and collision evaluations to store additional information: we count fails and successes along each axis. We consider a ray launched along a camera animation (i.e. from the camera position at a given time step) as a fail if it is marked as occluded or collided, and as a success otherwise. We secondly store this information in eight arrays: for each half-axis (e.g. $+ x$ or $- x$), we count successes in one array, and fails in another array. 
We further discretize each half-axis using a certain resolution $R$, to output two histograms, of fails and of successes (as illustrated in figure 7). Note that $R$ here defines the scale precision on each axis. We lastly use both histograms to compute the new scale to apply. We compute the indices ${i}_{f}$ and ${i}_{s}$ of the medians of both arrays (fails and successes, respectively). By comparing them, we determine how much we should rescale animations along this half-axis. If ${i}_{s} < {i}_{f}$, we consider that there are too many fails, and multiply the current scale by ${i}_{f}/R$ to shrink animations. Otherwise, we consider that the free space is not covered enough, and apply a passive inflation to the current scale. The aim of this inflation is to help return to a maximum scale value when the surrounding geometry allows for large camera animations.

## 7 IMPLEMENTATION AND RESULTS

### 7.1 Implementation

We implemented our camera system within the Unity3D 2019 game engine. We compute our visibility and occlusion textures through the raytracing shaders provided with Unity's integrated pipeline, and compute our scores for all sampled animations and time steps through Unity Compute Shaders. All our results (detailed in section 7.2) have been processed on a laptop computer with an Intel Core i9-9880H CPU @ 2.30GHz and an NVIDIA Quadro RTX 4000.

### 7.2 Results

We split our evaluation into three parts. We firstly validate our adaptive scale mechanism. We secondly evaluate the robustness of our system, by comparing its performance when using a different number or set of reference camera animations. We thirdly validate the ability of our system, which mixes local and global planning approaches, to outperform a purely local camera planning system. To do so, we compare results obtained with our system and with that of Burg et al. [1], on the same test scenes. 
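For concreteness, the half-axis rescaling rule of section 6.2 can be sketched as follows (illustrative Python, not our actual code; the inflation factor and maximum scale are assumptions, as their values are not specified above):

```python
def median_bin(hist):
    """Index of the bin containing the median sample of a histogram."""
    total = sum(hist)
    running = 0
    for i, count in enumerate(hist):
        running += count
        if 2 * running >= total:
            return i
    return len(hist) - 1

def rescale_half_axis(scale, fails, successes, R, inflation=1.05, s_max=1.0):
    """New scale for one half-axis, given its fail/success histograms
    of resolution R (section 6.2)."""
    i_f = median_bin(fails)       # median index of the fails
    i_s = median_bin(successes)   # median index of the successes
    if i_s < i_f:                 # too many fails: shrink by i_f / R
        return scale * i_f / R
    # free space not covered enough: passive inflation toward the maximum
    return min(scale * inflation, s_max)
```

For instance, when successes concentrate close to the target while fails concentrate farther away, the half-axis shrinks; once rays mostly succeed, the scale inflates back toward its maximum.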
+

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_6_162_137_1469_914_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_6_162_137_1469_914_0.jpg)

Figure 9: Results for multiple runs, each using a randomly generated camera animation space. This space is sampled with a uniform distribution of 2400 sample camera animations (Hermite curves). Each plot shows the mean value over time (blue), with a 95% confidence interval (red). Left: results for 22 runs using different seeds. Right: results for 10 runs using the same seed.

To validate our adaptive scale, we study its impact on the quality of the animation space. For the other evaluations, we compare camera systems with regard to two main criteria: how well the camera maintains visibility on the target object, and how smooth camera motions are. We compute visibility by launching rays onto the target object and calculating the ratio of rays reaching the target. A ratio of 1 (respectively 0) means that the target is fully visible (respectively fully occluded). When relevant, we additionally provide statistics on the duration of partial occlusions. We then compare the quality of camera motions through their time derivatives (speed, acceleration and jerk), which provide a good indication of motion smoothness.

Our comparisons have been performed within 4 different scenes (illustrated in the accompanying video). We validated our system by using (i) a Toy example scene, where the target travels through a maze containing several tight corridors with sharp turns, an open area inside a building, and a ramp. We then performed the comparisons with the technique of Burg et al. [1] by using two static scenes and a dynamic scene, which the target goes through: (ii) a scene with a set of columns and a gate (Columns+Gate), (iii) a scene with a set of small and large spheres (Spheres), and (iv) a fully dynamic scene with a set of randomly falling and rolling boxes, and a randomly sliding wall (Dynamic). 
To provide fair comparisons, in the dynamic scene, we pre-process the random motions of the boxes and of the wall. Likewise, for all tests in a scene, we play a pre-recorded trajectory of the target avatar, but let the camera system run as if the avatar were interactively controlled by a user.

#### 7.2.1 Impact of adaptive scale

We validate our adaptive scale (section 6.2) by comparing the results obtained (i) when we compute and apply the adaptive scale on all 4 half-axes $\left( {-x, + x, - y, + y}\right)$, and (ii) when we simply apply the same scale as for the $z$ axis (which we will call the naive scale technique). We ran our tests on the toy example scene. For each technique, each time we evaluate a new set of camera animations, we output the new scale values and the ratio of fails on each half-axis. We also output and plot the mean cost of the 5 best animations in this set. Results are presented in figure 8.

Figure 8a shows how much our mechanism tightens the animation space (compared to the naive scaling technique) when the avatar is entering corridors, and grows it back to the same scale when the avatar reaches less cluttered areas (e.g. the open interior room, or the outdoor area). As expected, our mechanism adapts the scale of the half-spaces in a non-symmetrical way. As shown by figure 8b, with our adaptive mechanism, the scaled animation space also exhibits fewer fails than with the naive scale technique. Likewise, as shown by figure 8c, it allows finding animations with lower cost most of the time. One exception is between 40 s and 50 s, where the camera configuration differs because the scales differ: in the naive case the camera is high above the character, while in the adaptive case the camera is closer to the ground. The scores are thus not relevant in this interval, because the two configurations are too different to be compared. 
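The visibility metric and the partial-occlusion statistic used in these evaluations can be sketched as follows (illustrative Python, not our actual code):

```python
def visibility_ratio(hits):
    """Fraction of the rays launched at the target that reach it:
    1 means fully visible, 0 fully occluded."""
    return sum(hits) / len(hits) if hits else 0.0

def partial_occlusion_durations(ratios, dt):
    """Durations (in seconds) of maximal runs during which the target
    is not fully visible (visibility ratio < 1), sampled every dt."""
    durations, run = [], 0
    for r in ratios:
        if r < 1.0:
            run += 1
        elif run:
            durations.append(run * dt)
            run = 0
    if run:  # occlusion still ongoing at the end of the recording
        durations.append(run * dt)
    return durations
```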
+

In the next evaluations, the adaptive scale mechanism is always activated.

#### 7.2.2 Robustness

We also study the robustness of our system with regard to the randomly generated camera animation space.

In a first step, we evaluate how performance varies when we run our real-time system multiple times on the toy example scene. We consider two cases: (i) using the same seed for every run (i.e. the same animation space is used), and (ii) using a new seed for every run (i.e. a new animation space is randomly sampled for each run). For each case, we sample a set of 2400 animations. Results are presented in figure 9. As it shows, with this many sampled animations, all runs lead to very similar results, both on visibility enforcement and on camera motion smoothness. Differences are mainly due to variations in the actual framerate of the game engine, hence in the rate at which the system takes new decisions.

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_7_145_141_728_935_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_7_145_141_728_935_0.jpg)

Figure 10: Visibility when varying the number of sampled curves in our camera animation space.

In a second step, we evaluate how the size of the animation space (i.e. the number of sampled animations) impacts performance. We ran our system with 4 different sizes: 2400, 1600, 800, or 100 animations. For each size, we performed 5 runs with random seeds, and combined the results in figures 10, 11 and 12. They show that lowering the size (down to 800 animations) still allows good performance: our camera system is able to find a series of camera animations maintaining enough visibility on the target object, through smooth camera motions. As we expected, with 100 animations our system's performance is poor: it becomes harder to find animations offering sufficient visibility and ensuring smooth camera motions. 
Our intuition behind this result is that as soon as the size becomes too small, the distribution of tangents becomes very sparse, breaking our assumption of a uniform sampling. If the animation space does not sufficiently cover the free space, this prevents the exploration of a wide range of possible camera animations.

#### 7.2.3 Comparison to Burg 2020

We also compare our system, which mixes local and global planning approaches, to a purely local camera planning system. To this aim, we have run our camera system and the local camera planning system of Burg et al. [1] in 3 different scenes: two static scenes (Columns+Gate and Spheres) and a fully dynamic scene (Dynamic). The Columns+Gate scene is the same as in [1], where the avatar moves between some columns and goes through a doorway. In the Spheres scene, the avatar travels through a scene filled with a large set of spheres, which makes it moderately challenging for the camera systems. In the Dynamic scene, the avatar must go through a flat area where a set of boxes are randomly flying, falling and rolling all over the place, and a wall is randomly sliding. This makes it challenging for camera systems to anticipate the scene dynamics and find occlusion-free and collision-free camera paths.

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_7_926_140_726_951_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_7_926_140_726_951_0.jpg)

Figure 11: Camera speed when varying the number of sampled curves in our camera animation space.

In our camera system, we used 2400 animations, the recomputation rate is set to 0.25 s, and the adaptive scaling is on. We present the results of our tests in figures 13, 14, 15, 16, and 17.

We firstly compare the camera systems on their ability to enforce visibility on the target object (figure 13). Our tests show that for moderately challenging scenes, both lead to relatively good results, and few occlusions occur. 
However, for a more challenging scene (Dynamic), our system outperforms Burg et al.'s system. Even if occlusions may occur more often, the degree of occlusion is lower (figure 13b). Moreover, for all 3 scenes, when partial occlusions occur, they are shorter with our system (figure 13c). This is explained by the fact that when no local solution exists, our system can still find a locally occluded path respecting the other constraints and leading to a less occluded area. This demonstrates our system's ability to better anticipate occlusions, especially in dynamic scenes.

We secondly compare the smoothness of camera motions in both camera systems. Figure 14 presents, side by side, the distributions of speed, acceleration and jerk when using each system. We also provide the speed, acceleration and jerk over time in figures 15, 16, and 17. One observation we make is that Burg et al.'s system leads to lower camera speeds, as it restricts itself to simply following the avatar. In our camera system, the camera is allowed to move faster, to bypass the avatar when visibility or another constraint may be poorly satisfied. Yet, our system provides smoother motions (i.e. less jerk). One explanation is that local systems often need to steer the camera out of local minima (e.g. low-visibility areas). A side effect is that this may lead, over successive iterations, to indecision about which direction the camera should take to reach better visibility. In turn, this leads to frequent changes in camera acceleration (hence higher jerk). Conversely, our system has a more global knowledge of the scene, allowing it to more easily find a better path without sacrificing the smoothness of camera motions.

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_8_140_142_734_948_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_8_140_142_734_948_0.jpg)

Figure 12: Camera jerk when varying the number of sampled curves in our camera animation space. 
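The smoothness statistics reported above (speed, acceleration, jerk) are successive time derivatives of the camera path; a minimal finite-difference sketch (illustrative Python, 1D for brevity):

```python
def finite_diff(samples, dt):
    """Forward-difference time derivative of a uniformly sampled signal."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

def smoothness_metrics(positions, dt):
    """Speed, acceleration and jerk of a sampled (here 1D) camera
    trajectory; for 3D paths, difference per component and take norms."""
    speed = finite_diff(positions, dt)
    accel = finite_diff(speed, dt)
    jerk = finite_diff(accel, dt)
    return speed, accel, jerk
```

A constant-speed path yields zero acceleration and jerk, while frequent direction changes show up as jerk spikes.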
+

## 8 DISCUSSION AND CONCLUSION

Our system presents a number of limitations. Despite the ability to evaluate thousands of trajectories, strongly cluttered environments remain challenging. As smoothness is enforced, visibility may be lost in specific cases, and designing a technique that properly balances these properties in such cases remains to be addressed. Also, while the dynamic scale adaptation does improve results by compressing the trajectories in different half-spaces, low scale values prevent the camera from performing larger motions where necessary. Future work could consist in biasing the sampling in the animation space, in order to adapt the space to the typical local topologies of the 3D environment. Despite these limitations, the proposed work improves over existing contributions by providing an efficient camera tracking technique adapted to dynamic 3D environments, which does not require heavy roadmap precomputations.

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_8_917_142_739_1237_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_8_917_142_739_1237_0.jpg)

Figure 13: Comparison between our system and Burg et al. [1], regarding the target object's visibility (a)(b) and, when not fully visible, the duration of partial occlusions (c).

## REFERENCES

[1] L. Burg, C. Lino, and M. Christie. Real-time anticipation of occlusions for automated camera control in toric space. In Computer Graphics Forum, volume 39, pages 523-533. Wiley Online Library, 2020.

[2] M. Christie, J.-M. Normand, and P. Olivier. Occlusion-free camera control for multiple targets. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2012.

[3] M. Christie, P. Olivier, and J.-M. Normand. Camera control in computer graphics. In Computer Graphics Forum, volume 27, pages 2197-2218. Wiley Online Library, 2008.

[4] N. Halper, R. Helbing, and T. Strothotte. A camera engine for computer games: Managing the trade-off between constraint satisfaction and frame coherence. 
In Computer Graphics Forum, volume 20, pages 174-183. Wiley Online Library, 2001.

[5] A. Jovane, A. Louarn, and M. Christie. Topology-aware camera control for real-time applications. In Motion, Interaction and Games, pages 1-10. 2020.

[6] C. Lino and M. Christie. Intuitive and efficient camera control with the toric space. ACM Transactions on Graphics (TOG), 34(4):1-12, 2015.

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_9_143_142_734_987_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_9_143_142_734_987_0.jpg)

Figure 14: Comparison between our system and Burg et al. [1], regarding the camera speed (a), acceleration (b) and jerk (c) distributions.

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_9_146_1266_729_541_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_9_146_1266_729_541_0.jpg)

Figure 15: Speed over time, for our camera system (blue) and that of Burg et al. [1] (red).

[7] C. Lino, M. Christie, F. Lamarche, G. Schofield, and P. Olivier. A real-time cinematography system for interactive 3D environments. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 139-148. Eurographics Association, 2010.

[8] A. Litteneker and D. Terzopoulos. Virtual cinematography using optimization and temporal smoothing. In Proceedings of the Tenth International Conference on Motion in Games, pages 1-6, 2017.

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_9_925_146_728_540_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_9_925_146_728_540_0.jpg)

Figure 16: Acceleration over time, for our camera system (blue) and that of Burg et al. [1] (red).

![01963e8d-aafe-7e5e-9076-9b2b9c4b18df_9_925_791_728_536_0.jpg](images/01963e8d-aafe-7e5e-9076-9b2b9c4b18df_9_925_791_728_536_0.jpg)

Figure 17: Jerk over time, for our camera system (blue) and that of Burg et al. [1] (red).

[9] T. Nägeli, L. Meier, A. Domahidi, J. Alonso-Mora, and O. Hilliges. Real-time planning for automated multi-view drone cinematography. 
ACM Transactions on Graphics (TOG), 36(4):1-10, 2017.

[10] D. Nieuwenhuisen and M. H. Overmars. Motion planning for camera movements. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA'04. 2004, volume 4, pages 3870-3876. IEEE, 2004.

[11] T. Oskam, R. W. Sumner, N. Thuerey, and M. Gross. Visibility transition planning for dynamic camera control. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 55-65, 2009.

[12] R. Ranon and T. Urli. Improving the efficiency of viewpoint composition. IEEE Transactions on Visualization and Computer Graphics, 20(5):795-807, 2014.

[13] P. O. Scokaert and D. Q. Mayne. Min-max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic Control, 43(8):1136-1142, 1998.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/JpX53OXtp1r/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/JpX53OXtp1r/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f670f9c45abe8f7e6711c913fff70e3489b287a4
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/JpX53OXtp1r/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,331 @@
§ REAL-TIME CINEMATIC TRACKING OF TARGETS IN DYNAMIC ENVIRONMENTS

Category: Research

§ ABSTRACT

Tracking a moving target in a 3D dynamic environment in a cinematic way remains a challenging problem, due to the need to simultaneously ensure a low computational cost, a good degree of reactivity to changes, and a high cinematic quality. 
In this paper, we draw on the idea of Motion-Predictive Control to propose an efficient real-time camera tracking technique which ensures these properties. Our approach relies on the predicted motion of a target to create and evaluate a very large number of camera motions using hardware ray casting. Our evaluation of camera motions covers a range of cinematic properties such as distance to target, visibility, collision, smoothness and jitter. Experiments are conducted to demonstrate the benefits of the approach with respect to prior work.

§ 1 INTRODUCTION

The automated generation of cinematic camera motions in 3D virtual environments is a key problem for a number of computer graphics applications (computer games, automated generation of virtual tours, virtual storytelling). The first and foremost problem is to identify the intrinsic characteristics of a good camera motion. While the film literature provides a thorough and in-depth analysis of what makes a viewpoint qualitative in terms of framing, angle to target, aesthetic composition, depth of field, and lighting, the characterisation of camera motions has been far less addressed. This is due to the specifics of real camera rigs (dollies, cranes), which reduce the set of motion possibilities, and to the limited use of long camera sequences in movies (with the exception of steadicam sequence shots). In addition, the characteristics of camera motions in movies are strongly guided by the narrative intentions which need to be conveyed (e.g. rhythm, excitement, or a soothing atmosphere).

In transposing this knowledge to the tracking of targets in virtual environments, one can however derive a number of desirable cinematic characteristics, such as visibility (avoiding occlusion of the tracked target, and obviously collisions with the environment), smoothness (avoiding jerkiness in trajectories) and continuity (avoiding large changes in viewing angles and distances to the target). 
In practice, these characteristics are often contradictory (avoiding a sudden occlusion requires a sudden acceleration, or an abrupt change in angle). Furthermore, the computational cost of evaluating visibility, collision, continuity and smoothness limits the possibility of evaluating many alternative camera motions.

Existing works have either addressed the problem using global motion planning techniques, typically based on precomputed roadmaps $\left\lbrack {5,{10},{11}}\right\rbrack$, or local planning techniques using ray casting [12] and shadow maps for efficient visibility computations $\left\lbrack {1,2,4}\right\rbrack$. While global motion planning techniques excel at ensuring visibility, given their full prior knowledge of the scene, local planning techniques excel at handling strong dynamic changes in the environment. The main bottleneck of both approaches remains their limited capacity to evaluate expensive cinematic properties, such as target visibility along a camera motion or in the local neighborhood of a camera.

Our approach builds on the idea of performing a mixed local+global approach by exploiting a finite time horizon large enough to perform global planning, yet efficient enough to react in real time to sudden changes. This sliding window enables the real-time evaluation of thousands of camera motions by exploiting recent hardware raycasting techniques. As such, our approach draws inspiration from motion-predictive control techniques [13]: it optimizes over a finite time horizon, implements only the current time slot, and then repeats the process on the following time slots.

A strong hypothesis we make is that the target object is controlled by the user through interactive inputs, hence its motions and actions can be predicted within a short time horizon $H$. Our system comprises 2 main stages, illustrated in Figure 1. 
In the first stage, we predict the motion of the target over a given time horizon ${H}^{i}$ by using the target’s current position (at time $t$) and the user inputs. We then select an ideal camera position at time $t + {H}^{i}$, and define a camera animation space as a collection of smooth camera animations which link the current camera position (at time $t$) to the ideal camera location (at time $t + {H}^{i}$). In the second stage, we evaluate the quality of the camera animations in the animation space by relying on hardware raycasting techniques, and select the best camera animation. Similarly to motion-predictive control [13], we then apply part of the camera animation and restart the process at a low frequency (4 Hz), or whenever a change in the user inputs is detected. Finally, to adapt the camera animation space to the scene configuration, we dynamically adapt a scaling factor on the animation space. As a whole, this process generates a continuous and smooth camera animation which enables the real-time tracking of a target object’s motions in fully dynamic and complex environments.

Our contributions are:

 * the design of a camera animation space as a dedicated space in which to express a range of camera trajectories;

 * an efficient evaluation technique using hardware ray casting;

 * a motion-predictive control approach that exploits the camera animation space to generate real-time cinematic camera motions.

§ 2 RELATED WORK

We narrow the scope of related work to real-time camera planning techniques. For a broader view of camera control techniques in computer graphics, we refer the reader to [3].

§ GLOBAL CAMERA PATH PLANNING

Global camera path-planning techniques build on well-known results from robotics, such as probabilistic roadmaps, regular cell decompositions or Delaunay triangulations. 
All have in common the prior computation of a roadmap, i.e. a graph where nodes represent regions of the configuration-free space (points, regular cells or other primitives), and edges represent collision-free links between the nodes. Nieuwenhuisen and Overmars exploited probabilistic roadmaps (PRM) to automatically perform queries within the graph structure, linking given starting and ending configurations [10]. Heuristics were required to smooth the camera trajectories and avoid sudden changes in position and camera angles. Later, Oskam et al. [11] proposed a visibility-aware roadmap, by using a sphere sampling of the configuration-free space and precomputing the combinatorial sphere-to-sphere visibility using stochastic ray casting.

Lino et al. [7] exploited spatial partitioning techniques as dynamically evolving volumes around targets. The connectivity between volumes allowed dynamically creating a roadmap, through which camera paths were computed while accounting for visibility and viewpoint semantics along the path.

More recently, Jovane et al. [5] exploited the 3D environments to create topology-aware camera roadmaps that lower the roadmap complexity (compared to probabilistic roadmaps) and enable the exploitation of different cinematic styles.

Yet, in all cases, the cost of precomputing the roadmap, and the difficulty of dynamically updating it to account for changes in the 3D environment, limit the practical applicability of such techniques in strongly dynamic environments, such as those met in computer games or storytelling applications.

§ LOCAL CAMERA PLANNING

The other class of real-time camera planning techniques relies on a local knowledge of the environment. Mostly by sampling and evaluating the local neighborhood around the current camera location, such systems are able to decide where to move at the next iteration, while evaluating classical cinematic properties such as visibility, smoothness and continuity. 
To address the computational cost of evaluating the visibility of targets, Halper et al. [4] exploited shadow maps to compute potential visible sets, coupled with a hierarchical solver. Normand and Christie exploited slanted rendering frustums to compose spatial and temporal visibility for two targets over a small temporal window (10 frames) [2]. Additional criteria were added in order to select the best move to perform at each frame, and to balance between camera smoothness and camera reactivity. Litteneker et al. [8] proposed a local planning technique based on an active contour algorithm.

Burg et al. [1] performed shadow map projections from the targets to the surface of the Toric manifold (a specific manifold space dedicated to camera control [6]). The visibility information provided by the shadow maps was then exploited to move the camera on the surface of the Toric manifold while ensuring secondary visual properties.

Recently, for the specific case of drone cinematography, Nageli et al. [9] built a non-linear model predictive contouring controller to jointly optimize 3D motion paths, the associated velocities, and control inputs for a drone.

Our approach partly builds on the work of Nageli et al., by borrowing the idea of a receding-horizon process in which motion planning is performed over a large enough time horizon (a few seconds), and the process is repeated at a higher frequency to account for dynamic changes in the environment. Rather than addressing the problem using a non-linear solver, we propose in our paper to exploit the hardware ray-casting capacities of recent graphics cards to efficiently detect collisions and occlusions and evaluate thousands of camera trajectories for each time slot.

§ 3 OVERVIEW

Our system aims at tracking in real-time a target object traveling through a 3D animated scene, by generating a series of smooth cinematic-like camera motions.
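The receding-horizon process outlined in the introduction can be sketched as follows. This is a hedged, minimal sketch: `predict_target`, `select_goal`, `build_space` and `evaluate` are hypothetical stand-ins for the stages detailed in Sections 4 and 5, and the 10% update ratio matches the value reported in Section 6.1.

```python
import numpy as np

def receding_horizon_step(t, h, q_current, predict_target, select_goal,
                          build_space, evaluate):
    """One planning iteration: predict the target over [t, t+h], pick a goal
    viewpoint at t+h, build candidate animations, keep the cheapest one."""
    behavior = predict_target(t, h)               # predicted curve B^i(t)
    q_goal = select_goal(behavior, t + h)         # ideal viewpoint at t + h
    animations = build_space(q_current, q_goal)   # candidate camera tracks
    return min(animations, key=lambda anim: evaluate(anim, behavior))

def track(duration, h=5.0, update_ratio=0.1, **stages):
    """Play only the first fraction of each selected animation, then replan
    (the paper also replans when the user inputs change)."""
    t, q = 0.0, np.zeros(3)
    while t < duration:
        best = receding_horizon_step(t, h, q, **stages)
        t += update_ratio * h                     # advance by 10% of horizon
        q = np.asarray(best(t))                   # camera state at cut-off
    return q
```

The stand-in stages can be any callables with the shapes shown above; in the paper, evaluation runs on the GPU (Section 5.3).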
In the following, we will present the construction of our camera animation space (Section 4), before detailing the evaluation of camera animations in this space (Section 5). Finally, we will show how we dynamically adapt and recompute our graph to fit the scene geometry and improve the results (Section 6).

§ 4 CAMERA ANIMATION SPACE

We propose the design of a Camera Animation Space as a relative local frame, defined by an initial camera configuration ${\mathbf{q}}_{\text{ start }}$ at time $t$ and a final camera configuration ${\mathbf{q}}_{\text{ goal }}$ at time $t + {H}^{i}$ (see Figure 2). This local space defines all the possible camera animations that link ${\mathbf{q}}_{\text{ start }}$ at time $t$ to ${\mathbf{q}}_{\text{ goal }}$ at time $t + {H}^{i}$ . Our goal is to compute the optimal camera motion within this space considering a number of desired features on the trajectory (smoothness, collision and occlusion avoidance along the camera animation, ...).

| Symbol | Meaning |
| --- | --- |
| ${H}^{i}$ | Time horizon for iteration $i$ (between times ${t}_{i}$ and ${t}_{i} + h$) |
| ${B}^{i}\left( t\right)$ | Target behavior (predicted position) at time $t \in {H}^{i}$ |
| ${\mathbf{V}}^{i}$ | Set of preferred viewpoints at time ${t}_{i} + h$ |
| ${\mathbb{Q}}^{i}$ | Camera animation space for horizon ${H}^{i}$ |
| ${M}^{i}$ | Transform matrix of the camera animation space, for ${H}^{i}$ |
| ${\mathbf{q}}_{j}^{i}\left( t\right)$ | 3D position in camera animation ${\mathbf{q}}_{j}^{i} \in {\mathbb{Q}}^{i}$ , at time $t \in {H}^{i}$ |
| ${\mathbf{q}}_{\text{ start }}^{i}$ | Starting camera position: ${\mathbf{q}}_{j}^{i}\left( {t}_{i}\right) = {\mathbf{q}}_{\text{ start }}^{i},\forall \left( {i,j}\right)$ |
| ${\mathbf{q}}_{\text{ goal }}^{i}$ | Goal camera position: ${\mathbf{q}}_{j}^{i}\left( {{t}_{i} + h}\right) = {\mathbf{q}}_{\text{ goal }}^{i} \in {\mathbf{V}}^{i},\forall \left( {i,j}\right)$ |
| ${\dot{\mathbf{q}}}_{j}^{i}\left( t\right)$ | Tangent vector of a camera track at time $t$ |
| ${D}_{j}^{i}\left( t\right)$ | The camera view vector at time $t$ |
| $\left( {\mathbf{x},\mathbf{y}}\right)$ | Angle between two vectors $\mathbf{x}$ and $\mathbf{y}$ |

Table 1: Notations used in the paper

To this end, we propose to follow a 3-step process: (i) anticipate the target object's behavior (i.e. its next positions) within a given time horizon, (ii) choose a goal camera viewpoint from which to view the target at the end of the time horizon, and (iii) given this goal viewpoint and the current one, build and evaluate the space of possible camera animations between them.

§ 4.1 ANTICIPATING THE TARGET BEHAVIOR

We here make the strong assumption that we can anticipate, with good approximation, the next positions of the tracked target within a time horizon ${H}^{i}$ . This is classical in character animation engines to select the best animation to play (e.g. motion matching). We consider that ${H}^{i}$ begins at time ${t}_{i}$ and has a constant user-defined duration of $h$ seconds. Moreover, we consider that the target behavior will be consistent over the whole horizon ${H}^{i}$ . In our implementation, we consider the target as a rigid body (e.g. a capsule) with a current speed and acceleration, launch a physical simulation over ${H}^{i}$ , then store all simulated positions of the rigid body over time. With this anticipation, we account for the scene geometry, which might influence future user inputs, e.g. to make the target avoid collisions. We then refer to the anticipated positions as the target behavior, output in the form of a 3D animation curve ${B}^{i}\left( t\right)$ with $t \in {H}^{i}$ (see Figure 3).
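As a concrete illustration, the rollout can be sketched as follows under a simplifying assumption: a free-flying target with constant acceleration and no scene collisions (the paper's implementation runs an actual physics simulation over ${H}^{i}$, which does account for the scene geometry). The function names are ours.

```python
import numpy as np

def anticipate_behavior(p0, v0, a0, h=5.0, dt=0.1):
    """Predict target positions over the horizon [0, h], sampled every dt,
    using closed-form constant-acceleration kinematics:
    p(t) = p0 + v0*t + a0*t^2/2."""
    ts = np.arange(0.0, h + dt, dt)
    ps = p0 + np.outer(ts, v0) + 0.5 * np.outer(ts**2, a0)
    return ts, ps

def behavior_curve(ts, ps):
    """Turn the samples into a continuous curve B^i(t) by piecewise-linear
    interpolation of each coordinate."""
    def B(t):
        return np.array([np.interp(t, ts, ps[:, k]) for k in range(3)])
    return B
```

Any other predictor (e.g. motion matching) can be substituted, as long as it yields such a curve.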
Note that one may use another technique to anticipate the target behavior: provided it can output a 3D animation curve ${B}^{i}\left( t\right)$ over time, it will not change the overall workflow of our camera system.

§ 4.2 SELECTING A GOAL VIEWPOINT

We now make the second assumption that the user defines a set of viewpoints to portray the target object. By default, one might use a list of stereotypical viewpoints in movies such as three-quarter front and back views, side views, or bird's-eye views. These viewpoints are sorted by order of preference (fixed by the user) in a priority queue $\mathbf{V}$ . Each preferred viewpoint is defined as a 3D position in spherical coordinates $\left( {d,\phi ,\theta }\right)$ , in the local frame of the target's configuration, where $\left( {\phi ,\theta }\right)$ defines the vertical and horizontal viewing angles, and $d$ the viewing distance.

Given this set of viewpoints we propose, each time we update the target behavior, to select a good viewpoint where the camera should be at the end of the time horizon ${H}^{i}$ , i.e. at time ${t}_{i} + h$ . Considering all viewpoints are in $\mathbf{V}$ , we pop viewpoints by order of priority. We propose to stop as soon as a viewpoint is promising enough, i.e. at time ${t}_{i} + h$ neither will the target be occluded from this viewpoint, nor will the camera be in collision with the scene geometry. We then refer to this selected viewpoint as the goal viewpoint ${\mathbf{q}}_{\text{ goal }}$ .
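A minimal sketch of this selection is given below. The spherical-to-world frame convention and the fallback when no viewpoint passes are our assumptions; the `promising` predicate stands in for the occlusion and collision ray tests of Section 5.

```python
import numpy as np

def spherical_to_world(d, phi, theta, target_pos, target_forward):
    """Place a preferred viewpoint (d, phi, theta), expressed in the target's
    local frame, into world space. One possible convention: theta rotates the
    target's forward axis horizontally, phi lifts the view vertically."""
    c, s = np.cos(theta), np.sin(theta)
    flat = np.array([c * target_forward[0] - s * target_forward[2], 0.0,
                     s * target_forward[0] + c * target_forward[2]])
    offset = d * np.array([flat[0] * np.cos(phi), np.sin(phi),
                           flat[2] * np.cos(phi)])
    return target_pos + offset

def select_goal_viewpoint(preferred, target_pos, target_forward, promising):
    """Pop viewpoints by order of preference and keep the first promising
    one (neither occluded nor in collision at time t_i + h)."""
    for d, phi, theta in preferred:  # already sorted by user priority
        q = spherical_to_world(d, phi, theta, target_pos, target_forward)
        if promising(q):
            return q
    # fallback (our assumption): keep the highest-priority viewpoint
    d, phi, theta = preferred[0]
    return spherical_to_world(d, phi, theta, target_pos, target_forward)
```

The target position here would be the predicted one, ${B}^{i}\left( {t}_{i} + h\right)$.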
Figure 1: System overview: the orange box represents the CPU part of the system; the green box represents the GPU part of the system

Figure 2: Representation of the animation space and its local transform

Figure 3: Representation of the target's behavior curve at the ${i}^{th}$ iteration

Figure 4: Ray launched from the camera toward the target's sampling area at time $t$

Knowing the current camera viewpoint ${\mathbf{q}}_{\text{ start }}$ (at time ${t}_{i}$ ) and this goal viewpoint ${\mathbf{q}}_{\text{ goal }}$ (at time ${t}_{i} + h$ ) provides a good basis to build camera animations that can follow the target behavior. We now need to categorize the full range of possible camera animations.

§ 4.3 SAMPLING CAMERA ANIMATIONS

Given the target behavior to track, we have computed two key viewpoints ${\mathbf{q}}_{\text{ start }}$ and ${\mathbf{q}}_{\text{ goal }}$ , where the camera should be at the start and end times of horizon ${H}^{i}$ . We now propose to categorize the space of possible camera animations between these two viewpoints. It is worth noting that this space is infinite, which makes it difficult to explore and evaluate. Our idea is to instead create a compact representation of this space, by sampling a large set of animation curves with a good coverage of the range of animations. We will hereafter denote this stochastic set of camera animations as ${\mathbb{Q}}^{i}$ , and a sampled camera animation as ${\mathbf{q}}_{j}^{i}$ , where $j$ is the sample index.

Two requirements should be considered for this sampled space: (i) sampled camera animations should be as smooth as possible, i.e. with low jerk, and (ii) the sampled animation space should make it possible to enforce continuity between successive horizons.
To do so, we propose to encode each sampled camera animation as a cubic spline curve on all 3 camera position parameters, as such curves offer ${C}^{2}$ continuity between key viewpoints. In practice, we make use of Hermite curves. They provide an easy means to sample the space of possible animations, by simply sampling a set of tangent vectors to the spline curve at the start and end positions. Then, ${C}^{1}$ continuity between successive Hermite curve portions is commonly enforced by aligning both positions and tangents at connecting positions. We hence propose to rely on the same idea.

Figure 5: Example of a part of a visibility data encoding texture; black = the target is visible from the camera, red = the target is occluded or partially occluded from the camera, blue = the camera is inside the scene geometry

Figure 6: Representation of the positioned animation space colliding asymmetrically with the scene geometry

In practice, we propose, for each camera animation, to complement the starting and goal camera positions ${\mathbf{q}}_{\text{ start }}$ and ${\mathbf{q}}_{\text{ goal }}$ with two tangents, i.e. the camera velocities ${\dot{\mathbf{q}}}_{\text{ start }}$ and ${\dot{\mathbf{q}}}_{\text{ goal }}$ (figure 2). To offer a good categorization of the whole animation space, we use a uniform sampling of these tangents in a sphere of radius $r$ (in our tests, we used $r = 5$ ). The number of sampled animations is left as a user-defined parameter; however, providing a sufficient categorization might require sampling several hundred animations (an evaluation of results for different values is provided in section 7.2).

Recomputing such a stochastic graph might be costly and/or lack stability over time. Our last proposition is then to precompute a graph of uniformly sampled camera animations, in an orthonormal coordinate system (as illustrated in figure 2).
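A minimal sketch of this precomputation, under our assumptions: tracks are sampled once in a canonical orthonormal frame (start at the origin, goal one unit along $z$), tangents are drawn uniformly in a sphere of radius $r = 5$, and a hypothetical 4x4 homogeneous matrix `M` later repositions a canonical track in world space. Function names are ours.

```python
import numpy as np

def hermite(p0, p1, m0, m1, u):
    """Cubic Hermite segment between p0 and p1 with tangents m0, m1, u in
    [0, 1]; C^1 continuity across segments comes from matching endpoints
    and tangents at the joints."""
    h00, h10 = 2*u**3 - 3*u**2 + 1, u**3 - 2*u**2 + u
    h01, h11 = -2*u**3 + 3*u**2, u**3 - u**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

def uniform_in_sphere(r, rng):
    """Rejection-sample a tangent uniformly inside a sphere of radius r."""
    while True:
        v = rng.uniform(-r, r, size=3)
        if v @ v <= r * r:
            return v

def sample_animation_space(n, r=5.0, seed=0):
    """Precompute n canonical camera tracks from (0,0,0) to (0,0,1), each a
    Hermite curve with randomly sampled start/end tangents."""
    rng = np.random.default_rng(seed)
    p0, p1 = np.zeros(3), np.array([0.0, 0.0, 1.0])
    return [(lambda u, m0=uniform_in_sphere(r, rng),
                       m1=uniform_in_sphere(r, rng):
             hermite(p0, p1, m0, m1, u)) for _ in range(n)]

def to_world(track, M):
    """Reposition a canonical track with a 4x4 homogeneous transform M."""
    return lambda u: (M @ np.append(track(u), 1.0))[:3]
```

Because the canonical set is fixed, only the repositioning transform needs updating at each horizon.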
In this system, ${\mathbf{q}}_{\text{ start }}$ and ${\mathbf{q}}_{\text{ goal }}$ have coordinates $(0,0,0)$ and $(0,0,1)$, respectively. Then, for any horizon ${H}^{i}$ , we apply a proper $4 \times 4$ transform matrix ${M}^{i}$ to align the graph onto the computed viewpoints ${\mathbf{q}}_{\text{ start }}^{i}$ and ${\mathbf{q}}_{\text{ goal }}^{i}$ . It is worth noting that ${M}^{i}$ combines a 3D translation, a 3D rotation and a scaling along the $z$ axis, which lead this axis to match the vector $\left( {{\mathbf{q}}_{\text{ goal }}^{i} - {\mathbf{q}}_{\text{ start }}^{i}}\right)$ . Two parameters remain free: the scalings along the other two axes ( $x$ and $y$ ). As a first assumption we could use the same scaling as for $z$ . However, we will further explain in section 6 how to choose a better scaling, to take collisions and occlusions into consideration.

§ 5 EVALUATING CAMERA ANIMATIONS

In the first stage, we have computed a set of camera animations ${\mathbb{Q}}^{i}$ that can portray the target object's behavior within time horizon ${H}^{i}$ . We now need to select one of these animations as the one to apply to the camera. Our second stage is devoted to evaluating the quality of all animations, and selecting the most promising one, in an efficient way. In the following, we will first detail our evaluation criteria, before focusing on how we perform this evaluation.

Figure 7: Projection of the successes and fails of one track on the four axes of resolution $R = 4$ . a) Collision and occlusion detection b) Enumeration and projection of the success and fail samples on the axes

§ 5.1 CAMERA ANIMATION QUALITY

A proper camera animation to portray the motions of a target object should meet a set of desirable properties, among which the most important are: avoid collisions with the scene and enforce visibility on the target object, while offering a smooth series of intermediate viewpoints to the viewer.
To evaluate how much these properties are enforced along a camera animation ${\mathbf{q}}_{j}^{i}$ , we propose to rely on a set of costs ${C}_{k}\left( t\right) \in \left\lbrack {0,1}\right\rbrack$ :

Occlusions and Collisions To evaluate how much the target object is occluded from a camera position ${q}_{j}^{i}\left( t\right)$ , we rely on ray-tracing. We firstly approximate the target object's geometry with a simple abstraction (e.g. a sphere). We secondly sample a set of points $s \in \left\lbrack {0,N}\right\rbrack$ on this abstraction, which we position at the object's anticipated position ${B}^{i}\left( t\right)$ . We thirdly launch a ray from the camera to each point $s$ (see figure 4). We lastly note ${R}_{s}\left( t\right)$ the result of this ray launch. At the same time, we use the same ray to also evaluate whether the camera is in collision (i.e. inside another object of the scene), by setting its value as:

$$
{R}_{s}\left( t\right) = \left\{ \begin{array}{ll} 0 & \text{ if Visible } \\ 1 & \text{ if Occluded } \\ 2 & \text{ if Collided } \end{array}\right.
$$

We distinguish a collision from a simple occlusion as follows. By looking at the normal of the hit geometry, we know whether the ray has hit a back face or a front face. When the ray hits a back face, ${q}_{j}^{i}\left( t\right)$ must be inside a geometry, hence we consider it a camera collision. Conversely, when the ray hits a front face, ${q}_{j}^{i}\left( t\right)$ must be outside a geometry. If the ray does not reach $s$ , we consider $s$ as occluded; otherwise we consider it as visible.

Knowing ${R}_{s}\left( t\right)$ , we define our occlusion and collision costs as:

$$
{C}_{o}\left( t\right) = \frac{1}{N}\mathop{\sum }\limits_{{s = 0}}^{N}\left\{ \begin{array}{ll} 1 & \text{ if }{R}_{s}\left( t\right) = 1 \\ 0 & \text{ otherwise } \end{array}\right.
$$

and

$$
{C}_{c}\left( t\right) = \frac{1}{N}\mathop{\sum }\limits_{{s = 0}}^{N}\left\{ \begin{array}{ll} 1 & \text{ if }{R}_{s}\left( t\right) = 2 \\ 0 & \text{ otherwise } \end{array}\right.
$$

In our tests, we used $N = {20}$ .

Viewpoint variations Providing a smooth series of intermediate camera viewpoints requires regulating changes between successive viewpoints. We hence propose to evaluate how much the viewpoint changes over time. We split this evaluation into two distinct costs: one on the camera view angle, and one on the distance to the target object. Further, we define them for time steps ${\delta t}$ .

Beforehand, let us introduce the view vector connecting the target object to the camera, on which we will rely. It is computed as:

$$
{D}_{j}^{i}\left( t\right) = {B}^{i}\left( t\right) - {q}_{j}^{i}\left( t\right)
$$

From this view vector, we define the view angle change as:

$$
{C}_{{\Delta }_{\phi ,\theta }}\left( t\right) = \frac{\left( {D}_{j}^{i}\left( t\right) ,{D}_{j}^{i}\left( t + \delta t\right) \right) }{\pi }
$$

Similarly, we propose to rely on a squared distance variation, defined as:

$$
{\Delta d}\left( t\right) = {\left( \begin{Vmatrix}{D}_{j}^{i}\left( t\right) \end{Vmatrix} - \begin{Vmatrix}{D}_{j}^{i}\left( t + \delta t\right) \end{Vmatrix}\right) }^{2}
$$

We then define a cost on this distance change, which we further normalize as:

$$
{C}_{\Delta d}\left( t\right) = 1 - E\left( {{\Delta d}\left( t\right) ,\lambda }\right)
$$

where $E$ is an exponential decay function, for which we set parameter $\lambda$ to ${10}^{-4}$ .

Preferred range of distances One side effect of the above costs is that at large distances, changes in the view angle and distance are less penalized. In turn, this favors large camera animations. It is worth noting that, likewise, placing the camera too close to the target object is generally not desired either.
We should then penalize both behaviors. To do so, we propose to introduce a last cost, aimed at favoring camera animations where the camera remains within a prescribed distance range $\left\lbrack {{d}_{\min },{d}_{\max }}\right\rbrack$ . We formulate this cost as:

$$
{C}_{d}\left( t\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\begin{Vmatrix}{{D}_{j}^{i}\left( t\right) }\end{Vmatrix} \notin \left\lbrack {{d}_{\min },{d}_{\max }}\right\rbrack \\ 0 & \text{ otherwise } \end{array}\right.
$$

§ 5.2 SELECTING A CAMERA ANIMATION

Given the previously introduced costs, we now aim to compute the cost of an entire animation, and to select the most promising one.

In a first step, we define the total cost of a camera animation as a weighted sum of single-criterion costs integrated over time:

$$
C = \mathop{\sum }\limits_{k}{w}_{k} \cdot \left\lbrack {{\int }_{{t}_{i}}^{{t}_{i} + h}{C}_{k}\left( t\right) G\left( {t - {t}_{i},\sigma }\right) {dt}}\right\rbrack
$$

where ${w}_{k} \in \left\lbrack {0,1}\right\rbrack$ is the weight of criterion $k$, and $G$ is a Gaussian decay function, whose standard deviation $\sigma$ we set to $h/4$ . We also slightly tune the decay to converge toward 0.25 (instead of 0). This way, we give a higher importance to the costs at the beginning of the animation, while still considering its end. Indeed, our assumption is that the camera will only play the first part of it (10% in our tests), while the remaining part still brings longer-term information on what could be a good camera path. We compute this total cost for every camera animation ${q}_{j}^{i} \in {\mathbb{Q}}^{i}$ , and refer to it as ${C}_{j}^{i}$ .

In a second step, we propose to choose the most promising camera animation for time horizon ${H}^{i}$ , denoted ${\mathbf{q}}^{i}$ , as the one with minimum total cost, i.e.:

$$
{\mathbf{q}}^{i} = \underset{j}{\arg \min }{C}_{j}^{i}
$$

§ 5.3 GPU-BASED EVALUATION

We have presented our evaluation metric on camera animations. However, some costs might be expensive to compute. In particular, computing occlusions and collisions requires tracing many rays (i.e. $N$ rays, for many time steps, for hundreds of camera animations). We here focus in more detail on how we propose to perform these computations in a very efficient way.

It is worth noting that many computations can be performed in parallel. The evaluations of camera animations are independent of each other. Similarly, the evaluations of a cost at different time steps along a given animation are also independent. Hence, we propose to cast our evaluation of single costs into a massively parallel computation on the GPU.

Firstly, note that we only need to send the animation space (in its orthonormal coordinate system) once to the GPU, when starting the system. Then, when we need to reposition the camera animation space for horizon ${H}^{i}$ , we simply update the $4 \times 4$ transform matrix ${M}^{i}$ . From this data, one can straightforwardly compute any camera position ${\mathbf{q}}_{j}^{i}\left( t\right)$ for any time $t$ .

Secondly, for occlusion and collision computations, we propose to rely on the recent RTX technology, which allows performing real-time ray-tracing requests on the GPU. We run $\frac{h}{\delta t}$ kernels per track, where each kernel launches $N$ rays, one per sample $s$ picked on the target object. The results of these computations are stored in a 2D texture (as shown in figure 5), where the texture coordinates $u$ and $v$ map to one time step $t$ and one animation of index $j$ , respectively. Occlusion and collision costs are stored in two different channels.

Thirdly, we rely on a compute shader to compute all other costs, and combine them with the occlusion and collision costs. This shader uses one kernel per camera animation.
It stores the total cost of all animations into a GPU buffer, which is finally sent back to the CPU where we perform the selection step.

§ 6 DYNAMIC TRAJECTORY ADAPTATION

Until now, we have considered a simplified configuration, where we evaluate the animation space and select one camera animation for one given time horizon ${H}^{i}$ . We now need to consider two other requirements. First, the camera should be animated to track the target object for an unknown duration, larger than $h$ ; changes in the target behavior may also occur, due to interactive user inputs. Second, for any horizon ${H}^{i}$ , some camera animations could be in collision with the scene, or the target could be occluded. This would prevent finding a proper animation to apply. In other words, the space of potential camera animations should be influenced by the surrounding scene geometry. Hereafter, we explain how we account for these requirements.

§ 6.1 USER INPUTS AND INTERACTIVE UPDATE

We here assume the camera is currently animated along curve ${\mathbf{q}}^{i}$ . We then need to compute a new animation, for a new time horizon ${H}^{i + 1}$ , in two cases. First, when the target's behavior has changed, which makes the currently played animation invalid for future time steps. Second, when the camera animation has reached a specific duration. In a way similar to motion-predictive control, we indeed consider the target behavior, as well as the collision and occlusion computations, less reliable after a certain anticipation duration. This in particular allows handling dynamic collisions and occlusions. This update is specified by the user as a ratio of progress along animation ${\mathbf{q}}^{i}$ . In our tests, the horizon duration is $h = 5$ seconds and the update ratio is 0.1. In turn, the new horizon generally starts at ${t}_{i + 1} = {t}_{i} + {0.1h}$ , while we set ${\mathbf{q}}_{\text{ start }}^{i + 1} = {\mathbf{q}}^{i}\left( {t}_{i + 1}\right)$ .
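The update rule and the chaining of horizons can be sketched as follows; this is a minimal illustration with hypothetical function names, using the values reported above ($h = 5$ s, update ratio 0.1).

```python
def needs_update(t, t_i, h, behavior_changed, update_ratio=0.1):
    """Replan when the target behavior changed, or once the camera has
    played the prescribed fraction of the current animation."""
    return behavior_changed or (t - t_i) >= update_ratio * h

def start_next_horizon(q_i, t_i, h, update_ratio=0.1):
    """Chain horizons: the next horizon starts at t_{i+1} = t_i + 0.1h and
    the new start position is taken on the currently played animation,
    q_start^{i+1} = q^i(t_{i+1})."""
    t_next = t_i + update_ratio * h
    return t_next, q_i(t_next)
```

The transition cost below then ties the tangents of consecutive animations together at $t_{i+1}$.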
Knowing that an update is required, we iterate on the overall process explained earlier, for the next horizon ${H}^{i + 1}$ . We select a new goal viewpoint (i.e. ${\mathbf{q}}_{\text{ goal }}^{i + 1}$ ), and update the transform matrix (i.e. ${M}^{i + 1}$ ) to position the camera animation space ${\mathbb{Q}}^{i + 1}$ . We then need to evaluate all camera animations in ${\mathbb{Q}}^{i + 1}$ . To do so, we compute all costs presented in section 5. However, this is not enough, as we also need to enforce continuity between animation ${\mathbf{q}}^{i}$ and the animation ${\mathbf{q}}^{i + 1}$ that is to be selected. To do so, we rely on an additional criterion designed to favor a smooth transition between consecutive animations:

Figure 8: Comparison of our system with an adaptive scale, or with a naïve scale, applied on the camera animation space.

Animation transitions This cost penalizes abrupt changes when transitioning between two camera animation curves. Our idea is to penalize a wide angle between the tangent vector to camera animation ${\mathbf{q}}^{i}$ and the tangent vector to animation ${\mathbf{q}}_{j}^{i + 1} \in {\mathbb{Q}}^{i + 1}$ , at connection time ${t}_{i + 1}$ . We write this cost as:

$$
{C}_{i,i + 1}\left( j\right) = \frac{\left( {\dot{\mathbf{q}}}^{i}\left( {t}_{i + 1}\right) ,{\dot{\mathbf{q}}}_{j}^{i + 1}\left( {t}_{i + 1}\right) \right) }{\pi }
$$

We then rewrite the selection of camera animation ${\mathbf{q}}^{i + 1}$ as:

$$
{\mathbf{q}}^{i + 1} = \underset{j}{\arg \min }\left\lbrack {{C}_{j}^{i + 1} + {w}_{i,i + 1}{C}_{i,i + 1}\left( j\right) }\right\rbrack
$$

where ${w}_{i,i + 1}$ is the relative weight of the transition cost with regard to the other costs.

§ 6.2 ADAPT TO SCENE GEOMETRY

We would also like to adapt our camera animation space to the scene geometry. To this aim, on the one hand, we seek to offer a camera animation space which exhibits as few collisions and occlusions as possible.
On the other hand, we also seek a space which is not too restricted, i.e. covering as much as possible of the free space between the target behavior and the scene geometry.

To do so, while we evaluate the quality of camera animations for a horizon ${H}^{i}$ , we also take the opportunity to analyse how many collisions and occlusions occur. In turn, this lets us know whether the free space is well covered (or not enough). We then propose to dynamically rescale the camera animation space to make it grow or shrink in the next time horizon ${H}^{i + 1}$ . This rescaling applies when we update the transform matrix ${M}^{i + 1}$ , and on the $x$ and $y$ axes only. It is worth noting that the free space might not be symmetrical around the target behavior (as illustrated in figure 6). Indeed, this free space might for instance be larger (or smaller) on the left than on the right of the target. The same applies to the free space above or below the target. Consequently, our idea is to compute four scale values, one for each of the four directions $\{ - x, + x, - y, + y\}$ . For any camera position along a camera animation, we then apply two of them, depending on the signs of the position's $x$ and $y$ coordinates in the non-transformed animation space.

We propose to proceed in the following way. We firstly leverage the occlusion and collision evaluations to store additional information: we count fails and successes along each axis. We consider a ray launched along a camera animation (i.e. from the camera position at a given time step) a fail if it is marked as occluded or collided, and a success if not. We secondly store this information in eight arrays: for each half-axis (e.g. $+ x$ or $- x$ ), we count successes in one array, and fails in another array. We further discretize this half-axis using a certain resolution $R$ , to output two histograms, of fails and successes (as illustrated in figure 7). Note that $R$ here defines the scale precision on each axis.
We lastly use both histograms to compute the new scale to apply. We compute the indices ${i}_{f}$ and ${i}_{s}$ of the medians of both arrays (fails and successes, respectively). By comparing them, we define how much we should rescale animations along this half-axis. If ${i}_{s} < {i}_{f}$ , we consider that there are too many fails, and multiply the current scale by ${i}_{f}/R$ to shrink animations. Otherwise, we consider that the free space is not covered enough, and apply a passive inflation to the current scale. The aim of this inflation is to help return to a maximum scale value when the surrounding geometry allows for large camera animations.

§ 7 IMPLEMENTATION AND RESULTS

§ 7.1 IMPLEMENTATION

We implemented our camera system within the Unity3D 2019 game engine. We compute our visibility and occlusion textures through ray-tracing shaders provided with Unity's integrated pipeline, and compute our scores for all sampled animations and time steps through Unity Compute Shaders. All our results (detailed in section 7.2) have been processed on a laptop computer with an Intel Core i9-9880H CPU @ 2.30GHz and an NVIDIA Quadro RTX 4000.

§ 7.2 RESULTS

We split our evaluation into three parts. We firstly validate our adaptive scale mechanism. We secondly evaluate the robustness of our system, by comparing its performance when using a different number or set of reference camera animations. We thirdly validate the ability of our system, mixing local and global planning approaches, to outperform a purely local camera planning system. To do so, we compare results obtained with our system and with the one of Burg et al. [1], on the same test scenes.

Figure 9: Results for multiple runs, each using a randomly generated camera animation space. This space is sampled with uniform distribution, with 2400 sampled camera animations (Hermite curves). Each plot shows the mean value over time (blue), with a 95% confidence interval (red). Left: results for 22 runs using different seeds. Right: results for 10 runs using the same seed.

To validate our adaptive scale, we study its impact on the quality of the animation space. For the other evaluations, we compare camera systems with regard to two main criteria: how well the camera maintains visibility on the target object, and how smooth camera motions are. We compute visibility by launching rays onto the target object and calculating the ratio of rays reaching the target. A ratio of 1 (respectively 0) means that the target is fully visible (respectively fully occluded). When relevant, we additionally provide statistics on the duration of partial occlusions. We then compare the quality of camera motions through their time derivatives (speed, acceleration and jerk), which provide a good indication of motion smoothness.

Our comparisons have been performed within 4 different scenes (illustrated in the accompanying video). We validated our system by using (i) a toy example scene, where the target travels through a maze containing several tight corridors with sharp turns, an open area inside a building, and a ramp. We then performed the comparisons with the technique of Burg et al. [1] by using two static scenes and a dynamic scene, which the target goes through: (ii) a scene with a set of columns and a gate (Columns+Gate), (iii) a scene with a set of small and large spheres (Spheres), and (iv) a fully dynamic scene with a set of randomly falling and rolling boxes, and a randomly sliding wall (Dynamic). To provide fair comparisons, in the dynamic scene, we pre-process the random motions of the boxes and of the wall. As well, for all tests in a scene, we play a pre-recorded trajectory of the target avatar, but let the camera system run as if the avatar was interactively controlled by a user.
§ 7.2.1 IMPACT OF ADAPTIVE SCALE

We validate our adaptive scale (section 6.2) by comparing results obtained (i) when we compute and apply the adaptive scale on all 4 half-axes $\left( {-x, + x, - y, + y}\right)$ , and (ii) when we simply apply the same scale as for the $z$ axis (which we will call the naive scale technique). We processed our tests using the toy example scene. For each technique, each time we evaluate a new set of camera animations, we output the new scale values and the ratio of fails on each half-axis. As well, we output and plot the mean cost of the 5 best animations in this set. Results are presented in figure 8.

Figure 8a shows how much our mechanism tightens the animation space (compared to the naive scaling technique) when the avatar is entering corridors, and grows it back to the same scale when the avatar reaches less cluttered areas (e.g. the open interior room, or the outdoor area). As expected, our mechanism adapts the scale on half-spaces in a non-symmetrical way. As shown by figure 8b, with our adaptive mechanism, the scaled animation space also exhibits fewer fails than with the naive scale technique. As well, as shown by figure 8c, it allows finding animations with lower cost most of the time. One exception occurs between 40 s and 50 s, where the two techniques produce different camera configurations because the scales differ: in the naive case the camera is high above the character, while in the adaptive case the camera is closer to the ground. The scores are therefore not relevant in this interval, because the two configurations are too different to be compared.

In the next evaluations, we consider that the adaptive scale mechanism is always activated.

§ 7.2.2 ROBUSTNESS

We also study the robustness of our system with regard to our randomly generated camera animation space.

In a first step, we evaluate how performance varies if we run our real-time system multiple times, on the toy example scene.
We consider two cases: (i) using the same seed for every run (i.e. the same animation space is used), and (ii) using a new seed for every run (i.e. a new animation space is randomly sampled for each run). For each case, we sample a set of 2400 animations. Results are presented in figure 9. As the figure shows, with this many sampled animations all runs lead to very similar results, both on visibility enforcement and on camera motion smoothness. Differences are mainly due to variations in the actual framerate of the game engine, and hence in the rate at which the system takes new decisions.

Figure 10: Visibility when varying the number of sampled curves in our camera animation space.

As a second step, we evaluate how the size of the animation space (i.e. the number of sampled animations) impacts performance. We ran our system with 4 different sizes: 2400, 1600, 800 or 100 animations. For each size, we performed 5 runs with random seeds and combined the results in figures 10, 11 and 12. They show that lowering the size (down to 800 animations) still yields good performance: our camera system is able to find a series of camera animations maintaining enough visibility on the target object through smooth camera motions. As expected, with 100 animations our system's performance is poor: it becomes harder to find animations with sufficient visibility that also ensure smooth camera motions. Our intuition behind this result is that as soon as the size becomes too small, the distribution of tangents becomes very sparse, breaking our assumption of uniform sampling. If the animation space does not sufficiently cover the free space, this prevents the exploration of a wide range of possible camera animations.

§ 7.2.3 COMPARISON TO BURG 2020

We also compare our system, which mixes local and global planning approaches, to a purely local camera planning system.
To this end, we ran our proposed camera system and the local camera planning system of Burg et al. [1] in 3 different scenes: two static scenes (Columns+Gate and Spheres) and a fully dynamic scene (Dynamic). Columns+Gate is the same scene as in [1], where the avatar moves between columns and goes through a doorway. In the Spheres scene, the avatar travels through a scene filled with a large set of spheres, which makes it moderately challenging for the camera systems. In the Dynamic scene, the avatar must cross a flat area where a set of boxes are randomly flying, falling and rolling all over the place, and a wall is randomly sliding. This makes it challenging for camera systems to anticipate the scene dynamics and find occlusion-free and collision-free camera paths.

Figure 11: Camera speed when varying the number of sampled curves in our camera animation space.

In our camera system, we used 2400 animations, the recomputation rate was set to 0.25 s, and adaptive scaling was on. We present the results of our tests in figures 13, 14, 15, 16, and 17.

We first compare the camera systems on their ability to enforce visibility of the target object (figure 13). Our tests show that for moderately challenging scenes, both lead to relatively good results and few occlusions occur. However, for a more challenging scene (Dynamic), our system outperforms Burg et al.'s system. Even if occlusions may occur more often, the degree of occlusion is lower (figure 13b). Moreover, for all 3 scenes, when partial occlusions occur, they are shorter with our system (figure 13c). This is explained by the fact that when no local solution exists, our system can still find a locally occluded path that respects the other constraints and leads to a less occluded area. This demonstrates our system's ability to better anticipate occlusions, especially in dynamic scenes.

Second, we compare the smoothness of camera motions in both camera systems.
Figure 14 presents, side by side, the distributions of speed, acceleration and jerk when using each system. We also provide the speed, acceleration and jerk over time in figures 15, 16, and 17. One observation is that Burg et al.'s system leads to lower camera speeds, as it restricts itself to simply following the avatar. In our camera system, the camera is allowed to move faster, bypassing the avatar when visibility or another constraint may be poorly satisfied. Yet our system provides smoother motions (i.e. less jerk). One explanation is that local systems often need to steer the camera out of local minima (e.g. low-visibility areas). A side effect is that, over successive iterations, this may lead to indecision about which direction the camera should take to reach better visibility. In turn, this leads to frequent changes in camera acceleration (hence higher jerk). Conversely, our system has more global knowledge of the scene, allowing it to more easily find a better path without sacrificing the smoothness of camera motions.

Figure 12: Camera jerk when varying the number of sampled curves in our camera animation space.

§ 8 DISCUSSION AND CONCLUSION

Our system presents a number of limitations. Despite the ability to evaluate thousands of trajectories, strongly cluttered environments remain challenging. As smoothness is enforced, visibility may be lost in specific cases, and designing a technique that properly balances these properties in such cases remains to be addressed. Also, while the dynamic scale adaptation does improve results by compressing the trajectories in different half-spaces, low scale values prevent the camera from making larger motions where necessary. Future work could consist in biasing the sampling of the animation space in order to adapt it to typical local topologies of the 3D environment.
Despite these limitations, the proposed work improves over existing contributions by providing an efficient camera tracking technique adapted to dynamic 3D environments that does not require heavy roadmap precomputations.

Figure 13: Comparison between our system and Burg et al. [1], regarding the target object's visibility (a)(b) and, when not fully visible, the duration of partial occlusion (c).

# Simulating Mass in Virtual Reality using Physically-Based Hand-Object Interactions with Vibration Feedback

![01963e90-4878-7407-b5f6-333e985b7ec5_0_423_394_958_318_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_0_423_394_958_318_0.jpg)

Figure 1: Physics-based interactions with virtual objects using a co-located virtual hand (the left figure) are augmented using vibration feedback proportional to the objects' mass and acceleration (the right figure).

## Abstract

Providing a sense of mass for virtual objects using ungrounded haptic interfaces has proven to be a complicated task in virtual reality. This paper proposes using a physically-based virtual hand and a complementary vibrotactile effect on the index fingertip to give the sensation of mass to objects in virtual reality.
The vibrotactile feedback is proportional to the net force acting on the virtual object and is modulated based on the object's velocity. To evaluate this method, we set up an experiment in a virtual environment where participants wear a VR headset and attempt to pick up and move different virtual objects using a virtual, physically-based, force-controlled hand, while a voice-coil actuator attached to their index fingertip provides the vibrotactile feedback. Our experiments indicate that the virtual hand and our vibration effect give users the ability to discriminate and perceive the mass of virtual objects.

Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction devices-Haptic devices; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality

## 1 INTRODUCTION

Virtual Reality (VR) has significantly revolutionized simulated human experiences. VR enables an immersive virtual experience by simulating and triggering most of our senses as if we were present in another environment. Notably, in VR it is possible to see one's own co-located virtual hands, perceive them as one's own real hands, and interact with virtual objects [16]. However, virtual objects have no real mass, and the challenge is providing the touch and visual cues that we rely on for mass perception. The physical cues include skin stretch and contact pressure at the fingertips (cutaneous feedback) and proprioceptive feedback from multiple muscles and joints (kinesthetic feedback).

Grounded haptic devices can render the necessary forces for kinesthetic and cutaneous haptic feedback. However, their size, weight, and limited workspace restrict free-hand movements, making them less desirable in various VR applications.

Alternatively, ungrounded haptic devices (such as finger-mounted or hand-held devices) can be built more compactly and lighter, making them more convenient to use in a larger workspace.
Sensing the mass of a virtual object in every direction requires more complex ungrounded hardware with higher degrees of freedom. However, such devices require multiple actuators and can limit hand and finger movements.

Another approach to overcome these hardware limitations is to use visio-haptic illusions. These methods aim to trick the brain into perceiving mass by manipulating the objects' visual cues. For example, limiting the virtual object's velocity [1] or scaling its displacement relative to the user's hand [21] have been shown to give a sense of mass to objects. However, these methods are either not physically realistic or decrease the co-location between the actual and virtual hands.

In this paper, we introduce a novel mass rendering method that combines a visio-haptic technique with a simple finger-mounted vibration actuator. For the visio-haptic part, we replicate the visual cues that humans perceive during real-world hand interactions with physical objects. We use a force-controlled, physically-based virtual hand in VR to interact with virtual objects, which limits how heavy an object the user can pick up and how fast they can accelerate it based on its mass. However, it is difficult to distinguish between light objects using this technique alone, so we complement our visio-haptic method with haptic feedback. The haptic actuator that renders the feedback should be small and compact enough to allow individual fingers to move independently and perform dexterous interactions. We also prefer an ungrounded device, since it allows a larger workspace. One way to reduce the device's size is to use haptic feedback that is directionally invariant to our sense of touch: if the direction of the haptic stimulus were detectable, we would need multiple actuators to render the effect in different directions during a virtual interaction.
Therefore, we employ an ungrounded, direction-invariant haptic effect to complement our physically-based virtual hand. We explore using a mechanical vibration feedback effect to achieve ungrounded mass rendering for virtual objects. In our work, while the virtual object is in the user's grasp, a sinusoidal vibration proportional to the object's mass and acceleration is played through an ungrounded voice-coil actuator at the tip of the user's index finger. An overview of the proposed method is shown in Fig. 1.

When moving two objects with different masses, in addition to the physically-based visio-haptic feedback, users feel a proportionally stronger vibration while grasping the heavier object. This vibrotactile feedback gives the user a cue to the net force acting on the virtual object. To make the haptic feedback direction-invariant, we use frequencies above 100 Hz. These frequencies are sensed by the Pacini mechanoreceptors, which are not sensitive to the direction of the stimuli.

To evaluate the proposed physically-based virtual hand and the vibration feedback, we conducted a user study where participants interacted with virtual objects of different masses and performed virtual tasks. Using qualitative and quantitative methods, we show that the physically-based hand gives a sense of mass to virtual objects, and that adding the vibration feedback improves mass perception and discrimination.

The main contribution of this work is the design, development, and evaluation of a novel mass rendering method for virtual objects using physically-based hand-object interactions and vibration feedback.

## 2 Related Works

In this section, we review relevant literature on simulating the mass of virtual objects during a VR experience. We also discuss modes of interaction in VR, including physically realistic grasping and interaction.
Grounded haptic devices are highly sought after in tool-mediated applications where precision and fidelity are essential, such as surgical training [11, 18]. Hand-wearable grounded devices have also been developed. HIRO III [10] is an example of a five-fingered grounded haptic interface, with three DoF for each of its haptic fingers and a 6-DoF base, capable of providing high-precision force feedback to a hand while attached to each of the fingertips. The main challenge with grounded devices is their limited workspace.

Ungrounded haptic devices are attached to the user's body instead of a fixed point in the room, which allows a larger workspace. These devices are either hand-held or attached to the user's fingers, hands, or body. Minamizawa et al. [19] introduce a fingertip-mounted ungrounded haptic device called the Gravity Grabber that can create a sense of weight when grabbing virtual objects in specific orientations. The Gravity Grabber achieves this using one degree of freedom for shear force feedback and another degree of freedom in the normal direction of the fingertip skin. However, since our skin can detect the direction of skin stretch, this method cannot give a sense of weight to a virtual object in all orientations. Sensing the weight and inertia of a virtual object in all directions requires an ungrounded device with more complex hardware and higher degrees of freedom, such as the works of Chinello et al. [4] and Prattichizzo et al. [20]. However, such devices are mechanically complicated, since they require multiple actuators and limit hand and finger movements. In our method, we use one haptic actuator to render the mass of objects in all directions, since we use sinusoidal vibration feedback.

Hand-held ungrounded devices are desirable for simulating interactions with hand-held tools such as a hammer or a baseball bat. However, they limit the movement of the fingers and the hand.
Zenner [26] introduced Drag:on, a custom VR hand controller with two actuated fans which can dynamically adjust the controller's aerodynamic properties, thereby changing the sensed inertia of a virtual object. Zenner et al. [25] introduce Shifty, a hand-held VR controller with an internal prismatic joint connected to a weight that shifts the device's center of mass, resulting in different rotational inertia and resistance as the user interacts with various virtual tools. In the work of Lykke et al. [17], users hold two hand controllers to pick up round virtual objects (scooping), and they must keep their hands closer together when the objects are heavier. Our method tracks the user's own hand instead of using a VR controller, which increases the sense of ownership and realism of the virtual hand [16] while not limiting the mobility of the fingers and hand.

Humans can use visual cues to determine the weight of a virtual object. Backstrom [1] gives the sensation of mass to virtual objects in VR by limiting the velocity of a virtual object based on how heavy it is. Such constraints on the object's movements are not physically realistic. Dominjon et al. [9] show that manipulating the control-display ratios of virtual objects can change the perceived mass in virtual environments. In other words, if a virtual object's displacement is proportionally increased compared to the user's actual hand, its mass is perceived as lighter than it is, and vice versa. Samad et al. [21] utilize the same technique in VR to change the perceived weight of wooden cubes. However, one downside of changing the control-display ratios is that the offset between the actual and virtual representations of the object increases as the hand gets further away from the initial contact point. Therefore, bi-manual coordination and interaction can become difficult, since the virtual hand's relative position differs from the actual hand's even when it is not moving.
Our approach aims to give a sense of mass to objects by using a physically-based virtual hand that enables realistic interactions with virtual objects and preserves the co-location between the virtual and actual palm when the hands are steady or when their acceleration is not changing.

Interaction is an important part of an immersive virtual experience and increases the user's sense of presence [3, 24]. There are various ways to enable interactions between a virtual hand and virtual objects. In gesture- and metaphor-based approaches, the interaction uses specified hand commands; for example, if the virtual hand is in a grasping pose and near a virtual object, that object's orientation follows the virtual hand. Song et al. [23] enable 9-DoF control of a virtual tool using bi-manual gestures. Gesture-based approaches have proven to be robust and effective. However, they are unintuitive and artificial by nature and therefore not suitable for physically realistic interaction. Another approach is to use physically-based manipulation techniques. For example, Borst and Indugula [2] propose virtual coupling of the tracked hand to a rigid virtual hand that enables whole-hand grasping. In this method, the palm and finger joints of the tracked hand are connected to the corresponding parts of the virtual hand using linear and torsional virtual spring-dampers. Moreover, since the spring-damper links apply only a limited, proportional amount of force, this method shares the same physical limitations as a realistic interaction. We modify this method to preserve the co-location between the virtual and actual palms and evaluate it for mass rendering in VR.

Vibration feedback can be used to simulate different touch stimuli. We use sinusoidal vibrations to render the mass of a virtual object. Asymmetric vibration is another type of vibration feedback, which has been used by Choi et al. [5] to simulate weight in VR.
These vibrations cause skin stretch, and the user can detect their direction; therefore, multiple actuators are required to simulate weight and inertia in all directions. Moreover, the intensity of these asymmetric vibrations is much stronger (up to $20\;\mathrm{g}$, with $\mathrm{g} = {9.8}\;\mathrm{m}\,{\mathrm{s}}^{-2}$) compared to our vibration feedback (less than $1\;\mathrm{g}$). Kildal [13] uses grains of mechanical vibration to create the illusion of compliance for a rigid box. Sinusoidal vibration feedback has been used in other haptic applications, such as simulating a button press on a rigid box [15] and a virtual button in VR [14]. Moreover, Seo et al. [22] simulate a moving cart by adding vibration feedback to a chair and changing the amplitude and frequency of the vibration feedback proportionally to the simulated cart's angular velocity.

Existing mass rendering methods in VR limit the hand and finger movements or engage users in unrealistic interactions. Our physically-based interaction is realistic and preserves the co-location of the actual and virtual palms when the hand is under no or constant acceleration, and our vibration feedback works with a single actuator on the fingertip without limiting the hand and finger movements.

## 3 FORCE-CONTROLLED VIRTUAL HAND

One of the goals of this paper is to explore the effect of a physically-based interaction on mass perception and discrimination. There is a weight limit on the objects we can pick up with our hands in the real world: our grip strength and the force we can apply to a grasped object are bounded. Therefore, there is a limit to how fast we can accelerate an object based on its mass. In VR, we hypothesize that physically-based interaction between the user's virtual hand and an object creates a sense of mass for that object. For this purpose, we track the user's hand, couple it with a 3D model of a hand, and use a physically-based simulation for hand-object interactions.
We use a vision-based hand tracking system (the Leap Motion hand tracker) to allow the user's hand and fingers to move freely, providing a virtual experience analogous to real-world interaction.

For modeling the hand, we consider one rigid palm and five fingers, each of which has three rigid phalanges. Interactions between VR objects and the force-controlled virtual hand are more realistic than interactions between the tracked hand and VR objects; for example, when grasping an object, the tracked hand can pass inside the object, whereas the virtual hand grasps around it. Therefore, we only display the force-controlled virtual hand (VR hand). The VR hand must be co-located and coupled with the tracked hand. To achieve this, rather than a purely geometric approach, we modify the physically-based method described by Borst and Indugula [2]. The physically-based coupling helps us efficiently prevent unrealistic collisions and interactions between the VR hand and objects. In the physically-based coupling method, we associate one spring-damper with each rigid component of the fingers. The spring-dampers apply force to the VR hand's components to match their positions and orientations to those of the tracked hand's corresponding components. To achieve consistent behavior from the physical simulation, we use a fixed-size VR hand; a fixed hand size does not directly influence efficiency in virtual object manipulation tasks, sense of hand ownership, realism, or immersion in VR [16].

The spring-damper coupling applies both force and torque to the virtual part. The force at time $t$, $\overrightarrow{F}\left( t\right)$, is proportional to ${\overrightarrow{\Delta }}_{\text{Position }}\left( t\right)$, the distance between the centers of mass of the two corresponding parts, and the torque at time $t$, $\overrightarrow{\tau }\left( t\right)$, is proportional to ${\overrightarrow{\Delta }}_{\text{Rotation }}\left( t\right)$, the difference in their rotations.
To prevent the virtual part from overshooting its target position and orientation, the spring-damper applies an additional force to the virtual object proportional to $\overrightarrow{V}\left( t\right)$, its linear velocity, and a torque proportional to $\overrightarrow{\omega }\left( t\right)$, its angular velocity. That gives:

$$
\overrightarrow{F}\left( t\right) = {k}_{p}^{\prime }{\overrightarrow{\Delta }}_{\text{Position }}\left( t\right) - {k}_{d}^{\prime }\overrightarrow{V}\left( t\right) , \tag{1}
$$

$$
\overrightarrow{\tau }\left( t\right) = {k}_{p}^{\prime \prime }{\overrightarrow{\Delta }}_{\text{Rotation }}\left( t\right) - {k}_{d}^{\prime \prime }\overrightarrow{\omega }\left( t\right) \tag{2}
$$

where ${k}_{p}^{\prime }$, ${k}_{p}^{\prime \prime }$, ${k}_{d}^{\prime }$ and ${k}_{d}^{\prime \prime }$ are the spring-damper coefficients. These parameters are set during preliminary experiments to ensure that the VR hand is responsive, closely and smoothly follows the actual hand, and can pick up a virtual mass of up to $4\;\mathrm{kg}$.

If we use a similar spring-damper to couple the palms, then when the user holds an object with the VR hand, the distance between the VR hand and the actual hand increases until the spring-dampers' forces equal the weight of the VR hand and the object it is holding. This causes a discrepancy between the visual and the proprioceptive senses.
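Equations (1) and (2) amount to a per-part PD (proportional-derivative) controller. A minimal sketch of one coupling step follows; the gain values and plain 3-vector representation are illustrative, not the values or data structures of our implementation:

```python
def coupling_force(kp, kd, delta_pos, velocity):
    """Eq. (1): the spring term pulls the virtual part toward the tracked
    target, while the damping term opposes its current linear velocity
    to prevent overshoot. Vectors are [x, y, z] lists."""
    return [kp * d - kd * v for d, v in zip(delta_pos, velocity)]

def coupling_torque(kp2, kd2, delta_rot, omega):
    """Eq. (2): the same PD structure applied to the rotational error
    and angular velocity."""
    return [kp2 * d - kd2 * w for d, w in zip(delta_rot, omega)]
```

At each physics step, these force and torque vectors would be applied to the corresponding rigid phalanx by the physics engine.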
To solve this problem, we introduce an additional term in the spring-damper for the palms:

$$
\overrightarrow{{F}_{\text{Palm }}}\left( t\right) = {k}_{p}^{\prime }{\overrightarrow{\Delta }}_{\text{Position }}\left( t\right) - {k}_{d}^{\prime }\overrightarrow{V}\left( t\right) + {k}_{i}^{\prime }\mathop{\sum }\limits_{{j = 0}}^{t}{\overrightarrow{\Delta }}_{\text{Position }}\left( j\right) , \tag{3}
$$

$$
\overrightarrow{{\tau }_{\text{Palm }}}\left( t\right) = {k}_{p}^{\prime \prime }{\overrightarrow{\Delta }}_{\text{Rotation }}\left( t\right) - {k}_{d}^{\prime \prime }\overrightarrow{\omega }\left( t\right) + {k}_{i}^{\prime \prime }\mathop{\sum }\limits_{{j = 0}}^{t}{\overrightarrow{\Delta }}_{\text{Rotation }}\left( j\right) \tag{4}
$$

where ${k}_{i}^{\prime }$ and ${k}_{i}^{\prime \prime }$ are additional spring-damper coefficients. The added summation term applies force and torque proportional to the accumulation of ${\overrightarrow{\Delta }}_{\text{Position }}\left( t\right)$ and ${\overrightarrow{\Delta }}_{\text{Rotation }}\left( t\right)$ over time. Therefore, when the user holds an object, $\overrightarrow{{F}_{\text{Palm }}}\left( t\right)$ and $\overrightarrow{{\tau }_{\text{Palm }}}\left( t\right)$ increase until the virtual palm's orientation and position match those of the tracked palm in the steady state. ${k}_{i}^{\prime }$ and ${k}_{i}^{\prime \prime }$ are set during preliminary experiments so that the positions and orientations of the coupled palms quickly match when the hand is not accelerating. Also, ${k}_{p}^{\prime }$, ${k}_{p}^{\prime \prime }$, ${k}_{d}^{\prime }$ and ${k}_{d}^{\prime \prime }$ are set independently for the palm, since it has different physical properties than the phalanges.

![01963e90-4878-7407-b5f6-333e985b7ec5_2_931_150_704_696_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_2_931_150_704_696_0.jpg)

Figure 2: A weak and a strong virtual grip and the corresponding actual hands.
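The summation term in Eq. (3) acts like the integral term of a PID controller. The 1D sketch below (unit mass, illustrative gains and load, not our tuned coefficients) shows why it is needed: without it, a constant load such as a held object's weight leaves a steady offset between the palms, while with it the offset decays to zero:

```python
def simulate_palm_coupling(mass, load, kp, kd, ki, dt=0.001, steps=20000):
    """1D illustration of Eq. (3). A spring-damper holds a body of the
    given mass at the origin against a constant downward load; the
    accumulated-error term (ki > 0) removes the steady-state offset."""
    pos, vel, acc_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        delta = 0.0 - pos          # tracked palm sits at the origin
        acc_err += delta           # summation term of Eq. (3)
        force = kp * delta - kd * vel + ki * acc_err - load
        vel += (force / mass) * dt  # symplectic Euler integration
        pos += vel * dt
    return pos  # final position offset from the tracked palm
```

With `ki = 0` the body settles at an offset of `load / kp` below the target, which is exactly the visual-proprioceptive discrepancy described above; with a small positive `ki` it converges back to the target.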
Using a force-controlled virtual hand should give a sense of mass and allow mass discrimination between virtual objects. However, we suspect this effect is stronger in some scenarios than in others. While grasping and moving a light object, the spring-damper forces counteract the force of gravity and inertia on the object. Therefore, using our virtual hand, if a user grasps an object with a low virtual mass, they can easily pick it up and quickly move it around the workspace with high acceleration without it slipping out of their grip. For a heavier object, the user can still pick it up, but they have to increase their effort, for instance by using more fingers for grasping or closing their grip further so that the spring-dampers apply more force to the object (Fig. 2). Also, it is not possible to accelerate it as fast as lighter objects, since the inertial forces are higher and can overcome the spring-dampers in the virtual hand and open the virtual grasp. Depending on the spring-dampers' coefficients, beyond a certain mass it becomes very difficult or eventually impossible for the user to move or pick up the object. We hypothesize that the limit on how fast the user can accelerate the virtual object in hand, and how challenging it is to pick up, gives the user a sense of the virtual object's mass and enables them to discriminate two objects by their mass. However, using this technique it is hard to perceive the difference in mass between two light objects $\left( { < 1\;\mathrm{kg}}\right)$, since it is almost effortless to pick both of them up off the ground and move them quickly without dropping them. To overcome this problem, we introduce a vibration feedback effect to complement our VR hand.

## 4 VIBROTACTILE FEEDBACK

In day-to-day physical interactions with real-world objects, we can feel an object's mass and compare it to other heavier or lighter objects through our sense of touch.
Virtual experiences that do not provide haptic feedback lack realism compared to real-world experiences. One modality of haptic feedback is vibrotactile feedback in the form of mechanical waves or vibrations.

Our goal is to complement the VR hand in giving the user a perception of an object's mass by communicating the net force they apply to the object. To achieve this without limiting hand and finger movements, we use a single actuator to render our haptic feedback. We use sinusoidal vibration feedback with a frequency range between 100 Hz and 150 Hz, making it perceivable only by the Pacini mechanoreceptors in the fingertip skin. The Pacini mechanoreceptors cannot detect the direction of mechanical waves; therefore, one actuator is sufficient to render our haptic feedback in all directions.

We strap a VCA (voice-coil actuator) to the fingertip of the index finger. We chose the index finger because it plays a critical role in picking up objects with a pinch grasp. Other fingers, such as the thumb and the middle finger, can also play an important role in grasping; however, attaching voice-coil actuators to multiple fingers limits the relative movement of the fingertips and manual dexterity.

While a user grasps an object, we render the vibration feedback $O\left( t\right)$ with frequency $O{\left( t\right) }_{F}$. The amplitude of $O\left( t\right)$ is proportional to the object’s mass $M$ and acceleration $A\left( t\right)$. This gives:

$$
O\left( t\right) = {\alpha MA}\left( t\right) \sin \left( {{2\pi tO}{\left( t\right) }_{F}}\right) , \tag{5}
$$

where $\alpha$ is a scaling constant that controls the range of vibration energy perceived by the user. The vibration feedback should be just strong enough that users can perceive the vibration when slowly moving the lightest weight in the scene.
The value of $\alpha$ also depends on the hardware components of the haptic chain, such as the signal amplifier and the haptic actuator. For our setup, we set $\alpha$ so that, if the user accelerates a $1\;\mathrm{kg}$ object at $1\;\mathrm{g}$, the measured vibration at the fingertip is on average ${0.32}\;\mathrm{g}$, which allows users to perceive the vibration feedback when slowly moving the lightest weight (0.25 kg) in our experiments. The frequency of the output signal $O{\left( t\right) }_{F}$ changes dynamically from 100 Hz to 150 Hz based on the velocity of the virtual object $V\left( t\right)$:

$$
O{\left( t\right) }_{F} = \min \left( {{150},{100}\frac{\left| {V\left( t\right) }\right| + 2}{2}}\right) , \tag{6}
$$

so that at speeds near zero the signal’s frequency is 100 Hz, and as the speed increases to about one $\mathrm{m}/\mathrm{s}$, it rises to and is clamped at 150 Hz. To ensure a smooth vibration signal, we apply a second-order Butterworth low-pass filter to $V\left( t\right)$ and $A\left( t\right)$. The filter has a sample rate of 1000 Hz and a corner frequency of 20 Hz ($-3\;\mathrm{dB}$ amplification at 20 Hz).

We set the signal’s amplitude proportional to ${MA}\left( t\right)$, which, according to Newton's second law of motion, represents the net force acting on the virtual object. In our method, we ignore balanced or counteracted forces acting on an object, since the counteracted forces from grasping can be similar between a light and a heavy object; for example, we can grip a light object just as hard as a heavier one.

During a virtual experience, the voice-coil actuator is always strapped to the user's index fingertip. However, the vibration feedback is rendered only when the user's virtual hand grasps a virtual object and not during free-hand motions in the scene.
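Equations (5) and (6) can be sketched directly. The block below is illustrative only: the scaling constant `ALPHA` is a hypothetical value (in practice it is calibrated to the amplifier and actuator as described above), and the Butterworth smoothing of $V(t)$ and $A(t)$ is omitted:

```python
import math

ALPHA = 0.05  # hypothetical scaling constant; hardware-dependent in practice

def vibration_frequency(speed):
    """Eq. (6): 100 Hz near rest, rising with object speed and
    clamped at the 150 Hz ceiling (reached around 1 m/s)."""
    return min(150.0, 100.0 * (abs(speed) + 2.0) / 2.0)

def vibration_sample(t, mass, accel, speed):
    """Eq. (5): one sample of the sinusoid whose amplitude tracks the
    net force M*A(t) and whose frequency tracks the object's speed."""
    freq = vibration_frequency(speed)
    return ALPHA * mass * accel * math.sin(2.0 * math.pi * freq * t)
```

Doubling the mass while keeping the same motion doubles the sample amplitude, which is exactly the mass cue the feedback is meant to convey.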
To detect if the user is grasping a virtual object, we check whether the virtual object is off the ground and touching the virtual hand's palm and the distal joint of the thumb, index, or middle finger. If grasping is detected, the vibration feedback is rendered for the user through the voice-coil actuator. + +Whenever the system detects that the user is no longer grasping a virtual object, the vibration feedback rendering stops. However, in a physical simulation, even when the user is grasping the object, the hand parts may momentarily lose contact with the virtual object for a few cycles, which might cause on/off pulses in our vibration feedback. To avoid these impulse noises in our signal, we stop the vibration feedback only after no grasping has been detected for ten milliseconds. + +![01963e90-4878-7407-b5f6-333e985b7ec5_3_936_215_684_501_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_3_936_215_684_501_0.jpg) + +Figure 3: The output voltage of the vibration feedback for two virtual objects with mass values $0.5\,\mathrm{kg}$, $O_1(t)$, and $1\,\mathrm{kg}$, $O_2(t)$, during an arbitrary shaking movement with acceleration $A(t)$ and velocity $V(t)$. + +When the user picks up two virtual objects with different mass values and moves them around the scene with the same motion, the vibration effect is stronger for the heavier object than for the lighter one, in proportion to their mass difference. In other words, the user feels more energetic mechanical vibrations on their skin when interacting with a heavier object. We suspect users perceive these vibrations as a resistance force to acceleration (similar to the force of inertia), which leads them to perceive the mass of virtual objects.
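The ten-millisecond grace period in the grasp-detection logic above can be sketched as a small debouncer. This is an illustrative sketch; the class and method names are ours, not from the paper's code.

```python
class GraspDebouncer:
    """Keeps the vibration on through momentary contact losses and turns it
    off only after grasp contact has been absent for a full grace period
    (10 ms in the text)."""

    def __init__(self, grace_ms: float = 10.0):
        self.grace_ms = grace_ms
        self.last_contact_ms = None  # timestamp of the most recent grasp contact

    def should_vibrate(self, grasping: bool, now_ms: float) -> bool:
        """Called every simulation cycle with the raw grasp test result."""
        if grasping:
            self.last_contact_ms = now_ms
            return True
        if self.last_contact_ms is None:
            return False  # never grasped yet
        # Keep vibrating while still inside the grace window after losing contact.
        return (now_ms - self.last_contact_ms) < self.grace_ms
```

A few-cycle contact dropout (well under 10 ms at a 1000 Hz simulation rate) therefore never interrupts the signal, while a real release stops it within one grace period.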
+ +A limitation of the force-controlled hand is that, given two light virtual objects where one is twice as heavy as the other, it is difficult to perceive the mass difference, since both masses are well within the threshold of what the virtual hand can grasp and move around in the VR scene. However, with the presented vibration feedback, the vibration at the user's skin for the heavier object has twice the amplitude (Fig. 3). As a result, we expect the user to perceive the mass difference between the objects based on the vibration feedback. + +## 5 EVALUATION + +We evaluate our VR mass rendering techniques and verify our claims using both qualitative and quantitative measurements. We conducted a user study in which participants interact with virtual objects using the force-controlled co-located virtual hand and perform several object manipulation and comparison tasks. Moreover, we study the effect of the proposed vibration feedback on participants' ability to perceive virtual objects' masses and compare them based on heaviness. More specifically, we assess these two hypotheses in our evaluations: + +- Grasping and manipulating virtual objects using a co-located physically-based hand model in virtual reality gives a sense of mass perception and allows some degree of mass discrimination between virtual objects. + +- The proposed vibration feedback can improve the sense of mass perception and enhance mass discrimination precision during virtual interactions between a physically-based virtual hand and virtual objects. + +![01963e90-4878-7407-b5f6-333e985b7ec5_4_331_156_370_291_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_4_331_156_370_291_0.jpg) + +Figure 4: The voice-coil actuator is strapped to the index fingertip of the user's dominant hand. + +To examine the validity of the first hypothesis, participants perform virtual tasks involving interactions with objects with different mass values using the VR hand.
However, evaluating the results of the VR hand interactions alone is not enough to validate our first hypothesis. The virtual environment runs in a physics engine, and users might pick up cues about mass differences between objects that do not come from the VR hand interactions. These cues include how objects interact with each other, how they bounce when dropped on the virtual ground, and the speed at which they fall in the presence of air friction. To control the experiment for these additional cues, we ask participants to interact with each object individually and not to push or touch one object using another. Additionally, we add a control interaction mode to our platform, called the spherical cursor. In this mode, instead of a co-located hand, users only see a spherical cursor co-located with the center of their palm. If the spherical cursor is within an object and the user puts their hand in a grasp pose, that object follows the cursor around the virtual scene until the user opens their hand. During grasping with the spherical cursor, we move the object by applying a force to it in the cursor's direction. Because this force is proportional to the object's mass, objects with different masses follow the cursor with the same speed and acceleration. Therefore, comparing the quantitative and qualitative results from user interactions using the force-controlled hand versus the spherical cursor as a baseline allows us to validate the first hypothesis. + +To test the second hypothesis, participants interact with virtual objects using the force-controlled hand both with and without the vibration feedback, which allows us to compare the results and analyze the effectiveness of the vibrotactile feedback in mass perception and discrimination. + +### 5.1 Setup + +In this subsection, we describe the study setup's hardware and software components and the range of mass values we use for our virtual objects.
We use the MMXC-HF VCA by Tactile Labs, a relatively compact tactile actuator ($36\,\mathrm{mm} \times 9.5\,\mathrm{mm} \times 9.5\,\mathrm{mm}$), and the Tactile Labs QuadAmp multi-channel signal amplifier. A pair of thin wires connects the VCA to the signal amplifier placed on a nearby table. The cables from the actuator point outwards from the user's finger, limiting the chance of the cables touching the user's hands during virtual interactions. Using a 3D-printed mount, we attach the voice-coil actuator to the user's index fingertip (Fig. 4). We use the PC-powered Oculus Rift as our VR interface, which allows for external PC-based graphical computation. For hand tracking, we attach a Leap Motion controller to the front side of the Oculus Rift VR headset. + +In our system, we use the Bullet physics simulation [8] as our physics engine. One desirable feature of the Bullet library is that it permits controlling the virtual hand by applying virtual force and torque from an external source. This feature enables us to implement the virtual coupling between our virtual hand and the tracked hand. + +![01963e90-4878-7407-b5f6-333e985b7ec5_4_1136_148_297_511_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_4_1136_148_297_511_0.jpg) + +Figure 5: A participant interacting with a virtual object while wearing the VR headset with the Leap Motion hand tracker, VCA, and noise-canceling headphones. + +To render the virtual scene to the VR headset and work with the Bullet physics simulation, we use the Chai3D library. Chai3D [6] is a platform-agnostic haptics, visualization, and interactive real-time simulation library. Moreover, it supports rendering to the Oculus Rift headset and has built-in Bullet physics integration, making it ideal for immersive and physically realistic haptic experiences. + +In our study, we use cubes as the shape of our virtual objects since they are easy to grasp.
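The virtual coupling mentioned above (external force and torque driving the physics-engine hand toward the tracked pose) is commonly realized as a spring-damper; here is a minimal per-axis sketch. The gains `KP` and `KD` are illustrative placeholders, as the paper does not report its coupling coefficients.

```python
# Illustrative gains; the paper's coefficients were tuned so that cubes
# up to roughly 4 kg could be lifted.
KP = 400.0  # spring gain (N/m) pulling the simulated hand to the tracked pose
KD = 40.0   # damping gain (N*s/m) suppressing oscillation

def coupling_force(tracked_pos, hand_pos, hand_vel):
    """Spring-damper force applied each physics step to the simulated hand
    so it follows the tracked hand without being kinematically pinned."""
    return [KP * (pt - ph) - KD * vh
            for pt, ph, vh in zip(tracked_pos, hand_pos, hand_vel)]
```

An analogous proportional-derivative torque term would drive the hand's orientation; because the hand is force-controlled rather than teleported, heavy objects naturally lag and resist fast accelerations.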
During our experiments, there may be multiple virtual cubes in the scene with different mass values. For setting the mass range in our experiments, we must consider the physics engine that we use. The Bullet physics engine recommends keeping the mass of objects around $1\,\mathrm{kg}$ and avoiding very large or small values [7]. During our preliminary experiments, we set the virtual coupling coefficients so that users could pick up virtual cubes with masses up to $4\,\mathrm{kg}$; past that point, it becomes too difficult to pick up the virtual cubes. Since we expect users to be able to interact with and pick up any virtual cube in the scene, we chose $2.5\,\mathrm{kg}$ as the upper mass limit in our user studies for the heaviest objects and $0.25\,\mathrm{kg}$ as the lower mass limit for the lightest objects. + +### 5.2 Participants + +Ten participants (5 female, 5 male) took part in this study. All participants were right-handed. Three participants had never used VR headsets before; one participant used them a few times per week and the rest at most a few times per year. Seven of them had interacted with virtual objects during their VR experiences, and three had used haptic devices in VR games and applications. This study was approved by the University of Calgary Conjoint Faculties Research Ethics Board (REB18-0708). Participants received \$20 compensation for taking part in this user study. + +### 5.3 Study + +We begin the study by spending a few minutes (< 8) familiarizing the participants with the VR headset, the Leap Motion hand tracker, and the virtual study environment. After placing the haptic actuator on their dominant hand's fingertip, they practice how to pick up and move a virtual cube ($1.25\,\mathrm{kg}$) using the virtual co-located hand. We ask participants to always use their index finger in grasping since the haptic actuator is attached to it.
They are also encouraged to engage more fingers or tighten their grip to increase the grasping strength, and to move the training object around the scene both slowly and quickly. For consistency, we ask the participants to use only their dominant hand to interact with the virtual elements in the scene once the tasks start. During the virtual tasks, participants wear active noise-canceling headphones through which white noise is played to block any audible signal from the haptic actuator (Fig. 5). + +![01963e90-4878-7407-b5f6-333e985b7ec5_5_151_147_717_329_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_5_151_147_717_329_0.jpg) + +Figure 6: Two virtual cubes with random weights are placed in front of the participant to compare. The co-located spherical cursor mode is active, and the "Vibration Off" label indicates to the participant that they should not expect any vibration from the voice-coil actuator. + +![01963e90-4878-7407-b5f6-333e985b7ec5_5_153_649_714_325_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_5_153_649_714_325_0.jpg) + +Figure 7: Three virtual cubes with random weights are placed in front of the participant to sort in ascending order from left to right. The "Vibration On" label indicates to the participant that they should expect vibration from the voice-coil actuator when picking up objects. + +In the first task, we present participants with six pairs of cubes and ask them to interact with, grasp, and move the objects, and to think aloud about the experience. Furthermore, we ask them to compare the two cubes based on their mass and say whether they feel they have the same mass or whether one is slightly or considerably (or to whatever degree they perceive it) heavier than the other. Participants interact with virtual objects using the three interaction modes in the following order: spherical cursor, virtual hand without the vibration feedback, and virtual hand with the vibration feedback. As an example, Fig.
6 shows this task's setup while the interaction mode is set to the spherical cursor. For each interaction mode, participants compare two pairs of cubes. One pair has the largest mass difference given our mass range ($0.25$ and $2.5\,\mathrm{kg}$), and the other pair has a smaller mass difference ($0.25$ and $0.5\,\mathrm{kg}$). The system randomly decides whether the smaller or larger mass-difference pair is presented to the user first, and randomly places the two cubes on the table for each set to avoid learning from the previous rounds. + +In the next part, we ask participants to sort virtual cubes based on their mass. In sorting, a higher number of objects to sort means the participant spends more time picking up and moving objects around the scene, which results in a fuller user experience in comparing weights. However, a higher number of objects to sort increases the average time to complete the task, limiting the number of sorting rounds users can perform during a study session. Our preliminary experiments concluded that three cubes offer a reasonable balance between sorting time and user interaction with objects. + +We quantized our mass range ($0.25\,\mathrm{kg}$ to $2.5\,\mathrm{kg}$) into two weight-sets of size three. Having more than one weight-set allows a more in-depth analysis of the interaction modes across our mass range. Weber's law states that the difference in magnitude needed to discriminate between a base stimulus and other stimuli increases proportionally to the intensity of the base stimulus [12]. We can easily differentiate a $0.5\,\mathrm{kg}$ mass from a $1\,\mathrm{kg}$ mass, but it is harder to distinguish a $10\,\mathrm{kg}$ mass from a $10.5\,\mathrm{kg}$ one, even though both pairs have the same weight difference. Therefore, we chose our mass values with equal ratios between them using a geometric series. That gives a light weight-set ($0.25\,\mathrm{kg}$, $0.44\,\mathrm{kg}$, $0.79\,\mathrm{kg}$) and a heavy weight-set ($0.79\,\mathrm{kg}$, $1.4\,\mathrm{kg}$, $2.5\,\mathrm{kg}$).
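The two weight-sets follow directly from a geometric series over the 0.25-2.5 kg range; a quick illustrative check that reproduces the rounded values reported in the text:

```python
# Five masses with a constant ratio r spanning 0.25 kg to 2.5 kg:
# r = (2.5 / 0.25) ** (1 / 4) ~= 1.778, so adjacent masses differ by an
# equal ratio, in line with Weber's law.
ratio = (2.5 / 0.25) ** 0.25
masses = [0.25 * ratio ** k for k in range(5)]

light_set = masses[:3]  # ~ 0.25, 0.44, 0.79 kg
heavy_set = masses[2:]  # ~ 0.79, 1.41, 2.5 kg (reported as 1.4 kg in the text)
```

The two sets deliberately share the middle value 0.79 kg, so together they cover the whole range with four equal ratio steps.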
+ +Participants sort random permutations of the light and the heavy weight-sets using the three different interaction modes (spherical cursor, virtual hand without vibration feedback, and virtual hand with vibration feedback), giving six sorting modes. As an example, Fig. 7 shows this task's setup while the interaction mode is set to the virtual hand with vibration feedback. In all sorting modes, three virtual cubes with similar appearance and size are placed on a virtual surface, and participants have to place them from left to right in ascending order based on the perceived mass. Participants perform six rounds of sorting for each mode. During each round, sorting modes are ordered randomly to remove the learning effect between the modes. Before the sorting task begins, we rotate between the modes to familiarize the participant with the scene. Furthermore, we ask participants to grasp each object at least once before finalizing their decision. Also, we recommend keeping each sorting round under a minute; however, this is not a hard limit. + +When the sorting task finishes, participants fill out a questionnaire regarding their experience during the two virtual tasks. After participants fill out the questionnaire, we ask them to elaborate on their answers during a semi-structured interview. Our post-session questionnaire is as follows (each question is repeated for each of the interaction modes): + +- While interacting with objects, I could perceive their mass. 1 to 5 (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree) + +- I could feel one cube was heavier than the other. 1 to 5 (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree) + +- How was your confidence level in sorting objects? 1 to 5 (Not Confident at All, , , , Very Confident) + +- How realistic were the interactions with objects?
1 to 5 (Very Unrealistic, Unrealistic, Neutral, Realistic, Very Realistic) + +- Would you recommend experiencing the "" in VR games during interactions with virtual objects? 1 to 5 (Do Not Recommend at All, , Neutral, , Highly Recommend) + +### 5.4 Results + +We show the sorting results in the form of confusion matrices in Fig. 8. Using the nonparametric Kruskal-Wallis test, we analyze the statistical significance of the difference between the placement distributions of light, medium, and heavy objects for each of the sorting modes. For the spherical cursor (control mode), we observe statistically insignificant p-values of 0.463 for the heavy weight-set and 0.800 for the light weight-set, showing that users could not discriminate between weights in this mode. For the virtual hand with no vibration feedback, we see statistically insignificant results for the light weight-set (p-value 0.928); however, for the heavy set, we see a significant effect of the virtual hand on sorting (p-value $< 0.001$). In the case of sorting using the virtual hand with vibration feedback, we see a significant effect on sorting both for the light (p-value $< 0.001$) and heavy (p-value $< 0.001$) weight-sets. Regarding the first hypothesis, we see a significant improvement for the heavy weight-set compared to the control mode (spherical cursor); however, the same cannot be said for the light weight-set. Regarding the second hypothesis, we see a statistically significant improvement for the light weight-set with the vibration feedback compared to only using the virtual hand. However, for the heavy set, we see significant effects from the virtual hand both with and without the vibration feedback. Therefore, to check whether the observed improvements in sorting precision for the light, medium, and heavy objects are significant, we perform a row-by-row comparison between the two confusion matrices using the Wilcoxon rank-sum test.
Comparing the number of correct sorts for the heavy weight (54 correct sorts versus 33) gives a statistically significant p-value of $< 0.001$; for the medium weight (44 correct sorts versus 25), the p-value is $< 0.001$; and for the light weight (48 correct sorts versus 34), the p-value is $< 0.01$, which shows that the improvement from the vibration feedback is statistically significant for the heavy weight-set as well. + +![01963e90-4878-7407-b5f6-333e985b7ec5_6_366_149_1060_467_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_6_366_149_1060_467_0.jpg) + +Figure 8: Sorting results of the six different sorting modes in the form of confusion matrices. The top three matrices show the sorting results for the light weight-set ($0.25\,\mathrm{kg}$, $0.44\,\mathrm{kg}$, $0.79\,\mathrm{kg}$), and the bottom three show the sorting results for the heavy weight-set ($0.79\,\mathrm{kg}$, $1.4\,\mathrm{kg}$, $2.5\,\mathrm{kg}$). From left to right, the matrices represent the three interaction modes (spherical cursor, virtual hand with no vibration, virtual hand with vibration). The matrices' diagonals show the number of times the objects were sorted correctly. + +![01963e90-4878-7407-b5f6-333e985b7ec5_6_216_804_582_391_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_6_216_804_582_391_0.jpg) + +Figure 9: Users compare the sense of mass perception and discrimination between the three interaction modes in the post-session questionnaire. The bars represent the mean answer, and the black lines show the standard deviation. + +The results of the questionnaire in Fig. 9 show that participants reported an improvement in mass perception and discrimination when the vibration feedback was enabled compared to only using the virtual hand. P6 (Participant #6) mentioned, "With the hand no vibration, it was harder to tell the difference in mass, but I think you could still, it was realistic enough that it was engaging, but the vibration one I'm not if it's like a mental thing, it just helps a lot more with the differentiating between the different masses and the movements".
We also see neutral results for the spherical cursor. Generally, participants mentioned they could not differentiate between the objects using the spherical cursor. P2 mentioned, "It was harder for me to use the cursor to compare the weights, most of the time I thought they were like identical". For the virtual hand without the vibration feedback, participants on average expressed neutral opinions regarding its ability to give them a sense of mass perception and discrimination. However, the results from the sorting task show they performed better than the control. Also, some participants mentioned different cues that enabled them to differentiate between weights. P5 mentioned "I'm picking it up, how long would it slide, ok hold it, I shake it around it slides faster ... if I hold it, it slips faster then it's heavier", and P6 said "(with the virtual hand) if I grab it loose the heavy one just drops as opposed to the light one stays in even if I'm shaking it", and "looking at the movement, if I'm moving my hand it's a bit slower it just feels heavier versus if it's a quick it just feels lighter". + +![01963e90-4878-7407-b5f6-333e985b7ec5_6_990_801_586_476_0.jpg](images/01963e90-4878-7407-b5f6-333e985b7ec5_6_990_801_586_476_0.jpg) + +Figure 10: Users compare the sorting confidence, sense of realism, and gaming experience between the three interaction modes in the post-session questionnaire. The bars represent the mean answer, and the black lines show the standard deviation. + +Fig. 10 shows that participants expressed having more confidence in sorting when the vibration feedback was enabled. However, without the vibration feedback, they expressed neutral confidence. Furthermore, participants generally stated that the vibration feedback added to the interaction's realism and that the virtual hand's interactions were realistic.
P4 said, "For the vibration also, I felt like it helped me, felt like it's more real, I'm touching things, not just I'm seeing that I'm touching things". Furthermore, participants expressed interest in experiencing the vibration effect in virtual reality games. + +Finally, we asked the participants how interacting with virtual objects felt when they vibrated. P2 said: "if felt like it has resistancy to move, based on that I felt like it's heavier, might be heavier", and P7 mentioned "When I picked a cube with vibration, I could feel that something is trying to, I don't know, annoy me bother me, might be something like the gravity taking it back to the ground, it feels that I should put more energy to pick it up" and further elaborated "the one that without vibration I just pick it with two fingers I played with that, but the one with vibration when I tried to pick it with two fingers, suddenly I tried to keep it with all my fingers because I thought that it might slides and drops." + +Overall, our findings indicate that the presence of the force-controlled virtual hand, both with and without the vibration effect, gives a sense of weight discrimination and perception. However, the virtual hand without vibration feedback is only effective for heavier objects closer to the hand's strength threshold. Furthermore, the virtual hand with the vibration effect improves the sense of weight perception and discrimination for both lighter and heavier objects without negatively affecting the realism of the experience. Therefore, our results validate our hypotheses. + +## 6 CONCLUSION + +Rendering the mass of objects in virtual reality without limiting hand movements is a challenging task. In this paper, we propose using a force-controlled hand in VR to give a sense of mass perception and discrimination by enabling physically realistic hand-object interactions.
We also propose a complementary vibration effect proportional to the object's mass and acceleration to improve the sense of mass perception and discrimination. We conducted a user study and performed qualitative and quantitative analyses, which indicate that our hypotheses are valid. The physically-based virtual hand can give a sense of mass perception and discrimination for heavier objects closer to the upper limit of its grasping strength. Furthermore, the vibration feedback greatly enhances mass perception and discrimination over a wider mass range in our study while improving the interaction's realism. + +## 7 FUTURE WORK + +One potential future direction for this research is to analyze the mass discrimination ability of the virtual hand and the vibration effect for a broader mass range and different mass ratios between the objects. Moreover, we are interested in analyzing the vibration effect's influence on the user's movements during virtual interactions. + +## REFERENCES + +[1] E. Bäckström. Do you even lift? an exploratory study of heaviness perception in virtual reality. Master's thesis, 2018. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230681. + +[2] C. W. Borst and A. P. Indugula. A spring model for whole-hand virtual grasping. Presence: Teleoperators and Virtual Environments, 15(1):47-61, 2006. doi: 10.1162/pres.2006.15.1.47 + +[3] D. A. Bowman and L. F. Hodges. Evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments. Proceedings of the Symposium on Interactive 3D Graphics, pp. 35-38, 1997. doi: 10.1145/253284.253301 + +[4] F. Chinello, C. Pacchierotti, M. Malvezzi, and D. Prattichizzo. A Three Revolute-Revolute-Spherical Wearable Fingertip Cutaneous Device for Stiffness Rendering. IEEE Transactions on Haptics, 11(1):39-50, 2018. doi: 10.1109/TOH.2017.2755015 + +[5] I. Choi, H. Culbertson, M. R. Miller, A. Olwal, and S. Follmer. Grabity. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), pp. 119-130, 2017.
doi: 10.1145/3126594.3126599 + +[6] F. Conti, F. Barbagli, R. Balaniuk, M. Halg, C. Lu, D. Morris, L. Sentis, J. Warren, O. Khatib, and K. Salisbury. The chai libraries. In Proceedings of Eurohaptics 2003, pp. 496-500. Dublin, Ireland, 2003. + +[7] E. Coumans. Bullet 2.83 physics sdk manual, 2015. + +[8] E. Coumans. Bullet physics simulation. In ACM SIGGRAPH 2015 Courses, p. 1. 2015. + +[9] L. Dominjon, A. Lécuyer, J. M. Burkhardt, P. Richard, and S. Richir. Influence of control/display ratio on the perception of mass of manipulated objects in virtual environments. Proceedings - IEEE Virtual Reality, 2005:19-26, 2005. + +[10] T. Endo, H. Kawasaki, T. Mouri, Y. Ishigure, H. Shimomura, M. Matsumura, and K. Koketsu. Five-fingered haptic interface robot: HIRO III. IEEE Transactions on Haptics, 4(1):14-27, 2011. doi: 10.1109/TOH.2010.62 + +[11] F. G. Hamza-Lup, C. M. Bogdan, D. M. Popovici, and O. D. Costea. A survey of visuo-haptic simulation in surgical training. eLmL - International Conference on Mobile, Hybrid, and On-line Learning, pp. 57-62, 2011. + +[12] E. Kandel, S. Mack, T. Jessell, J. Schwartz, S. Siegelbaum, and A. Hudspeth. Principles of Neural Science, Fifth Edition, chap. 21, p. 451. McGraw-Hill's AccessMedicine. McGraw-Hill Education, 2013. + +[13] J. Kildal. Kooboh: Variable Tangible Properties in a Handheld Haptic-Illusion Box. EuroHaptics 2012, Part II, LNCS, 7283:191-194, 2012. + +[14] H. Kim, R. C. Park, H. B. Yi, and W. Lee. HapCube: A tactile actuator providing tangential and normal pseudo-force feedback on a fingertip. ACM SIGGRAPH 2018 Emerging Technologies, SIGGRAPH 2018, pp. 1-13, 2018. doi: 10.1145/3214907.3214922 + +[15] S. Kim and G. Lee. Haptic feedback design for a virtual button along force-displacement curves. UIST 2013 - Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, pp. 91-96, 2013. doi: 10.1145/2501988.2502041 + +[16] L. Lin and S. Jörg.
The effect of realism on the virtual hand illusion. Proceedings - IEEE Virtual Reality, 2016-July:217-218, 2016. doi: 10.1109/VR.2016.7504731 + +[17] J. R. Lykke, A. B. Olsen, P. Berman, J. A. Bærentzen, and J. R. Frisvad. Accounting for Object Weight in Interaction Design for Virtual Reality. (May), 2019. + +[18] R. McCloy and R. Stone. Science, medicine, and the future: Virtual reality in surgery. BMJ, 323(7318):912-915, 2001. doi: 10.1136/bmj.323.7318.912 + +[19] K. Minamizawa, S. Fukamachi, H. Kajimoto, N. Kawakami, and S. Tachi. Gravity grabber: Wearable haptic display to present virtual mass sensation. ACM SIGGRAPH 2007: Emerging Technologies, SIGGRAPH'07, pp. 3-6, 2007. doi: 10.1145/1278280.1278289 + +[20] D. Prattichizzo, F. Chinello, C. Pacchierotti, and M. Malvezzi. Towards wearability in fingertip haptics: A 3-DoF wearable device for cutaneous force feedback. IEEE Transactions on Haptics, 6(4):506-516, 2013. doi: 10.1109/TOH.2013.53 + +[21] M. Samad, E. Gatti, A. Hermes, H. Benko, and C. Parise. Pseudo-haptic weight: Changing the perceived weight of virtual objects by manipulating control-display ratio. Conference on Human Factors in Computing Systems - Proceedings, pp. 1-13, 2019. doi: 10.1145/3290605.3300550 + +[22] J. Seo, S. Mun, J. Lee, and S. Choi. Substituting motion effects with vibrotactile effects for 4D experiences. In Conference on Human Factors in Computing Systems - Proceedings, vol. 2018-April. Association for Computing Machinery, apr 2018. doi: 10.1145/3173574.3174002 + +[23] P. Song, W. B. Goh, W. Hutama, C.-W. Fu, and X. Liu. A handle bar metaphor for virtual object manipulation with mid-air interaction. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 1297-1306, 2012. + +[24] P. van der Straaten. Interaction affecting the sense of presence in virtual reality. 2000. + +[25] A. Zenner and A. Krüger. Shifty: A Weight-Shifting Dynamic Passive Haptic Proxy to Enhance Object Perception in Virtual Reality.
IEEE Transactions on Visualization and Computer Graphics, 23(4):1312-1321, 2017. doi: 10.1109/TVCG.2017.2656978 + +[26] A. Zenner and A. Krüger. Drag:on - A virtual reality controller providing haptic feedback based on drag and weight shift. Conference on Human Factors in Computing Systems - Proceedings, 2019. doi: 10.1145/3290605.3300441 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/NDtf0xHcIem/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/NDtf0xHcIem/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..9f5f767408edeb9bbf1aeebb158759af173fe472 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/NDtf0xHcIem/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,217 @@ +§ SIMULATING MASS IN VIRTUAL REALITY USING PHYSICALLY-BASED HAND-OBJECT INTERACTIONS WITH VIBRATION FEEDBACK + +Figure 1: Physics-based interactions with virtual objects using a co-located virtual hand (the left figure) are augmented using vibration feedback proportional to the objects' mass and acceleration (the right figure). + +§ ABSTRACT + +Providing the sense of mass for virtual objects using ungrounded haptic interfaces has proven to be a complicated task in virtual reality. This paper proposes using a physically-based virtual hand and a complementary vibrotactile effect on the index fingertip to give the sensation of mass to objects in virtual reality. The vibrotactile feedback is proportional to the net (unbalanced) force acting on the virtual object and is modulated based on the object's velocity.
For evaluating this method, we set up an experiment in a virtual environment where participants wear a VR headset and attempt to pick up and move different virtual objects using a virtual physically-based force-controlled hand while a voice-coil actuator attached to their index fingertip provides the vibrotactile feedback. Our experiments indicate that the virtual hand and our vibration effect give users the ability to discriminate and perceive the mass of virtual objects. + +Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction devices-Haptic devices; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality + +§ 1 INTRODUCTION + +Virtual Reality (VR) has significantly revolutionized simulated human experiences. VR enables an immersive virtual experience by simulating and triggering most of our senses as if we were present in another environment. Notably, in VR it is possible to see one's own co-located virtual hands, perceive them as one's real hands, and interact with virtual objects [16]. However, virtual objects have no real mass, and the challenge is to provide the touch and visual cues that we rely on for mass perception. The physical cues include skin stretch and contact pressure at the fingertips (cutaneous feedback) and proprioceptive feedback from multiple muscles and joints (kinesthetic feedback). + +Grounded haptic devices can render the necessary forces for kinesthetic and cutaneous haptic feedback. However, their size, weight, and limited workspace restrict free-hand movements, making them less desirable in various VR applications. + +Alternatively, ungrounded haptic devices (such as finger-mounted or hand-held devices) can be built more compactly and lighter, making them more convenient to use in a larger workspace. Sensing the mass of a virtual object in every direction requires more complex ungrounded hardware with higher degrees of freedom.
Such devices require multiple actuators and can limit hand and finger movements.

Another approach to overcoming the hardware limitations is to use visio-haptic illusions. These methods aim to trick the brain into perceiving mass by manipulating the objects' visual cues. For example, limiting a virtual object's velocity [1] or scaling its displacement relative to the user's hand [21] has been shown to give a sense of mass to objects. However, these methods either are not physically realistic or reduce the co-location between the actual and virtual hands.

In this paper, we introduce a novel mass rendering method that combines a visio-haptic technique with a simple finger-mounted vibration actuator. For the visio-haptic part, we replicate the visual cues that humans perceive during real-world hand interactions with physical objects. We use a force-controlled, physically-based virtual hand in VR to interact with virtual objects, which limits how heavy an object the user can pick up and how fast they can accelerate it, based on its mass. However, it is difficult to distinguish between light objects using this technique alone, so we complement our visio-haptic method with haptic feedback. The haptic actuator that renders the feedback should be small and compact enough to let individual fingers move independently and perform dexterous interactions. We also prefer an ungrounded device, since it allows a larger workspace. One way to reduce the device's size is to use haptic feedback that is directionally invariant to our sense of touch: if the direction of the haptic stimulus is detectable by the sense of touch, multiple actuators are needed to render the effect in different directions during a virtual interaction. Therefore, we employ an ungrounded, direction-invariant haptic effect to complement our physically-based virtual hand.
We explore using a mechanical vibration feedback effect to achieve ungrounded mass rendering for virtual objects. In our work, while the virtual object is in the user's grasp, a sinusoidal vibration proportional to the object's mass and acceleration is played through an ungrounded voice-coil actuator at the tip of the user's index finger. An overview of the proposed method is shown in Fig. 1.

When moving two objects with different masses, in addition to the physically-based visio-haptic feedback, users feel a proportionally stronger vibration while grasping the heavier object. This vibrotactile feedback gives the user a cue for the net force acting on the virtual object. To make this haptic feedback direction-invariant, we use frequencies above 100 Hz. These frequencies are sensed by the Pacinian mechanoreceptors, which are not sensitive to the direction of the stimulus.

To evaluate the proposed physically-based virtual hand and the vibration feedback, we conducted a user study in which participants interact with virtual objects of different masses and perform virtual tasks. Using qualitative and quantitative methods, we show that the physically-based hand gives a sense of mass to virtual objects, and that adding the vibration feedback improves mass perception and discrimination.

The main contribution of this work is the design, development, and evaluation of a novel mass rendering method for virtual objects using physically-based hand-object interactions and vibration feedback.

§ 2 RELATED WORK

In this section, we review relevant literature on simulating the mass of virtual objects during a VR experience. We also discuss modes of interaction in VR, including physically realistic grasping and interaction.

Grounded haptic devices are highly sought after in tool-mediated applications where precision and fidelity are essential, such as surgical training [11, 18].
Hand-wearable grounded devices have also been developed. HIRO III [10] is an example of a five-fingered grounded haptic interface, with three DoF for each of its haptic fingers and a 6-DoF base, capable of providing high-precision force feedback to the hand while attached to each of the fingertips. The main challenge with grounded devices is their limited workspace.

Ungrounded haptic devices are attached to the user's body instead of a fixed point in the room, which allows a larger workspace. These devices are either hand-held or attached to the user's fingers, hands, or body. Minamizawa et al. [19] introduced a fingertip-mounted ungrounded haptic device called the Gravity Grabber that can create a sense of weight when grabbing virtual objects in specific orientations. The Gravity Grabber achieves this using one degree of freedom for shear-force feedback and another in the direction normal to the fingertip skin. However, since our skin can detect the direction of skin stretch, this method cannot give a sense of weight to a virtual object in all orientations. Sensing the weight and inertia of a virtual object in all directions requires an ungrounded device with more complex hardware and higher degrees of freedom, such as the works of Chinello et al. [4] and Prattichizzo et al. [20]. Such devices, however, are mechanically complicated, since they require multiple actuators, and they limit hand and finger movements. In our method, we use one haptic actuator to render the mass of objects in all directions, since we use sinusoidal vibration feedback.

Hand-held ungrounded devices are desirable for simulating interactions with hand-held tools such as a hammer or a baseball bat. However, they limit the movement of the fingers and the hand.
Zenner and Krüger [26] introduced Drag:on, a custom VR hand controller with two actuated fans that can dynamically adjust the controller's aerodynamic properties, thereby changing the sensed inertia of a virtual object. Zenner et al. [25] introduced Shifty, a hand-held VR controller with an internal prismatic joint connected to a weight that shifts the device's center of mass, resulting in different rotational inertia and resistance as the user interacts with various virtual tools. In the work of Lykke et al. [17], users hold two hand controllers to pick up (scoop) round virtual objects, and they must keep their hands closer together when the objects are heavier. Our method tracks the user's own hand instead of using a VR controller, which increases the sense of ownership and realism of the virtual hand [16] while not limiting finger and hand mobility.

Humans can use visual cues to determine the weight of a virtual object. Backstrom [1] gives the sensation of mass to virtual objects in VR by limiting a virtual object's velocity based on how heavy it is. Such constraints on the object's movements are not physically realistic. Dominjon et al. [9] show that manipulating the control-display ratio of virtual objects can change the perceived mass in virtual environments. In other words, if a virtual object's displacement is proportionally increased compared to the user's actual hand motion, its mass is perceived as lighter than it is, and vice versa. Samad et al. [21] utilize the same technique in VR to change the perceived weight of wooden cubes. However, one downside of changing the control-display ratio is that the offset between the actual and virtual representations of the object increases as the hand gets further from the initial contact point. Bi-manual coordination and interaction can therefore become difficult, since the virtual hand's relative position differs from the actual hand's, even when it is not moving.
Our approach aims to give a sense of mass to objects by using a physically-based virtual hand that enables realistic interactions with virtual objects and preserves the co-location between the virtual and actual palms when the hands are steady or when their acceleration is not changing.

Interaction is an important part of an immersive virtual experience and increases the user's sense of presence [3, 24]. There are various ways to enable interactions between a virtual hand and virtual objects. In gesture- and metaphor-based approaches, the interaction uses specified hand commands. For example, if the virtual hand is in a grasping pose and near a virtual object, that object's orientation follows the virtual hand. Song et al. [23] enable 9-DoF control of a virtual tool using bi-manual gestures. Gesture-based approaches have proven robust and effective; however, they are unintuitive and artificial by nature, and therefore not suitable for physically realistic interaction. Another approach is to use physically-based manipulation techniques. For example, Borst and Indugula [2] propose virtual coupling of the tracked hand to a rigid virtual hand that enables whole-hand grasping. In this method, the palm and finger joints of the tracked hand are connected to the corresponding parts of the virtual hand using linear and torsional virtual spring-dampers. Moreover, since the spring-damper links apply a limited, proportional amount of force, this method shares the same physical limitations as a realistic interaction. We modify this method to preserve the co-location between the virtual and actual palms and evaluate it for mass rendering in VR.

Vibration feedback can be used to simulate different touch stimuli. We use sinusoidal vibrations to render the mass of a virtual object. Asymmetric vibration is another type of vibration feedback, used by Choi et al. [5] to simulate weight in VR.
These vibrations cause skin stretch, and the user can detect their direction; therefore, multiple actuators are required to simulate weight and inertia in all directions. Moreover, the intensity of these asymmetric vibrations is much stronger (up to 20 g, where g = 9.8 m/s²) compared to our vibration feedback (less than 1 g). Kildal [13] uses mechanical vibration grains to create the illusion of compliance for a rigid box. Sinusoidal vibration feedback has been used in other haptic applications, such as simulating a button press on a rigid box [15] and a virtual button in VR [14]. Moreover, Seo et al. [22] simulate a moving cart by adding vibration feedback to a chair and changing the amplitude and frequency of the vibration proportionally to the simulated cart's angular velocity.

Existing mass rendering methods in VR limit hand and finger movements or engage users in unrealistic interactions. Our physically-based interaction is realistic and preserves the co-location of the actual and virtual palms when the hand is under no or constant acceleration, and our vibration feedback works with a single actuator on the fingertip without limiting hand and finger movements.

§ 3 FORCE-CONTROLLED VIRTUAL HAND

One of the goals of this paper is to explore the effect of a physically-based interaction on mass perception and discrimination. There is a limit to the weight of real-world objects that we can pick up with our hands: our grip strength and the force that we can apply to a grasped object are bounded. Therefore, there is a limit to how fast we can accelerate an object based on its mass. In VR, we hypothesize that physically-based interaction between the user's virtual hand and an object creates a sense of mass for that object. For this purpose, we track the user's hand, couple it with a 3D hand model, and use a physically-based simulation for hand-object interactions.
We use a vision-based hand tracking system (the Leap Motion tracker) to allow the user's hand and fingers to move freely, providing a virtual experience analogous to real-world interaction.

For modeling the hand, we consider one rigid palm and five fingers, each of which has three rigid phalanges. Interaction between VR objects and the force-controlled virtual hand is more realistic than interaction between the tracked hand and VR objects. For example, when grasping an object, the tracked hand can pass inside the object, but the virtual hand grasps around it. Therefore, we display only the force-controlled virtual hand (the VR hand). The VR hand must be co-located and coupled with the tracked hand. To achieve this, rather than using a purely geometric approach, we modify the physically-based method described by Borst and Indugula [2]. The physically-based coupling helps us efficiently prevent unrealistic collisions and interactions between the VR hand and objects. In this coupling method, we associate one spring-damper with each rigid component of the fingers. The spring-dampers apply forces to the VR hand's components to match their positions and orientations to the tracked hand's corresponding components. To obtain consistent behavior from the physical simulation, we use a fixed-size VR hand. A fixed-size VR hand does not directly influence efficiency in virtual object manipulation tasks, sense of hand ownership, realism, or immersion in VR [16].

The spring-damper coupling applies both force and torque to the virtual part. The force at time $t$, $\vec{F}(t)$, is proportional to $\vec{\Delta}_{\text{Position}}(t)$, the distance between the centers of mass of the two corresponding parts, and the torque at time $t$, $\vec{\tau}(t)$, is proportional to $\vec{\Delta}_{\text{Rotation}}(t)$, the difference in their rotations.
To prevent the virtual part from overshooting its target position and orientation, the spring-damper applies an additional force to the virtual object proportional to $\vec{V}(t)$, its linear velocity, and a torque proportional to $\vec{\omega}(t)$, its angular velocity. That gives:

$$
\vec{F}(t) = k_p'\,\vec{\Delta}_{\text{Position}}(t) - k_d'\,\vec{V}(t), \tag{1}
$$

$$
\vec{\tau}(t) = k_p''\,\vec{\Delta}_{\text{Rotation}}(t) - k_d''\,\vec{\omega}(t), \tag{2}
$$

where $k_p'$, $k_p''$, $k_d'$, and $k_d''$ are the spring-damper coefficients. These parameters are set during preliminary experiments to ensure that the VR hand is responsive, closely and smoothly follows the actual hand, and can pick up virtual masses of up to 4 kg.

If we used a similar spring-damper to couple the palms, then when the user holds an object with the VR hand, the distance between the VR hand and the actual hand would increase until the spring-dampers' forces equal the weight of the VR hand and the object it is holding. This causes a discrepancy between the visual and proprioceptive senses.
To solve this problem, we introduce an additional term in the spring-damper for the palms:

$$
\vec{F}_{\text{Palm}}(t) = k_p'\,\vec{\Delta}_{\text{Position}}(t) - k_d'\,\vec{V}(t) + k_i' \sum_{j=0}^{t} \vec{\Delta}_{\text{Position}}(j), \tag{3}
$$

$$
\vec{\tau}_{\text{Palm}}(t) = k_p''\,\vec{\Delta}_{\text{Rotation}}(t) - k_d''\,\vec{\omega}(t) + k_i'' \sum_{j=0}^{t} \vec{\Delta}_{\text{Rotation}}(j), \tag{4}
$$

where $k_i'$ and $k_i''$ are spring-damper coefficients. The added summation term applies force and torque proportional to the accumulation of $\vec{\Delta}_{\text{Position}}(t)$ and $\vec{\Delta}_{\text{Rotation}}(t)$ over time. Therefore, when the user holds an object, $\vec{F}_{\text{Palm}}(t)$ and $\vec{\tau}_{\text{Palm}}(t)$ increase until the virtual palm's position and orientation match the tracked palm's in steady state. $k_i'$ and $k_i''$ are set during preliminary experiments so that the positions and orientations of the coupled palms quickly match when the hand is not accelerating. Also, $k_p'$, $k_p''$, $k_d'$, and $k_d''$ are set independently for the palm, since it has different physical properties than the phalanges.

Figure 2: A weak and a strong virtual grip and the corresponding actual hands.

Using a force-controlled virtual hand should give a sense of mass and allow mass discrimination between virtual objects.
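As a rough illustration of the coupling in Eqs. (1)-(4), the position terms can be sketched in a few lines of Python. This is a minimal one-axis sketch with made-up gain values, not the paper's Chai3D/Bullet implementation:

```python
import numpy as np

class PalmCoupling:
    """Spring-damper coupling with an accumulated error term (Eq. 3).

    Without the k_i term (a plain PD coupling, Eq. 1), holding a weight
    leaves a steady-state offset between the tracked and virtual palms;
    the accumulated term grows until that offset vanishes.
    """

    def __init__(self, kp, kd, ki):
        self.kp, self.kd, self.ki = kp, kd, ki  # made-up gains
        self.accum = np.zeros(3)                # running sum of position errors

    def force(self, tracked_pos, virtual_pos, virtual_vel):
        delta = tracked_pos - virtual_pos       # Δ_Position(t)
        self.accum += delta                     # Σ_j Δ_Position(j)
        # Eq. (3): spring term - damping term + accumulated term
        return self.kp * delta - self.kd * virtual_vel + self.ki * self.accum

coupling = PalmCoupling(kp=50.0, kd=5.0, ki=1.0)
f = coupling.force(np.array([0.0, 1.0, 0.0]),   # tracked palm
                   np.array([0.0, 0.95, 0.0]),  # virtual palm lags below
                   np.zeros(3))                 # virtual palm at rest
# f points upward (positive y), pushing the virtual palm toward the tracked one
```

Repeated calls with a persistent offset let the accumulated term dominate, which is why the palms re-align in steady state while a plain PD coupling would settle with a constant error.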
However, we suspect that this claim is stronger in some scenarios and weaker in others. While the user grasps and moves a light object, the spring-damper forces counteract the forces of gravity and inertia on the object. Therefore, using our virtual hand, a user who grasps an object with a low virtual mass can easily pick it up and quickly move it around the workspace with high acceleration without it slipping out of their grip. For a heavier object, the user can still pick it up, but they must increase their effort, for example by using more fingers for grasping or by closing their grip further so that the spring-dampers apply more force to the object (Fig. 2). They also cannot accelerate it as fast as lighter objects, since the inertial forces are higher and can overcome the spring-dampers in the virtual hand and open the virtual grasp. Depending on the spring-damper coefficients, beyond a certain mass it becomes very difficult, and eventually impossible, for the user to move or pick up the object. We hypothesize that the limit on how fast the user can accelerate the virtual object in hand, and how challenging the object is to pick up, gives the user a sense of the virtual object's mass and enables them to discriminate two objects by mass. However, with this technique it is hard to perceive the difference in mass between two light objects (< 1 kg), since it is almost effortless to pick both up off the ground and move them quickly without dropping them. To overcome this problem, we introduce a vibration feedback effect to complement our VR hand.

§ 4 VIBROTACTILE FEEDBACK

In day-to-day physical interactions with real-world objects, we can feel an object's mass and compare it to heavier or lighter objects through our sense of touch. Virtual experiences that do not provide haptic feedback lack realism compared to real-world experiences.
One modality of haptic feedback is vibrotactile feedback in the form of mechanical waves or vibrations.

Our goal is to complement the VR hand in giving the user a perception of an object's mass by communicating the net force they apply to the object. To achieve this without limiting hand and finger movements, we use a single actuator to render our haptic feedback. We use sinusoidal vibration feedback with a frequency range between 100 Hz and 150 Hz, making it perceivable only by the Pacinian mechanoreceptors in the fingertip skin. The Pacinian mechanoreceptors cannot detect the direction of mechanical waves; therefore, one actuator is sufficient to render our haptic feedback in all directions.

We strap a voice-coil actuator (VCA) to the fingertip of the index finger. We chose the index finger because it plays a critical role in picking up objects with a pinch grasp. Other fingers, such as the thumb and the middle finger, can also play an important role in grasping; however, attaching voice-coil actuators to multiple fingers limits the relative movement of the fingertips and manual dexterity.

While a user grasps an object, we render the vibration feedback $O(t)$ with frequency $O(t)_F$. The amplitude of $O(t)$ is proportional to the object's mass $M$ and acceleration $A(t)$. This gives:

$$
O(t) = \alpha M A(t) \sin\left(2\pi t\, O(t)_F\right), \tag{5}
$$

where $\alpha$ is a scaling constant that controls the range of vibration energy perceived by the user. The vibration feedback should be just strong enough that users can perceive the vibration when slowly moving the lightest weight in the scene. The value of $\alpha$ also depends on the hardware components of the haptic chain, such as the signal amplifier and the haptic actuator.
For our setup, we set the value of $\alpha$ such that, if the user accelerates a 1 kg object at 1 g, the measured vibration at the fingertip is on average 0.32 g, which allows users to perceive the vibration feedback when slowly moving the lightest weight (0.25 kg) in our experiments. The frequency of the output signal, $O(t)_F$, changes dynamically from 100 Hz to 150 Hz based on the velocity of the virtual object, $V(t)$:

$$
O(t)_F = \min\left(150,\; 100\,\frac{\left|V(t)\right| + 2}{2}\right), \tag{6}
$$

where at speeds near zero the signal's frequency is 100 Hz, and as the speed increases to about one m/s it rises to 150 Hz. To ensure a smooth vibration signal, we apply a second-order Butterworth low-pass filter to $V(t)$ and $A(t)$. The filter has a sample rate of 1000 Hz and a corner frequency of 20 Hz (-3 dB at 20 Hz).

We set the signal's amplitude proportional to $M A(t)$, which, according to Newton's second law of motion, represents the net force acting on the virtual object. In our method, we ignore balanced or counteracted forces acting on an object, since the counteracting forces from grasping can be similar between a light and a heavy object; for example, we can grip a light object just as hard as a heavier one.

During a virtual experience, the voice-coil actuator is always strapped to the user's index fingertip. However, the vibration feedback is rendered only while the user's virtual hand grasps a virtual object, and not during free-hand motions in the scene.
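Combining Eqs. (5) and (6), one output sample of the vibration signal could be computed roughly as follows. This is an illustrative Python sketch, not the study software; the default $\alpha$ is a placeholder, and the Butterworth pre-filtering of $V(t)$ and $A(t)$ is omitted:

```python
import math

def carrier_frequency(speed):
    """Eq. (6): ramp from 100 Hz at rest up to a 150 Hz cap near 1 m/s."""
    return min(150.0, 100.0 * (abs(speed) + 2.0) / 2.0)

def vibration_sample(t, mass, accel, speed, alpha=0.32):
    """Eq. (5): a sinusoid whose amplitude tracks the net force M*A(t)."""
    return alpha * mass * accel * math.sin(2.0 * math.pi * t * carrier_frequency(speed))
```

Doubling the mass doubles the sample amplitude for the same motion, which is exactly the cue communicated to the skin.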
To detect whether the user is grasping a virtual object, we check whether the virtual object is off the ground and touching the virtual hand's palm and the distal joint of the thumb, index, or middle finger. If a grasp is detected, the vibration feedback is rendered through the voice-coil actuator.

Whenever the system detects that the user is no longer grasping a virtual object, the vibration feedback stops. However, in a physical simulation, even while the user is grasping the object, the hand parts may momentarily lose contact with it for a few cycles, which would cause on/off pulses in our vibration feedback. To avoid these impulse noises in the signal, we stop the vibration feedback only after no grasp has been detected for ten milliseconds.

Figure 3: The output voltage of the vibration feedback for two virtual objects with mass values 0.5 kg, $O_1(t)$, and 1 kg, $O_2(t)$, during an arbitrary shaking movement with acceleration $A(t)$ and velocity $V(t)$.

When the user picks up two virtual objects with different masses and moves them around the scene with the same motion, the vibration effect is stronger for the heavier object, in proportion to the mass difference. In other words, the user feels more energetic mechanical vibrations on their skin when interacting with a heavier object. We suspect users perceive these vibrations as a resistance force to acceleration (similar to the force of inertia), which leads them to perceive the mass of virtual objects.

The limitation of the force-controlled hand is that, given two light virtual objects where one is twice as heavy as the other, it is difficult to perceive the mass difference, since both masses are well within the threshold of what the virtual hand can grasp and move around the VR scene.
However, with the presented vibration feedback, the vibration at the user's skin for the heavier object has twice the amplitude (Fig. 3). As a result, we expect the user to perceive the mass difference between the objects based on the vibration feedback.

§ 5 EVALUATION

We evaluate our VR mass rendering techniques and verify our claims using both qualitative and quantitative measurements. We conducted a user study in which participants interact with virtual objects using the force-controlled, co-located virtual hand and perform several object manipulation and comparison tasks. Moreover, we study the effect of the proposed vibration feedback on participants' ability to perceive and compare the masses of virtual objects. More specifically, we assess these two hypotheses in our evaluation:

 * Grasping and manipulating virtual objects using a co-located, physically-based hand model in virtual reality gives a sense of mass perception and allows some degree of mass discrimination between virtual objects.

 * The proposed vibration feedback can improve the sense of mass perception and enhance mass discrimination precision during virtual interactions between a physically-based virtual hand and virtual objects.

Figure 4: The voice-coil actuator is strapped to the index fingertip of the user's dominant hand.

To examine the validity of the first hypothesis, participants perform virtual tasks involving interactions with objects of different masses using the VR hand. However, evaluating the results of the VR hand interactions alone is not enough to validate our first hypothesis: the virtual environment runs in a physics engine, and users might pick up cues about the mass difference between objects that do not come from the VR hand interactions.
These cues include how the objects interact with each other, how they bounce when dropped on the virtual ground, and the speed at which they fall in the presence of air friction. To control the experiment for these additional cues, we ask participants to interact with each object individually and not to push or touch an object using another. Additionally, we add a control interaction mode to our platform, called the spherical cursor. In this mode, instead of a co-located hand, users see only a spherical cursor co-located with the center of their palm. If the spherical cursor is within an object and the user puts their hand in a grasp pose, that object follows the cursor around the virtual scene until the user opens their hand. During grasping with the spherical cursor, we move the object by applying a force to it in the cursor's direction. This force is proportional to the object's mass, so objects with different masses follow the cursor at the same speed and acceleration. Therefore, comparing the quantitative and qualitative results from user interactions using the force-controlled hand versus the spherical cursor as a baseline allows us to validate the first hypothesis.

To test the second hypothesis, participants interact with virtual objects using the force-controlled hand both with and without the vibration feedback, which allows us to compare the results and analyze the effectiveness of the vibrotactile feedback for mass perception and discrimination.

§ 5.1 SETUP

In this subsection, we describe the hardware and software components of the study setup and the range of masses we use for our virtual objects. We use the MMXC-HF VCA by Tactile Labs, a relatively compact tactile actuator (36 mm × 9.5 mm × 9.5 mm), and the Tactile Labs QuadAmp multi-channel signal amplifier. A pair of thin wires connects the VCA to the signal amplifier, which is placed on a nearby table.
The cables from the actuator point outward from the user's finger, limiting the chance of the cables touching the user's hands during virtual interactions. Using a 3D-printed mount, we attach the voice-coil actuator to the user's index fingertip (Fig. 4). We use the PC-powered Oculus Rift as our VR interface, which allows for external PC-based graphical computation. For hand tracking, we attach a Leap Motion controller to the front of the Oculus Rift headset.

In our system, we use the Bullet physics simulation [8] as our physics engine. One desirable feature of the Bullet library is that it permits control of the virtual hand by applying virtual forces and torques from an external source. This feature enables us to implement the virtual coupling between our virtual hand and the tracked hand.

Figure 5: A participant interacting with a virtual object while wearing the VR headset with the Leap Motion hand tracker, the VCA, and noise-canceling headphones.

To render the virtual scene to the VR headset and work with the Bullet physics simulation, we use the Chai3D library. Chai3D [6] is a platform-agnostic haptics, visualization, and interactive real-time simulation library. Moreover, it supports visualization through the Oculus Rift headset and has built-in Bullet physics integration, making it ideal for immersive and physically realistic haptic experiences.

In our study, we use cubes as the shape of our virtual objects, since they are easy to grasp. During our experiments, there may be multiple virtual cubes in the scene with different masses. In setting the mass range for our experiments, we must consider the physics engine we use: the Bullet physics engine recommends keeping the mass of objects around 1 kg and avoiding very large or small values [7].
Therefore, during our preliminary experiments, we set the virtual coupling coefficients so that users could pick up virtual cubes with masses up to 4 kg; beyond that mass, it becomes too difficult to pick up the virtual cubes. Since we expect users to be able to interact with and pick up any virtual cube in the scene, we chose 2.5 kg as the upper mass limit for the heaviest objects in our user studies and 0.25 kg as the lower mass limit for the lightest objects.

§ 5.2 PARTICIPANTS

Ten participants (5 female, 5 male) took part in this study. All participants were right-handed. Three participants had never used VR headsets before; one participant used them a few times per week, and the rest at most a few times per year. Seven of them had interacted with virtual objects during their VR experiences, and three had used haptic devices in VR games and applications. This study was approved by the University of Calgary Conjoint Faculties Research Ethics Board (REB18-0708). Participants received $20 compensation for taking part in the study.

§ 5.3 STUDY

We begin the study by spending a few minutes (< 8) familiarizing the participants with the VR headset, the Leap Motion hand tracker, and the virtual study environment. After placing the haptic actuator on their dominant hand's fingertip, they practice picking up and moving a virtual cube (1.25 kg) using the virtual co-located hand. We ask participants to always use their index finger when grasping, since the haptic actuator is attached to it. They are also encouraged to engage more fingers or tighten their grip to increase grasping strength, and to move the training object around the scene both slowly and quickly. For consistency, once the tasks start, we ask participants to use only their dominant hand to interact with the virtual elements in the scene.
During the virtual tasks, participants wear active noise-canceling headphones through which white noise is played to block any audible signal from the haptic actuator (Fig. 5).
+
+Figure 6: Two virtual cubes with random weights are placed in front of the participant to compare. The co-located spherical cursor mode is active, and the "Vibration Off" label indicates to the participant that they should not expect any vibration from the voice-coil actuator.
+
+Figure 7: Three virtual cubes with random weights are placed in front of the participant to sort in ascending order from left to right. The "Vibration On" label indicates to the participant that they should expect vibration from the voice-coil actuator when picking up objects.
+
+In the first task, we present participants with six pairs of cubes and ask them to interact with, grasp, and move the objects, and to think aloud about the experience. Furthermore, we ask them to compare the two cubes based on their mass and say whether they feel the cubes have the same mass or whether one is slightly or considerably (or to whatever degree they perceive it) heavier than the other. Participants interact with virtual objects using the three interaction modes in the following order: spherical cursor, virtual hand without vibration feedback, and virtual hand with vibration feedback. As an example, Fig. 6 shows this task's setup with the interaction mode set to the spherical cursor. For each interaction mode, participants compare two pairs of cubes. One pair has the largest mass difference given our mass range (0.25 and 2.5 kg), and the other pair has a smaller mass difference (0.25 and 0.5 kg). The system randomly decides whether the smaller or larger mass-difference pair is presented first and randomly places the two cubes on the table for each set to avoid learning from previous rounds.
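The virtual coupling described earlier (external forces applied by the physics engine to pull the simulated hand toward the tracked hand) is commonly realized as a spring-damper. The sketch below is illustrative only: the gains `k` and `d` are hypothetical placeholders, as the paper does not report its coupling coefficients.

```python
# Illustrative spring-damper virtual coupling. The gains k and d are
# hypothetical placeholders; the paper does not report its coefficients.
def coupling_force(tracked_pos, hand_pos, tracked_vel, hand_vel,
                   k=500.0, d=20.0):
    """Force pulling the physics-simulated hand toward the tracked pose,
    damped by the velocity difference (applied each simulation step)."""
    return tuple(k * (tp - hp) + d * (tv - hv)
                 for tp, hp, tv, hv in zip(tracked_pos, hand_pos,
                                           tracked_vel, hand_vel))

# A simulated hand lagging 1 cm behind the tracker on x is pulled forward.
force = coupling_force((0.01, 0.0, 0.0), (0.0, 0.0, 0.0),
                       (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

Because the coupling force is bounded by the gains, a sufficiently heavy object will slip from the grasp, which is exactly the behaviour that limits liftable mass in the preliminary experiments above.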
+
+In the next part, we ask participants to sort virtual cubes based on their mass. In sorting, a higher number of objects to sort means the participant spends more time picking up and moving objects around the scene, which results in a fuller user experience in comparing weights. However, a higher number of objects also increases the average time to complete the task, limiting the number of sorting rounds users can perform during a study session. Our preliminary experiments concluded that three cubes offer a reasonable balance between sorting time and user interaction with objects.
+
+We quantized our mass range (0.25 kg to 2.5 kg) into two weight sets of size three. Having more than one weight set allows a more in-depth analysis of the interaction modes across our mass range. Weber's law states that the difference in magnitude needed to discriminate between a base stimulus and other stimuli increases proportionally to the intensity of the base stimulus [12]. We can easily differentiate a 0.5 kg mass from a 1 kg mass, but it is harder to distinguish a 10 kg mass from a 10.5 kg mass even though both pairs have the same weight difference. Therefore, we chose our mass values with equal ratios between successive values, i.e., as a geometric series. That gives a light weight set (0.25 kg, 0.44 kg, 0.79 kg) and a heavy weight set (0.79 kg, 1.4 kg, 2.5 kg).
+
+Participants sort random permutations of the light and heavy weight sets using the three different interaction modes (spherical cursor, virtual hand without vibration feedback, and virtual hand with vibration feedback), giving six sorting modes. As an example, Fig. 7 shows this task's setup with the interaction mode set to the virtual hand with vibration feedback.
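The geometric spacing of the two weight sets can be reproduced directly: five masses with equal ratios spanning 0.25 kg to 2.5 kg, with the light set taking the first three values and the heavy set the last three. The sketch below recovers the reported values up to rounding (1.41 kg vs. the 1.4 kg quoted in the text).

```python
# The mass range 0.25-2.5 kg quantized into five values with equal ratios
# between neighbours (a geometric series, ratio (2.5/0.25)**(1/4) ~ 1.78).
m_lo, m_hi, n = 0.25, 2.5, 5
r = (m_hi / m_lo) ** (1 / (n - 1))
masses = [round(m_lo * r ** i, 2) for i in range(n)]

# Light and heavy sets overlap at the middle value, 0.79 kg.
light_set, heavy_set = masses[:3], masses[2:]
```

Equal ratios (rather than equal differences) keep each adjacent pair equally discriminable under Weber's law.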
In all sorting modes, three virtual cubes with similar appearance and size are placed on a virtual surface, and participants have to place them from left to right in ascending order based on perceived mass. Participants perform six rounds of sorting for each mode. During each round, sorting modes are ordered randomly to remove the learning effect between modes. Before the sorting task begins, we rotate between the modes to familiarize the participant with the scene. Furthermore, we ask participants to grasp each object at least once before finalizing their decision. We also recommend keeping each sorting round under a minute; however, this is not a hard limit.
+
+When the sorting task finishes, participants fill out a questionnaire regarding their experience during the two virtual tasks, and we then ask them to elaborate on their answers during a semi-structured interview. Our post-session questionnaire is as follows (each question is repeated for each of the interaction modes):
+
+ * While interacting with objects, I could perceive their mass. 1 to 5 (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree)
+
+ * I could feel one cube was heavier than the other. 1 to 5 (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree)
+
+ * How was your confidence level in sorting objects? 1 to 5 (Not Confident at All, ..., Very Confident)
+
+ * How realistic were the interactions with objects? 1 to 5 (Very Unrealistic, Unrealistic, Neutral, Realistic, Very Realistic)
+
+ * Would you recommend experiencing the "" in VR games during interactions with virtual objects? 1 to 5 (Do Not Recommend at All, ..., Neutral, ..., Highly Recommend)
+
+### 5.4 RESULTS
+
+We show the sorting results in the form of confusion matrices in Fig. 8. Using the nonparametric Kruskal-Wallis test, we analyze the statistical significance of the difference between the placement distributions of light, medium, and heavy objects for each of the sorting modes.
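For reference, the Kruskal-Wallis H statistic underlying this analysis can be computed directly from the placement positions. The function below is a generic textbook implementation (average ranks with tie correction), not the authors' analysis code.

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic (average ranks, tie-corrected).

    `groups` is a list of lists of placement positions, one list per
    object class (e.g. light / medium / heavy).
    """
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)
    # assign each distinct value the mean of the ranks it occupies (ties)
    rank, i = {}, 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # mean of ranks i+1 .. j
        i = j
    h = 12 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    ties = sum(t ** 3 - t for t in (pooled.count(v) for v in set(pooled)))
    return h / (1 - ties / (n ** 3 - n))
```

Significance is then read from a chi-squared distribution with (number of groups − 1) degrees of freedom; in practice a statistics package such as `scipy.stats.kruskal` performs both steps.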
For the spherical cursor (control mode), we observe statistically insignificant p-values of 0.463 for the heavy weight set and 0.800 for the light weight set, showing that users could not discriminate between weights in this mode. For the virtual hand with no vibration feedback, we see statistically insignificant results for the light weight set (p-value 0.928); however, for the heavy set, we see a significant effect of the virtual hand on sorting (p-value $< 0.001$). For sorting using the virtual hand with vibration feedback, we see a significant effect on sorting for both the light (p-value $< 0.001$) and heavy (p-value $< 0.001$) weight sets. Regarding the first hypothesis, we see a significant improvement for the heavy weight set compared to the control mode (spherical cursor); however, the same cannot be said for the light weight set. Regarding the second hypothesis, we see a statistically significant improvement for the light weight set with the vibration feedback compared to only using the virtual hand. For the heavy set, however, we see significant effects from the virtual hand both with and without the vibration feedback. Therefore, to check whether the observed improvements in sorting precision for the light, medium, and heavy objects are significant, we perform row-by-row comparisons between the two confusion matrices using the Wilcoxon rank-sum test. Comparing the number of correct sorts for the heavy weight (54 correct sorts versus 33) gives a statistically significant p-value of $< 0.001$; for the medium weight (44 correct sorts versus 25) the p-value is $< 0.001$; and for the light weight (48 correct sorts versus 34) the p-value is $< 0.01$. This shows that the vibration feedback improvement is statistically significant for the heavy weight set as well.
+
+Figure 8: Sorting results of the six different sort modes in the form of confusion matrices.
The top three matrices show the sorting results for the light weight set (0.25 kg, 0.44 kg, 0.79 kg), and the bottom three show the sorting results for the heavy weight set (0.79 kg, 1.4 kg, 2.5 kg). From left to right, the matrices represent the three interaction modes (spherical cursor, virtual hand with no vibration, virtual hand with vibration). The matrices' diagonals show the number of times the objects were sorted correctly.
+
+Figure 9: Users compare the sense of mass perception and discrimination between the three interaction modes in the post-session questionnaire. The bars represent the mean answer, and the black lines show the standard deviation.
+
+The results of the questionnaire in Fig. 9 show that participants reported an improvement in mass perception and discrimination when the vibration feedback was enabled compared to only using the virtual hand. P6 (Participant #6) mentioned: "With the hand no vibration, it was harder to tell the difference in mass, but I think you could still, it was realistic enough that it was engaging, but the vibration one I'm not if it's like a mental thing, it just helps a lot more with the differentiating between the different masses and the movements". We also see neutral results for the spherical cursor. Generally, participants mentioned they could not differentiate between the objects using the spherical cursor. P2 mentioned: "It was harder for me to use the cursor to compare the weights, most of the time I thought they were like identical". For the virtual hand without the vibration feedback, participants on average expressed neutral opinions regarding its ability to give them a sense of mass perception and discrimination; however, the results from the sorting task show they performed better than the control. Also, some participants mentioned different cues that enabled them to differentiate between weights.
P5 mentioned: "I'm picking it up, how long would it slide, ok hold it, I shake it around it slides faster ... if I hold it, it slips faster then it's heavier", and P6 said: "(with the virtual hand) if I grab it loose the heavy one just drops as opposed to the light one stays in even if I'm shaking it", and "looking at the movement, if I'm moving my hand it's a bit slower it just feels heavier versus if it's a quick it just feels lighter".
+
+Figure 10: Users compare the sorting confidence, sense of realism, and gaming experience between the three interaction modes in the post-session questionnaire. The bars represent the mean answer, and the black lines show the standard deviation.
+
+Fig. 10 shows that participants expressed more confidence in sorting when the vibration feedback was enabled; without the vibration feedback, they expressed neutral confidence. Furthermore, participants generally stated that the vibration feedback added to the interaction's realism and that the virtual hand's interactions were realistic. P4 said: "For the vibration also, I felt like it helped me, felt like it's more real, I'm touching things, not just I'm seeing that I'm touching things". Furthermore, participants expressed interest in experiencing the vibration effect in virtual reality games.
+
+Finally, we asked the participants how interaction with virtual objects felt when the objects vibrated.
P2 said: "if felt like it has resistancy to move, based on that I felt like it's heavier, might be heavier", and P7 mentioned: "When I picked a cube with vibration, I could feel that something is trying to, I don't know, annoy me bother me, might be something like the gravity taking it back to the ground, it feels that I should put more energy to pick it up", further elaborating: "the one that without vibration I just pick it with two fingers I played with that, but the one with vibration when I tried to pick it with two fingers, suddenly I tried to keep it with all my fingers because I thought that it might slides and drops."
+
+Overall, our findings indicate that the presence of the force-controlled virtual hand, both with and without the vibration effect, gives a sense of weight discrimination and perception. However, the virtual hand without vibration feedback is only effective for heavier objects closer to the hand strength threshold. Furthermore, the virtual hand with the vibration effect improves the sense of weight perception and discrimination for both lighter and heavier objects without negatively affecting the realism of the experience. Therefore, our results validate our hypotheses.
+
+## 6 CONCLUSION
+
+Rendering the mass of objects in virtual reality without limiting hand movements is a challenging task. In this paper, we propose using a force-controlled hand in VR to give a sense of mass perception and discrimination by enabling physically realistic hand-object interactions. We also propose a complementary vibration effect, proportional to the object's mass and acceleration, to improve the sense of mass perception and discrimination. We conducted a user study and performed qualitative and quantitative analyses, which indicate that our hypotheses are valid. The physically-based virtual hand can give a sense of mass perception and discrimination for heavier objects closer to the upper limit of its grasping strength.
Furthermore, the vibration feedback greatly enhances mass perception and discrimination over a wider mass range in our study while improving the interaction's realism.
+
+## 7 FUTURE WORK
+
+One potential future direction for this research is to analyze the mass discrimination ability of the virtual hand and the vibration effect over a broader mass range and with different mass ratios between objects. Moreover, we are interested in analyzing the vibration effect's behavioral influence on users' movements during virtual interactions.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/O63xxj_lZqH/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/O63xxj_lZqH/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..d6fed17d8aa35d756eb3acfcc42389683361fa23
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/O63xxj_lZqH/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,303 @@
+# Generating Rough Stereoscopic 3D Line Drawings from 3D Images
+
+Category: Research
+
+## Abstract
+
+We present a method to produce stylized drawings from S3D images. Taking advantage of the information provided by the disparity map, we extract object contours and determine their visibility. The discovered contours are stylized and warped to produce an S3D line drawing. Since the produced line drawing can be ambiguous in shape, we add stylized shading to provide monocular depth cues. We investigate using both consistently and inconsistently rendered shading to determine the importance of lines and shading to depth perception.
+
+Index Terms: Computing methodologies-Non-photorealistic rendering;
+
+## 1 Introduction
+
+Stereoscopic 3D (S3D) is used in a variety of art forms, such as photography and film, to create the effect of depth. The perceived depth can provide a greater sense of reality, create an immersive or engaging experience, and serve as an artistic medium to induce emotional responses in the viewer. S3D creates a sense of depth by presenting a slightly different image to each eye. The left and right images exhibit horizontal separation between objects, which is interpreted as depth by the brain. Producing S3D content is challenging, and emphasis must be placed on consistency, ensuring that the object(s)/scene visible in both views match exactly to produce a comfortable viewing experience and a correct depiction of depth $\left\lbrack {{12},{18}}\right\rbrack$ .
+
+Line, or pen-and-ink, drawings are one of the oldest S3D art forms, dating back to Sir Charles Wheatstone's original drawings in the 1830s [26]. This format persists today in comics and diagrams. Although S3D line drawings can be produced from 3D meshes using automated algorithms, producing S3D line drawings from S3D photos has not received significant attention.
+
+One possibility for producing line drawings from S3D photos is to use an S3D stylization algorithm such as the layer-based method presented by Northam et al. $\left\lbrack {{17},{18}}\right\rbrack$ . This approach divides the S3D image and disparity ${}^{1}$ maps into layers by disparity, such that each layer only contains pixels from a single disparity, then applies stylization to these layers. While their approach can be used with a variety of artistic styles and filters, contours and line drawings were not considered.
+
+If we try to produce a line drawing using this method, the results are displeasing, because object contours are conflated with other edges, such as lighting and texture boundaries.
Thus, the final result contains lines that do not convey shape. We could instead use the additional information provided by a disparity map to isolate object contours. However, the layer-based approach cannot be used to extract these contours from the disparity map, because layers contain pixels with the same disparity value. Figure 1 illustrates this issue. Note how the edges found for the disparity layer do not correspond to object contours, which can be found in the disparity map. Instead, they correspond to the edges of the region with the given disparity.
+
+![01963e97-dcdc-7948-843b-3f49f604ff8b_0_951_461_662_309_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_0_951_461_662_309_0.jpg)
+
+Figure 1: Line drawing from disparity layers. Note how each disparity layer only contains pixels of one value. Hence, extracting lines from such layers produces the edges of the layer pixels instead of the desired object contours.
+
+In this paper, we present a method to produce stylized stereoscopic 3D line drawings from 3D photos using stereoscopic warping instead of layers. While constructing this method, we observed that some drawings of simple objects were ambiguous and did not uniquely identify the 3D shape. For example, the contour of a sphere is a circle and could represent either a flat circle or a sphere. Shading, a monocular depth cue, can help resolve these ambiguities. Hence, we also investigate the effects of adding stylized shading to our produced drawings.
+
+The line extraction and rendering algorithm is presented in Section 3, and our shading method is discussed in Section 4. Our results are presented in Section 5. Finally, we present an evaluation of our results to verify their 3D comfort and depth quality in Section 6.
+
+## 2 Background
+
+A line drawing is a simplistic representation of an object or scene. It is composed entirely of lines, which may be stylized, and which do not contain shading or colour.
Despite their simplicity, such drawings can accurately convey the subject they depict. Hertzmann indicates line drawings "work" because they "approximate realistic renderings" [9].
+
+Where do artists draw lines? A line drawing study by Cole et al. examined where artists draw lines for a variety of objects [5]. They observed that contours, creases and folds - which describe the shape of the object - were drawn, but lines depicting shadows or highlights were not. This was also observed by Hertzmann et al. while rendering line drawings for smooth meshes [10].
+
+In a stereoscopic 3D line drawing, contours, creases and folds give the primary sense of an object's 3D shape and depth. Without other S3D cues from which to infer depth, it is important that these lines are as consistent as possible between left and right views. Inconsistencies can cause viewing discomfort from binocular rivalry ${}^{2}$ and double vision, detracting from the viewer's perception of object depth. Previous studies have shown that these S3D lines alone sufficiently convey object shape/depth for many images $\left\lbrack {1,{12},{14}}\right\rbrack$ .
+
+---
+
+${}^{1}$ disparity is inversely proportional to depth and conveys the horizontal separation between a point in left and right views
+
+${}^{2}$ binocular rivalry is a phenomenon in which the brain rapidly switches between left and right eyes because the images differ
+
+---
+
+A number of algorithms have been proposed to produce stereoscopic 3D line drawings from meshes. Most notably, Kim et al. presented a method that produces 3D line drawings by generating contours for left and right eyes separately [12]. Contours are then pruned for view consistency by checking the visibility of points along the curve formed by creating an epipolar plane between a pair of points on the left and right contours. Kim et al.
also describe a method for consistent stylization of lines by linking control points between matching contours and applying stylization to the linked pairs. However, this method can only be used with full 3D models. + +Another paper by Kim et al. describes a method for producing stylized stereoscopic 3D line drawings from S3D photographs [13]. Their paper applies Canny edge detection [4] to the edge tangent field [11] of the left stereo image and warps the discovered edges to the right image using the disparity map. However, the rendered lines are from all edges that can be found in the actual image, not only object contours but also texture or lighting contours. By contrast, a hand-drawn stereoscopic 3D line drawing would be likely to include only object contours and creases. As their method is based on edge detection purely in the colour domain, they cannot differentiate between geometric discontinuities and colour discontinuities. Disparity maps, which indicate the horizontal separation between pixels of the left and right image, isolate geometric information from colour information. Therefore, applying edge detection to the disparity map could uniquely produce object contours. Our method will harness the information in the disparity map to construct S3D line drawings. + +### 2.1 Perception and Monoscopic Depth Cues + +Our perception of depth arises from both monoscopic (2D) and 3D depth cues. Monoscopic depth cues include shading, relative size, occlusion, and motion $\left\lbrack {1,3}\right\rbrack$ . Shading an S3D line drawing can improve depth perception, but the amount of improvement is limited for images with rich detail [14]. However, for S3D line drawings of simple objects with few internal lines, shading provides the necessary information about object shape. For example, imagine a circular contour: is this a line drawing of a circle or a sphere? + +Stereo-consistent shading is complicated by the fact that shading can be view-dependent. 
Apart from purely Lambertian surfaces, shading features such as specular highlights may be visible in only one eye due to the position of the eyes with respect to the light source and object [2]. This phenomenon can also be observed in S3D photographs, as demonstrated in Figure 2. Note that both specular highlights and reflections differ between left and right views and are circled in cyan for visibility. While these specular highlights are natural to the human visual system, they can be problematic for computer vision algorithms commonly used with stereo [2]. Additionally, in film production it is believed that specular highlights can cause binocular rivalry if they are rendered inconsistently between views. Therefore, they are often removed or redrawn to be consistent [15]. We provide users the option of adding shading to our S3D line drawings, to improve the perception of shape. However, our shading will remain true-to-nature. That is, we will not remove or adjust the natural lighting of the S3D images to ensure consistency.
+
+![01963e97-dcdc-7948-843b-3f49f604ff8b_1_944_161_674_480_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_1_944_161_674_480_0.jpg)
+
+Figure 2: Inconsistent specular highlights and reflections.
+
+### 2.2 Stylization
+
+Stylization can be applied to both the S3D lines and shaded regions. Surprisingly, it has been shown that simple line styles such as overdrawn lines, varying thickness, and jitter do not negatively impact depth perception and comfort in S3D images if rendered consistently [14].
+
+The naive approach to creating consistent stylized lines is to render them in the left view, then use the disparity map to warp (horizontally shift) them to the right. However, the rendered, stylized object contours may have pixels that bleed over onto other surfaces. Therefore, warping individual pixels would not produce smooth lines. Alternatively, the control points of curves or the endpoints of line segments could be warped.
But if any of these points from the stylized lines lie on other objects, the lines rendered in the right view may be discontinuous or distorted. Instead, we will match the original control points to their underlying disparities prior to stylization or rendering, similar to the approach used by Kim et al. [12]. Although there are many ways to stylize lines, we focus on overdrawn and jittered styles. + +In addition to stylized contours, we will also stylize shaded regions. Stereoscopic 3D image stylization has been well studied, although existing methods focus on stylizing the whole image consistently instead of a small region. Stavrakis et al. applied stylization to the left image and used the disparity map to warp it to the right, then did the reverse for occluded regions $\left\lbrack {{23},{24}}\right\rbrack$ . As discussed previously, Northam et al. applied stylization using disparity layers $\left\lbrack {{17},{18}}\right\rbrack$ . However, since lighting and specular highlights are view-dependent, these methods would enforce consistency where none exists. Hence, we will apply stylization algorithms to shaded regions in a view-dependent manner to preserve these inconsistencies. By preserving inconsistencies, we contradict Richardt et al. [20] and Northam et al. $\left\lbrack {{17},{18}}\right\rbrack$ , which focus on establishing or maintaining consistency at all cost. However, we believe that because shading is a monocular depth cue, binocular rivalry and randomness will have minimal effects on viewer perception. + +## 3 Producing a Stereoscopic 3D Line Drawing + +Our method is divided into several stages and requires left and right images and disparity maps as input. First, the object-depiction contours are extracted. Next, the contours are split into curves by view visibility: left-only, right-only, and shared. Curves are stylized, then warped from left-to-right. Finally, shading may be added to improve depth perception. 
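The warping stage above relies on disparity as a horizontal offset. The paper warps curve control points rather than raw pixels, but the underlying mapping is the same; a minimal per-pixel illustration, assuming the common convention $x_r = x_l - d$:

```python
def warp_left_to_right(mask, disparity):
    """Shift each set pixel of a left-view line mask horizontally by its
    disparity to obtain the right-view mask (convention: x_r = x_l - d).

    `mask` and `disparity` are row-major 2D lists of equal size.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                xr = x - disparity[y][x]
                if 0 <= xr < w:          # pixels warped off-image are dropped
                    out[y][xr] = 1
    return out

# A contour pixel at column 3 with disparity 2 lands at column 1.
right = warp_left_to_right([[0, 0, 0, 1]], [[0, 0, 0, 2]])
```

Warping control points instead of pixels amounts to applying the same offset to a handful of curve samples and re-rendering the stroke, which avoids the hole and bleed artifacts that per-pixel warping produces.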
+
+### 3.1 Extracting Contours
+
+The shape of an object is described by its silhouette (contours) and interior creases and folds $\left\lbrack {8,{10}}\right\rbrack$ . Both contours and interior lines are needed to give a clear sense of shape [5]. While these lines may be found in the S3D image, they are more easily isolated using information in the disparity map.
+
+There are two possible approaches to finding contours (edges). The first is to apply edge detection methods to the raw or preprocessed disparity map. The second is to perform a 3D reconstruction of the scene using the provided disparity maps and apply a silhouette-finding algorithm, such as that of Hertzmann and Zorin [10], to identify the edges.
+
+We use the first method, applying edge detection methods to the raw or preprocessed disparity map, instead of discovering silhouettes from a 3D reconstruction. While many of the edges can be found from the reconstruction using Hertzmann and Zorin's approach, more subtle edges occurring where two objects intersect at the same depth are not always identified, as shown in Figure 3 [10]. Secondly, after identifying the silhouette edges from the 3D reconstruction, the visibility of those edges must then be computed for each eye, as in Kim [12]. However, visibility is given in the disparity map, so recomputing this information is wasteful. Finally, we do not assume that the baseline and focal length of the image are known or can be estimated such that a believable 3D reconstruction can be produced.
+
+![01963e97-dcdc-7948-843b-3f49f604ff8b_2_150_854_717_242_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_2_150_854_717_242_0.jpg)
+
+Figure 3: Hertzmann and Zorin's method vs our method.
+
+#### 3.1.1 Finding Edges in a Disparity Map
+
+There are two types of edges in a disparity map. The first type occurs at a depth discontinuity, where one object occludes another, creating a jump in neighbouring disparity values.
The second type occurs where two surfaces meet at the same depth, or as creases and folds on an object's surface. The first type of edge can be found using a Laplacian or Canny edge detector, as shown in Figure 4.
+
+![01963e97-dcdc-7948-843b-3f49f604ff8b_2_149_1676_714_235_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_2_149_1676_714_235_0.jpg)
+
+Figure 4: The strong edges of a disparity map may be found by a Canny or Laplacian edge detector, but the results are not ideal.
+
+The second type of edge is more elusive. Adjusting the parameters of a Laplacian or Canny detector can find these edges, but not uniquely, as shown in Figure 5.
+
+![01963e97-dcdc-7948-843b-3f49f604ff8b_2_921_159_718_607_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_2_921_159_718_607_0.jpg)
+
+Figure 5: The second type of edge is hard to detect uniquely using Canny or Laplace.
+
+Our method does manage to successfully and uniquely identify these edges, as shown in Figure 6.
+
+![01963e97-dcdc-7948-843b-3f49f604ff8b_2_1039_983_482_324_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_2_1039_983_482_324_0.jpg)
+
+Figure 6: The second type of edge is easy to detect uniquely using our method.
+
+To make these low-contrast edges more visible, we can convert disparity to depth and apply a bilateral filter to smooth the plateaus in the result, as suggested by [16]. However, while this improves the visibility of type one edges, it does not improve visibility for all type two edges, as shown in Figure 7.
+
+Figure 7: Our method improves visibility for type two edges. (a) A stereoscopic 3D scene with creases in the background; (b) background creases are not found using [16]'s method; (c) background creases are found using our method.
+
+We note that finding type two edges, or finding edges in a low-contrast or noisy region, is known to be a difficult problem [19]. Savant indicates that Laplacian, Canny, and other detectors are not able to uniquely identify edges in low-contrast areas [21].
And while second-order derivative methods can identify some edges that are zero-crossings, not all type two edges are zero-crossings.
+
+Hence, we propose the following method to identify edges in disparity maps. First, we use a Canny edge detector, as suggested by Gelautz and Markovic, to identify type one edges [6].
+
+Next, we improve the visibility of type two edges. Hertzmann suggested rendering a scene where different coloured directional lights are cast along positive and negative axes onto a 3D model to produce a brightly-coloured normal map from which edges could be found [10]. In order to apply this technique to our disparity maps, each pixel needs a surface normal. We assign surface normals by applying a simple surface triangulation to each map. Each pixel position $(x, y)$ is a vertex with depth $z$ equal to the disparity at that position. Triangular "faces" are formed by a point $p$ in the disparity map and two of its immediate neighbours, $q$ and $r$ . A normal can then be calculated for each of these faces, as well as the vertex normal from the average of the eight adjacent triangular face normals.
+
+The normals are multiplied by a directional light vector to enhance visualization, as illustrated in Figure 8. However, when directional lights are cast onto the lit normal map, we do not observe a smoothly shaded cat as expected. Instead, the normal map appears stepped - with rings of front-facing planes depicted in dark red. This stepped appearance is a consequence of the limited dynamic range of most disparity maps. A perfectly smooth surface cannot be depicted in this discrete space, resulting in many pixels being assigned the same integer disparity instead of their actual values. These artificial edges make discovery of actual edges, such as the interface between the wall and floor, difficult to achieve.
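The face normals from the triangulation described above come from a cross product of the two edge vectors of each triangle; a minimal sketch, treating the disparity map as a heightfield with vertices $(x, y, z)$:

```python
def face_normal(p, q, r):
    """Unit normal of triangle (p, q, r) via the cross product of its edge
    vectors; p, q, r are (x, y, z) vertices, z taken from the disparity map."""
    u = tuple(qi - pi for qi, pi in zip(q, p))
    v = tuple(ri - pi for ri, pi in zip(r, p))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

# A constant-disparity region yields a normal pointing out of the image
# plane; a sloped region tilts the normal toward the descending side.
flat = face_normal((0, 0, 5), (1, 0, 5), (0, 1, 5))
```

Vertex normals are then the (re-normalized) average of the eight adjacent face normals, as the text describes.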
In order to remove the stepped or plateaued appearance, floating point disparities are needed, along with a smoothing operator to reduce the discretized appearance. Ideally, converting the disparity map from 8-bit to floating point and applying a simple out-of-the-box smoothing operation would smooth these plateaus out. However, directly applying a bilateral filter will preserve or enhance these edges, and a Gaussian or box filter would soften all edges indiscriminately, effectively blending objects together. Instead, to smooth these plateaus and generate a smooth lit disparity map with preserved edges, we:

1. Compute the strong edges using a Canny filter, dilating the result to produce an edge mask where edges have a diameter of 10 pixels.

2. Calculate the surface normals via triangulation as previously discussed. Use larger triangles in regions away from edges and smaller triangles along edges. This hybrid approach gives us clear, prominent lines corresponding to key edges, and additional smoothing elsewhere. Do this for both the discrete (un-smoothed) and floating point (smoothed) disparity maps.

3. Cast directional lights to colourize and produce the smoothed and un-smoothed maps. This yields Figure 8(a) and Figure 8(b), respectively.

4. Compute the complexity of the discrete (un-smoothed) disparity map, $\alpha$, as the number of observed disparities. Apply a bilateral filter to the smoothed map $\frac{\alpha}{10}$ times${}^{3}$.

5. To correct the blown-out edges caused by the previous step, extract the strong mask edges from the unsmoothed map (that is, the pixels of the unsmoothed map coinciding with the pixels of the mask generated in step 1) and superimpose them on the smoothed map. Apply a bilateral filter to the smoothed map $\frac{\alpha}{10}$ times to help blend the edges in.

6. Overlay the original, un-dilated version of the mask on top of the smoothed map, as seen in Figure 8(c), to aid in the identification of strong edges.
We can now apply the Canny edge operator to the smoothed and coloured map in an automated fashion.

In general, we seek to compensate for less detailed masks with more permissive Canny thresholds that yield a more detailed final edge set. Conversely, more detailed masks imply that a stricter threshold should be used, to prevent an overly noisy final edge set. Let the number of mask pixels be $\beta$. Let $\beta$ divided by the total number of pixels be $x$. We recognize that the level of detail in the mask, $x$, is inversely proportional to the number of pixels desired in the final edge set.

We also want to take into account the aforementioned disparity map complexity, $\alpha$. This is another inversely proportional relationship, between disparity map complexity and the target number of pixels in the final edge set. The more complex the disparity map is, the greater the number of easily identifiable edges, and the less permissive the Canny threshold is required to be. Conversely, the less detailed the disparity map, the more difficulty Canny will have extracting edges from it, and the more permissive we should make the Canny threshold.

We will use these two complexity measures, and the inversely proportional relationships that we observed, to select Canny thresholds automatically. In so doing, we are letting the image, not the user, do the talking.

Let $\phi = \frac{1}{\alpha x}$ be the target number of pixels in the final edge set.

We want the final edge set to contain at least as many pixels as the mask; only then can more edges be found to supplement the mask edges. Therefore, we modify our target to $\left( {1 + \sigma }\right) \beta$ where $\sigma = \min \left( {3,\phi }\right)$. Notice that we set a cap of 3 on $\phi$. This is because, experimentally, we have found that larger values introduce a lot of noise.

What we have established is a target number of final edge pixels, not a Canny threshold parameter.
But each Canny threshold parameter will produce a certain number of edge pixels. The higher the threshold, the fewer pixels in the final result; the lower the threshold, the more pixels in the final result. The minimum threshold parameter is $\min = 0$ and the maximum is $\max = 255$. Using these boundaries, we can conduct a binary search for the ideal parameter. We start by using a threshold parameter midway between min and max. We then count the number of pixels in the resulting edge set. If the result is below target, we need to be more permissive, so we lower our max and try a lower threshold in the next iteration. If the result exceeds the target, we need to be less permissive, so we increase our min. We stop once $\max = \min$, or the edge pixel count equals the target.

Once edges are found, we use the findContours function in OpenCV to extract curve points from the raster image. The extracted curve points are processed to remove curve duplication. The curves are also split whenever adjacent point disparities differ by more than a small threshold, under the assumption that the adjacent points belong to separate surfaces. We note that some of the original detail may be lost in this process.

---

${}^{3}$ $\frac{\alpha}{10}$, and other parameter values, were selected after applying the method to our test set of 12 images and choosing the value that produced the best results overall across all images.

---

![01963e97-dcdc-7948-843b-3f49f604ff8b_4_155_164_1483_437_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_4_155_164_1483_437_0.jpg)

Figure 8: Directionally lit disparity maps.

#### 3.1.2 Splitting Lines for Consistency

Edges extracted from the disparity maps are rendered as smooth splines for the corresponding view. However, view-dependent rendering introduces inconsistencies, which arise when edges are found in one disparity map but not the other due to the slight variation in the viewpoint.
+ +To prevent these inconsistencies - which cause discomfort - we arbitrarily select the left view to be the "true" edges. Then, any line in the left view that is visible in the right is warped using the disparity map to the right view. Lines only visible in the left are rendered only in the left; likewise, lines only visible in the right are rendered only in the right. View visibility is determined by the disparity map. Any pixel $p\left( {x, y}\right)$ is visible in both left and right views if $L\left( {x, y}\right) = d$ and the corresponding pixel $R\left( {x - d, y}\right) = d$ , where $L$ and $R$ are the left and right disparity maps respectively, and $d$ is the disparity of pixel $p$ . + +Contours are warped by their control points and then rendered in a view-dependent manner. While this method of rendering potentially introduces inconsistencies, warping the rendered lines would introduce noise as the lines may lie on surfaces with different disparities. + +Finally, we note that long edges extracted from the disparity map may span multiple objects and both occluded and visible regions. Warping the entire stroke can result in partially occluded edges being visible in the wrong view. To prevent this, we split strokes whenever the visibilities of adjacent control points change, i.e. from visible to occluded. We also split strokes when the disparities of adjacent control points differ by more than some threshold ${\tau }_{d}{}^{4}$ . Strokes that cannot be warped, because they are only visible in one view, are rendered in only one view. + +#### 3.1.3 Consistent Control Point Stylization + +Monoscopic and S3D line drawings are often stylized and represent objects using rough, overdrawn and jittery lines. To increase visual interest, we provide the option of stylizing S3D lines with an overdrawn or jittered style. + +Kim et al. discussed a method for stylizing stereoscopic 3D lines [12]. 
Their method performs stylization after lines have been discovered for both left and right images. Specifically, it links line segments in the left view to the matching segments in the right and consistently renders texture onto these linked and parameterized curves.

Our approach is similar and stylizes lines prior to warping by replicating and transforming control points. To produce overdrawn lines, curves are duplicated a fixed number of times. Lines can then be scaled (about their centroids or the centre of the image) by a small random factor, so that the overdrawn lines are visibly distinct. A jittered or rough appearance is created by adding small random translation vectors to each control point of a line. Note that, prior to altering the control points of a line, it is important to store the original, pre-transformed disparities of those control points, so that they can be correctly warped after stylization.

## 4 Shading

S3D line drawings depict the shapes of objects. However, these lines do not convey information such as surface texture or roundness - but shading and highlights do. Object shading and shadows are monocular depth cues [7]. Shading, particularly involving specular highlights, is view-dependent [25]. Adding monocular depth cues to S3D line drawings can improve understanding of surface shape and enhance depth perception. Also, because lighting is view-dependent, the left and right views will be stylized independently to preserve their separate lighting characteristics.

To produce the stylized shading, the left and right input images are converted to grayscale and stylized using a variety of algorithms. While any stylization algorithm or filter could be used, we chose those that do not explicitly render contours, as our method will produce those separately. Finally, the stylized shading and S3D lines are combined to produce the final image.
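This final compositing step can be sketched as follows, assuming quantization-based toon shading as the stylization filter and a 50% reduction in shading darkness so the lines stay visible; parameter values and helper names are illustrative:

```python
import numpy as np

def compose(line_masks, left_gray, right_gray, levels=4, lighten=0.5):
    """Sketch: quantize each grayscale view into toon-like shading
    (one of several possible stylizations), reduce the darkness of
    shaded regions, then draw the per-view line masks in black on top."""
    step = 256 // levels

    def combine(gray, mask):
        # Quantize intensities into a few flat bands, normalized to [0, 1].
        toon = (gray.astype(np.float32) // step) * step / 255.0
        shade = 1.0 - (1.0 - toon) * lighten  # lighten shading by 50%
        out = shade
        out[mask > 0] = 0.0                   # superimpose black lines
        return out

    return combine(left_gray, line_masks[0]), combine(right_gray, line_masks[1])
```

Shading each view from its own grayscale image keeps the view-dependent lighting intact, matching the decision to stylize the left and right views independently.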
+ +## 5 Results + +We tested our method on several S3D images, some of which are shown in Figure 9. Seventy-five percent of the images used as input to our method have high-quality or near-perfect disparity maps. + +Figure 10 illustrates some of the $3\mathrm{D}$ line drawings produced by our method. Note that, since lines are generated from the disparity map, contours and interior lines are the only lines visible. + +Stylizing the S3D lines yielded the images depicted in Figure 11. Note that even with jittered and overdrawn lines, the left and right views remain consistent. + +We used three types of stylization for shading: toon-like shading produced by quantization of the RGB image, impressionist, and halftoning with large particles. None of these stylizations explicitly render contours, so there is no overlap between shading and the line drawing. We combined the stylized S3D line drawings with stylized shading to produce our final images, some of which are shown in Figure 12. To ensure the visibility of the S3D lines, we reduced the darkness of shaded regions by ${50}\%$ . + +--- + +${}^{4}$ We used ${\tau }_{d} = 5$ , as we observed that curve points are close together, with no large jumps in depth. + +--- + +![01963e97-dcdc-7948-843b-3f49f604ff8b_5_158_161_692_668_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_5_158_161_692_668_0.jpg) + +Figure 9: Sample inputs to S3D line drawing algorithm. + +![01963e97-dcdc-7948-843b-3f49f604ff8b_5_169_908_678_661_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_5_169_908_678_661_0.jpg) + +Figure 10: S3D line drawings. + +![01963e97-dcdc-7948-843b-3f49f604ff8b_5_145_1642_722_348_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_5_145_1642_722_348_0.jpg) + +Figure 11: S3D line drawings (red/cyan anaglyph). + +![01963e97-dcdc-7948-843b-3f49f604ff8b_5_941_369_679_988_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_5_941_369_679_988_0.jpg) + +Figure 12: Final stylized S3D line drawings (red/cyan anaglyph). 
+ +We also applied our method to S3D photos with computed disparity maps. These maps contain noise, disparity mismatches, and obscured object contours, which pose a challenge for many S3D algorithms. Figure 13 demonstrates our method's performance on S3D photos with disparity maps of varying quality. Despite these disparity errors, our method is still able to produce line drawings, as demonstrated in Figure 13. Note how some lines appear to be missing, typically because they are not visible in the disparity map. + +## 6 Evaluation + +To evaluate our results for quality of depth reproduction and viewing comfort, we conducted a short study. For health and safety reasons due to COVID-19, our study was conducted remotely. We asked participants to view a set of 24 images from our dataset using either anaglyph glasses, a 3D TV, a VR headset, or by free-viewing in their homes. For each image, participants were asked to rate the viewing comfort and apparent depth on a Likert scale from 1 to 5 . Participants were also asked to rate how aesthetically pleasing they found each image. Images were randomized, and participants were not aware of what they would be viewing. + +Overall, participants found that our consistent line drawings are more comfortable, reproduce a greater sense of depth, and are more aesthetically appealing than the raw, inconsistently-rendered line drawings. Table 1 indicates the average difference between each of our results and the inconsistently-rendered line drawings. This difference is a percentage increase from the raw, unstylized lines to our method. So, for example, the first cell demonstrates that the average score was ${26}\%$ better for our consistently-rendered, unstylized lines than for raw, unstylized lines rendered inconsistently. Note that adding stylization to our lines improved comfort, depth reproduction, and the overall aesthetic. 
We expected participants to find the stylized lines more aesthetically pleasing, but we did not anticipate they would find these more comfortable or conducive to a greater sense of depth. However, the stylized lines are more prominent than the unstylized lines and may provide participants with more visual information to fuse, resulting in greater viewing comfort and depth. Also note that adding shading, a monocular depth cue, significantly improved all metrics, regardless of how that shading was rendered. Even the halftone/newsprint shader applied inconsistently, which renders large circles into the scene, was more comfortable, produced more depth, and was more aesthetically pleasing than plain lines. This is interesting, because these images were ${55}\%$ less consistent than our plain line drawing. We computed consistency by comparing the colour values of pixels that should match according to the disparity map. + +![01963e97-dcdc-7948-843b-3f49f604ff8b_6_244_161_1307_314_0.jpg](images/01963e97-dcdc-7948-843b-3f49f604ff8b_6_244_161_1307_314_0.jpg) + +Figure 13: Line drawings from photos with disparity maps of varying quality. + +
|  | comfort | depth | appearance |
| --- | --- | --- | --- |
| our unstylized lines | 26% | 14% | 20% |
| our stylized lines | 32% | 16% | 26% |
| our unstylized lines with consistent shading | 46% | 46% | 71% |
| our unstylized lines with inconsistent shading | 27% | 42% | 41% |
| our stylized lines with consistent shading | 48% | 45% | 59% |
| our stylized lines with inconsistent shading | 52% | 41% | 66% |
Table 1: The difference in comfort, depth reproduction, and aesthetic appearance between raw, unstylized, inconsistently rendered lines and our method. Note that the averaged participant scores for view-dependent, unstylized lines are presented in Table 2.
|  | comfort | depth | appearance |
| --- | --- | --- | --- |
| view-dependent, unstylized lines | 2.5 | 2.6 | 2.1 |
| our stylized lines with consistent shading | 3.7 | 3.8 | 3.4 |
Table 2: Averaged participant scores for view-dependent, unstylized, and inconsistently rendered lines that were used to compare our various methods against. Note that we have provided averaged participant scores for our stylized lines with consistent shading for reference.

Ideally, our participants would be a random sample of individuals with varying backgrounds and exposure to S3D. However, as we were required to run this study remotely, we relied on finding individuals who owned their own S3D viewing equipment or were able to free-view. Hence, our participant pool was drawn from individuals that could be considered S3D enthusiasts. Consequently, participants were critical, and quick to identify and articulate flaws in images, such as window violations and ghosting. However, we appreciated their honest and experienced assessments, as they provided a clearer and more concise evaluation of our results.

We also note that the study conditions were not ideal. First, we relied on participants to self-report their ability to perceive depth. Second, due to the rarity and variety of S3D viewing equipment available, it is unlikely that any two participants used the exact same viewing technology. We categorized viewing mechanisms into three groups: anaglyph, 3DTV/3DS/VR, and free-viewing. Of the 16 participants, 50% used anaglyph glasses, which are prone to crosstalk and ghosting that may cause discomfort. A smaller number of participants, 31.2%, used some other 3D viewing apparatus, such as a 3DTV. This technology may exhibit some crosstalk or ghosting, but significantly less than anaglyph glasses, typically making it more comfortable to use. Finally, about 18.8% of participants free-viewed the images. The study conditions may thereby have contributed disproportionately to viewing discomfort.
## 7 Conclusion and Limitations

Our algorithm successfully produces stylized stereoscopic 3D line drawings from photographs. These line drawings reproduce 3D shape, especially when combined with monoscopic shading. Furthermore, for fine-grained stylizations, inconsistent shading did not have a negative impact on the perception of depth or comfort. However, large-grained stylizations, such as halftoning, were not as comfortable as their consistently-shaded variations, as expected.

A major limitation is that the quality of our method's results largely depends on the quality of the disparity maps provided. Noisy, non-smooth disparity maps, as well as those with obfuscated or obscured object contours, will likely produce noisy line drawings where the object contours are not clearly visible. This, in turn, may produce line drawings with no identifiable subject. Overcoming this limitation is the subject of future work.

## References

[1] M. S. Banks, J. C. A. Read, R. S. Allison, and S. J. Watt. Stereoscopy and the human visual system. SMPTE Motion Imaging Journal, 121(4):24-43, 5 2012.

[2] D. N. Bhat and S. K. Nayar. Stereo and specular reflection. Int. J. Comput. Vision, 26(2):91-106, 2 1998.

[3] K. R. Brooks. Depth perception and the history of three-dimensional art: Who produced the first stereoscopic images? i-Perception, 8(1):2041669516680114, 2017.

[4] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6):679-698, 11 1986.

[5] F. Cole, A. Golovinskiy, A. Limpaecher, H. S. Barros, A. Finkelstein, T. Funkhouser, and S. Rusinkiewicz. Where do people draw lines? In ACM SIGGRAPH 2008 Papers, SIGGRAPH '08, pp. 88:1-88:11. ACM, New York, NY, USA, 2008.

[6] M. Gelautz and D. Markovic. Recognition of object contours from stereo images: an edge combination approach. In 3D Data Processing, Visualization and Transmission, 2004.

[7] E. Goldstein.
Sensation and Perception. Cengage Learning, 2009.

[8] A. Hertzmann. Introduction to 3D non-photorealistic rendering: Silhouettes and outlines. SIGGRAPH Course Notes, 01 1999.

[9] A. Hertzmann. Why do line drawings work? A realism hypothesis. Perception, 49(4):439-451, 2020. PMID: 32126897. doi: 10.1177/0301006620908207.

[10] A. Hertzmann and D. Zorin. Illustrating smooth surfaces. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH, pp. 517-526. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 2000.

[11] H. Kang, S. Lee, and C. K. Chui. Coherent line drawing. In Proceedings of the 5th International Symposium on Non-photorealistic Animation and Rendering, NPAR '07, pp. 43-50. ACM, New York, NY, USA, 2007.

[12] Y. Kim, Y. Lee, H. Kang, and S. Lee. Stereoscopic 3D line drawing. ACM Transactions on Graphics, 32(4):57:1-57:13, 7 2013.

[13] Y.-S. Kim, J.-Y. Kwon, and I.-K. Lee. Stereoscopic line drawing using depth maps. In ACM SIGGRAPH 2012 Posters, SIGGRAPH '12, pp. 113:1-113:1. ACM, New York, NY, USA, 2012.

[14] Y. Lee, Y. Kim, H. Kang, and S. Lee. Binocular depth perception of stereoscopic 3D line drawings. In Proceedings of the ACM Symposium on Applied Perception, SAP '13, pp. 31-34. ACM, New York, NY, USA, 2013.

[15] B. Mendiburu. 3D Movie Making: Stereoscopic Digital Cinema from Script to Screen. Focal Press, 5 2009.

[16] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In 2011 10th IEEE International Symposium on Mixed and Augmented Reality, pp. 127-136, 2011. doi: 10.1109/ISMAR.2011.6092378.

[17] L. Northam, P. Asente, and C. S. Kaplan. Consistent stylization and painterly rendering of stereoscopic 3D images. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, NPAR '12, pp. 47-56.
Eurographics Association, Goslar, Germany, 2012.

[18] L. Northam, P. Asente, and C. S. Kaplan. Stereoscopic 3D image stylization. Computers and Graphics, 37:389-402, 08 2013.

[19] N. Ofir, M. Galun, S. Alpert, A. Brandt, B. Nadler, and R. Basri. On detection of faint edges in noisy images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):894-908, 2020. doi: 10.1109/TPAMI.2019.2892134.

[20] C. Richardt, L. Świrski, I. P. Davies, and N. A. Dodgson. Predicting stereoscopic viewing comfort using a coherence-based computational model. In Proceedings of the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging, pp. 97-104. ACM, New York, NY, USA, 2011.

[21] S. Savant. A review on edge detection techniques for image segmentation. International Journal of Computer Science and Information Technologies, 4:5898-5900, 08 2014.

[22] D. Scharstein and C. Pal. Learning conditional random fields for stereo. In Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, pp. 1-8, 6 2007.

[23] E. Stavrakis and M. Gelautz. Image-based stereoscopic painterly rendering. In Proceedings of the Fifteenth Eurographics Conference on Rendering Techniques, EGSR '04, pp. 53-60. Eurographics Association, Aire-la-Ville, Switzerland, 2004.

[24] E. Stavrakis and M. Gelautz. Stereoscopic painting with varying levels of detail. In Proceedings of SPIE - Stereoscopic Displays and Virtual Reality Systems XII, pp. 55-64, 2005.

[25] R. Toth, J. Hasselgren, and T. Akenine-Möller. Perception of highlight disparity at a distance in consumer head-mounted displays. In Proceedings of the 7th Conference on High-Performance Graphics, HPG '15, pp. 61-66. ACM, New York, NY, USA, 2015.

[26] C. Wheatstone. Contributions to the Physiology of Vision: Part the First: On Some Remarkable and Hitherto Unobserved Phenomena of Binocular Vision. 1838.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/O63xxj_lZqH/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/O63xxj_lZqH/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..4c61c8509054b669ca10988790ec34898626bff8 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/O63xxj_lZqH/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,263 @@

§ GENERATING ROUGH STEREOSCOPIC 3D LINE DRAWINGS FROM 3D IMAGES

Category: Research

§ ABSTRACT

We present a method to produce stylized drawings from S3D images. Taking advantage of the information provided by the disparity map, we extract object contours and determine their visibility. The discovered contours are stylized and warped to produce an S3D line drawing. Since the produced line drawing can be ambiguous in shape, we add stylized shading to provide monocular depth cues. We investigate using both consistently rendered shading and inconsistently rendered shading to determine the importance of lines and shading to depth perception.

Index Terms: Computing methodologies-Non-photorealistic rendering;

§ 1 INTRODUCTION

Stereoscopic 3D is used in a variety of art forms, such as photography and film, to create the effect of depth. The perceived depth can provide a greater sense of reality, create an immersive or engaging experience, and serve as an artistic medium to induce emotional responses in the viewer. S3D creates a sense of depth by presenting a slightly different image to each eye.
The left and right images exhibit horizontal separation between objects, which is interpreted as depth by the brain. Producing S3D content is challenging, and emphasis must be placed on consistency, ensuring that the object(s)/scene visible in both views matches exactly to produce a comfortable viewing experience and correct depiction of depth $\left\lbrack {{12},{18}}\right\rbrack$ .

Line, or pen-and-ink, drawings are one of the oldest S3D art forms, dating back to Sir Charles Wheatstone's original drawings in the 1830s [26]. This format persists today in comics and diagrams. Although S3D line drawings can be produced from 3D meshes using automated algorithms, producing S3D line drawings from S3D photos has not received significant attention.

One possibility for producing line drawings from S3D photos is to use an S3D stylization algorithm such as the layer-based method presented by Northam et al. $\left\lbrack {{17},{18}}\right\rbrack$ . This approach divides the S3D image and disparity ${}^{1}$ maps into layers by disparity, such that each layer only contains pixels from a single disparity, then applies stylization to these layers. While their approach can be used with a variety of artistic styles and filters, contours and line drawing were not considered.

If we try to produce a line drawing using this method, the results are displeasing. This is because object contours will be conflated with other edges, such as lighting and texture boundaries. Thus, the final result contains lines that do not convey shape. We could, however, use the additional information provided by a disparity map to isolate object contours. Yet the layer-based approach cannot be used to extract these contours from the disparity map, because layers contain pixels with the same disparity value. Figure 1 illustrates this issue. Note how the edges found for the disparity layer do not correspond to object contours, which can be found in the disparity map.
Instead, they correspond to the edges of the region with the given disparity.

Figure 1: Line drawing from disparity layers. Note how each disparity layer only contains pixels of one value. Hence, extracting lines from such layers produces the edges of the layer pixels instead of the desired object contours.

In this paper, we present a method to produce stylized stereoscopic 3D line drawings from 3D photos using stereoscopic warping instead of layers. While constructing this method, we observed that some drawings of simple objects were ambiguous and did not uniquely identify the $3\mathrm{D}$ shape. For example, the contour of a sphere is a circle and could represent either a flat circle or a sphere. Shading, a monocular depth cue, can help resolve these ambiguities. Hence, we also investigate the effects of adding stylized shading to our produced drawings.

The line extraction and rendering algorithm is presented in Section 3, and our shading method is discussed in Section 4. Our results are presented in Section 5. Finally, we present an evaluation of our results to verify their 3D comfort and depth quality in Section 6.

§ 2 BACKGROUND

A line drawing is a simplistic representation of an object or scene. It is composed entirely of lines, which may be stylized, and which do not contain shading or colour. Despite their simplicity, these drawings are capable of accurately conveying the subject that they depict. Hertzmann indicates line drawings "work" because they "approximate realistic renderings" [9].

Where do artists draw lines? A line drawing study by Cole et al. examined where artists draw lines for a variety of objects [5]. They observed that contours, creases and folds - which describe the shape of the object - were drawn, but lines depicting shadows or highlights were not. This was also observed by Hertzmann et al. while rendering line drawings for smooth meshes [10].
In a stereoscopic 3D line drawing, contours, creases and folds give the primary sense of an object's $3\mathrm{D}$ shape and depth. Without other S3D cues from which to infer depth, it is important that these lines are as consistent as possible between left and right views. Inconsistencies can cause viewing discomfort from binocular rivalry ${}^{2}$ and double vision, detracting from the viewer's perception of object depth. Previous studies have shown that these S3D lines alone sufficiently convey object shape/depth for many images $\left\lbrack {1,{12},{14}}\right\rbrack$ .

${}^{2}$ binocular rivalry is a phenomenon in which the brain rapidly switches between left and right eyes because the images differ

${}^{1}$ disparity is inversely proportional to depth and conveys the horizontal separation between a point in left and right views

A number of algorithms have been proposed to produce stereoscopic 3D line drawings from meshes. Most notably, Kim et al. presented a method that produces 3D line drawings by generating contours for left and right eyes separately [12]. Contours are then pruned for view consistency by checking the visibility of points along the curve formed by creating an epipolar plane between a pair of points on the left and right contours. Kim et al. also describe a method for consistent stylization of lines by linking control points between matching contours and applying stylization to the linked pairs. However, this method can only be used with full 3D models.

Another paper by Kim et al. describes a method for producing stylized stereoscopic 3D line drawings from S3D photographs [13]. Their paper applies Canny edge detection [4] to the edge tangent field [11] of the left stereo image and warps the discovered edges to the right image using the disparity map. However, the rendered lines come from all edges that can be found in the actual image: not only object contours but also texture and lighting contours.
By contrast, a hand-drawn stereoscopic 3D line drawing would likely include only object contours and creases. As their method is based on edge detection purely in the colour domain, it cannot differentiate between geometric discontinuities and colour discontinuities. Disparity maps, which indicate the horizontal separation between pixels of the left and right image, isolate geometric information from colour information. Therefore, applying edge detection to the disparity map could uniquely produce object contours. Our method will harness the information in the disparity map to construct S3D line drawings.

§ 2.1 PERCEPTION AND MONOSCOPIC DEPTH CUES

Our perception of depth arises from both monoscopic (2D) and 3D depth cues. Monoscopic depth cues include shading, relative size, occlusion, and motion $\left\lbrack {1,3}\right\rbrack$ . Shading an S3D line drawing can improve depth perception, but the amount of improvement is limited for images with rich detail [14]. However, for S3D line drawings of simple objects with few internal lines, shading provides the necessary information about object shape. For example, imagine a circular contour: is this a line drawing of a circle or a sphere?

Stereo-consistent shading is complicated by the fact that shading can be view-dependent. Apart from purely Lambertian surfaces, shading features such as specular highlights may be visible in only one eye due to the position of the eyes with respect to the light source and object [2]. This phenomenon can also be observed in S3D photographs, as demonstrated in Figure 2. Note that both specular highlights and reflections differ between left and right views and are circled in cyan for visibility. While these specular highlights are natural to the human visual system, they can be problematic for computer vision algorithms commonly used with stereo [2].
Additionally, it is believed in the film industry that specular highlights can cause binocular rivalry if they are rendered inconsistently between views. Therefore, they are often removed or redrawn to be consistent [15]. We provide users the option of adding shading to our S3D line drawings, to improve the perception of shape. However, our shading will remain true-to-nature. That is, we will not remove or adjust the natural lighting of the S3D images to ensure consistency.
+
 < g r a p h i c s >
+
Figure 2: Inconsistent specular highlights and reflections.
+
§ 2.2 STYLIZATION
+
Stylization can be applied to both the S3D lines and shaded regions. Surprisingly, it has been shown that simple line styles such as overdrawn lines, varying thickness, and jitter do not negatively impact depth perception and comfort in S3D images if rendered consistently [14].
+
The naive approach to creating consistent stylized lines is to render them in the left view, then use the disparity map to warp (horizontally shift) them to the right. However, the rendered, stylized object contours may have pixels that bleed over onto other surfaces. Therefore, warping individual pixels would not produce smooth lines. Alternatively, the control points of curves or the endpoints of line segments could be warped. But if any of these points from the stylized lines lie on other objects, the lines rendered in the right view may be discontinuous or distorted. Instead, we will match the original control points to their underlying disparities prior to stylization or rendering, similar to the approach used by Kim et al. [12]. Although there are many ways to stylize lines, we focus on overdrawn and jittered styles.
+
In addition to stylized contours, we will also stylize shaded regions. Stereoscopic 3D image stylization has been well studied, although existing methods focus on stylizing the whole image consistently instead of a small region. Stavrakis et al.
applied stylization to the left image and used the disparity map to warp it to the right, then did the reverse for occluded regions [23, 24]. As discussed previously, Northam et al. applied stylization using disparity layers [17, 18]. However, since lighting and specular highlights are view-dependent, these methods would enforce consistency where none exists. Hence, we will apply stylization algorithms to shaded regions in a view-dependent manner to preserve these inconsistencies. By preserving inconsistencies, we depart from Richardt et al. [20] and Northam et al. [17, 18], which focus on establishing or maintaining consistency at all costs. However, we believe that because shading is a monocular depth cue, binocular rivalry and randomness will have minimal effects on viewer perception.
+
§ 3 PRODUCING A STEREOSCOPIC 3D LINE DRAWING
+
Our method is divided into several stages and requires left and right images and disparity maps as input. First, the object-depicting contours are extracted. Next, the contours are split into curves by view visibility: left-only, right-only, and shared. Curves are stylized, then warped from left to right. Finally, shading may be added to improve depth perception.
+
§ 3.1 EXTRACTING CONTOURS
+
The shape of an object is described by its silhouette (contours) and interior creases and folds [8, 10]. Both contours and interior lines are needed to give a clear sense of shape [5]. While these lines may be found in the S3D image, they are more easily isolated using information in the disparity map.
+
There are two possible approaches to finding contours (edges). The first is to apply edge detection methods to the raw or preprocessed disparity map.
The second approach is to perform a 3D reconstruction of the scene using the provided disparity maps and apply a silhouette-finding algorithm, such as that of Hertzmann and Zorin, to identify the edges [10].
+
We use the first method, applying edge detection to the raw or preprocessed disparity map, instead of discovering silhouettes from a 3D reconstruction, for three reasons. First, while many of the edges can be found from the reconstruction using Hertzmann and Zorin's approach, more subtle edges occurring where two objects intersect at the same depth are not always identified, as shown in Figure 3 [10]. Second, after identifying the silhouette edges from the 3D reconstruction, the visibility of those edges must then be computed for each eye, as in Kim [12]. However, visibility is given in the disparity map, so recomputing this information is wasteful. Finally, we do not assume that the baseline and focal length of the image are known or can be estimated such that a believable 3D reconstruction can be produced.
+
 < g r a p h i c s >
+
Figure 3: Hertzmann and Zorin's method vs. our method.
+
§ 3.1.1 FINDING EDGES IN A DISPARITY MAP
+
There are two types of edges in a disparity map. The first type occurs at a depth discontinuity, where one object occludes another, creating a jump in neighbouring disparity values. The second type occurs where two surfaces meet at the same depth, or as creases and folds on an object's surface. The first type of edge can be found using a Laplacian or Canny edge detector, as shown in Figure 4.
+
 < g r a p h i c s >
+
Figure 4: The strong edges of a disparity map may be found by a Canny or Laplacian edge detector, but the results are not ideal.
+
The second type of edge is more elusive. Adjusting the parameters of a Laplacian or Canny detector can find these edges, but not uniquely, as shown in Figure 5.
+
 < g r a p h i c s >
+
Figure 5: The second type of edge is hard to detect uniquely using Canny or Laplacian detectors.
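As a concrete illustration of why type one edges are easy and type two edges are hard, a depth discontinuity produces a large Laplacian response while two surfaces meeting at the same depth produce little or none. The sketch below is our own minimal NumPy illustration (the function name and threshold are ours; a full pipeline would typically use OpenCV's `cv2.Canny` instead):

```python
import numpy as np

def laplacian_edges(disparity, thresh=1.0):
    """Flag type one (depth-discontinuity) edges with a 4-neighbour Laplacian."""
    d = disparity.astype(float)
    lap = np.zeros_like(d)
    lap[1:-1, 1:-1] = (d[:-2, 1:-1] + d[2:, 1:-1] +
                       d[1:-1, :-2] + d[1:-1, 2:] - 4.0 * d[1:-1, 1:-1])
    return np.abs(lap) > thresh

# Synthetic disparity map: a near square (disparity 20) occluding a far
# background (disparity 5). The jump at the square's border is a type one edge.
disp = np.full((32, 32), 5, dtype=np.uint8)
disp[8:24, 8:24] = 20
edges = laplacian_edges(disp)  # fires on the border, silent on flat regions
```

A type two edge, by contrast, is a crease between regions of equal depth; its Laplacian response is weak or zero, which is why a plain detector cannot isolate it uniquely.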
+
Our method successfully and uniquely identifies these edges, as shown in Figure 6.
+
 < g r a p h i c s >
+
Figure 6: The second type of edge is easy to detect uniquely using our method.
+
To make these low-contrast edges more visible, we can convert disparity to depth and apply a bilateral filter to smooth the plateaus in the result, as suggested by [16]. However, while this improves the visibility of type one edges, it does not improve visibility for all type two edges, as shown in Figure 7.
+
Figure 7: Our method improves visibility for type two edges. (a) A stereoscopic 3D image with creases in the background; (b) background creases are not found using [16]'s method; (c) background creases are found using our method.
+
We note that finding type two edges, or finding edges in a low-contrast or noisy region, is known to be a difficult problem [19]. Savant indicates that Laplacian, Canny, and other detectors are not able to uniquely identify edges in low-contrast areas [21]. And while second-order derivative methods can identify some edges that are zero-crossings, not all type two edges are zero-crossings.
+
Hence, we propose the following method to identify edges in disparity maps. First, we use a Canny edge detector, as suggested by Gelautz and Markovic, to identify type one edges [6].
+
Next, we improve the visibility of type two edges. Hertzmann suggested rendering a scene where different coloured directional lights are cast along the positive and negative axes onto a 3D model to produce a brightly-coloured normal map from which edges could be found [10]. In order to apply this technique to our disparity maps, each pixel needs a surface normal. We assign surface normals by applying a simple surface triangulation to each map. Each pixel position $(x, y)$ is a vertex with depth $z$ equal to the disparity at that position. Triangular "faces" are formed by a point $p$ in the disparity map and two of its immediate neighbours, $q$ and $r$.
A normal can then be calculated for each of these faces, as well as a vertex normal from the average of the eight adjacent triangular face normals.
+
The normals are multiplied by a directional light vector to enhance visualization, as illustrated in Figure 8. However, when directional lights are cast onto the lit normal map, we do not observe a smoothly shaded cat as expected. Instead, the normal map appears stepped, with rings of front-facing planes depicted in dark red. This stepped appearance is a consequence of the limited dynamic range of most disparity maps. A perfectly smooth surface cannot be depicted in this discrete space, resulting in many pixels being assigned the same integer disparity instead of their actual values. These artificial edges make discovery of actual edges, such as the interface between the wall and floor, difficult to achieve.
+
In order to remove the stepped or plateaued appearance, floating point disparities are needed, along with a smoothing operator to reduce the discretized appearance. Ideally, converting the disparity map from 8-bit to floating point and applying a simple out-of-the-box smoothing operation would smooth these plateaus out. However, directly applying a bilateral filter will preserve or enhance these edges, and a Gaussian or box filter would soften all edges indiscriminately, effectively blending objects together. Instead, to smooth these plateaus and generate a smooth lit disparity map with preserved edges, we:
+
1. Compute the strong edges using a Canny filter, dilating the result to produce an edge mask where edges have a width of 10 pixels.
+
2. Calculate the surface normals via triangulation as previously discussed. Use larger triangles in regions away from edges and smaller triangles along edges. This hybrid approach gives us clear, prominent lines corresponding to key edges, and additional smoothing elsewhere. Do this for both the discrete (un-smoothed) and floating point (smoothed) disparity maps.
+
3.
Cast directional lights to colourize and produce the smoothed and un-smoothed maps. This yields Figure 8(a) and Figure 8(b), respectively.
+
4. Compute the complexity of the discrete (un-smoothed) disparity map, $\alpha$, as the number of observed disparities. Apply a bilateral filter to the smoothed map $\frac{\alpha}{10}$ times${}^{3}$.
+
5. To correct the blown-out edges caused by the previous step, extract the strong mask edges from the un-smoothed map (that is, the pixels of the un-smoothed map coinciding with the pixels of the mask generated in step 1) and superimpose them on the smoothed map. Apply a bilateral filter to the smoothed map $\frac{\alpha}{10}$ times to help blend the edges in.
+
6. Overlay the original, un-dilated version of the mask on top of the smoothed map, as seen in Figure 8(c), to aid in the identification of strong edges.
+
We can now apply the Canny edge operator to the smoothed and coloured map in an automated fashion.
+
In general, we seek to compensate for less detailed masks with more permissive Canny thresholds that yield a more detailed final edge set. Conversely, more detailed masks imply that a stricter threshold should be used, to prevent an overly noisy final edge set. Let the number of mask pixels be $\beta$. Let $\beta$ divided by the total number of pixels be $x$. We recognize that the level of detail in the mask, $x$, is inversely proportional to the number of pixels desired in the final edge set.
+
We also want to take into account the aforementioned disparity map complexity, $\alpha$. This is another inversely proportional relationship, between disparity map complexity and the target number of pixels in the final edge set. The more complex the disparity map, the greater the number of easily identifiable edges, and the less permissive the Canny threshold is required to be.
Conversely, the less detailed the disparity map, the more difficulty Canny will have extracting edges from it, and the more permissive we should make the Canny threshold.
+
We will use these two complexity measures - and the inversely proportional relationships that we observed - to select Canny thresholds automatically. In so doing, we are letting the image, not the user, do the talking.
+
Let $\phi = \frac{1}{\alpha x}$ be the target number of pixels in the final edge set.
+
We want the final edge set to contain at least as many pixels as the mask; only then can more edges be found to supplement the mask edges. Therefore, we modify our target to $(1 + \sigma)\beta$ where $\sigma = \min(3, \phi)$. Notice that we cap the contribution of $\phi$ at 3. This is because, experimentally, we have found that larger values introduce a lot of noise.
+
What we have established is a target number of final edge pixels, not a Canny threshold parameter. But each Canny threshold parameter will produce a certain number of edge pixels: the higher the threshold, the fewer pixels in the final result; the lower the threshold, the more pixels. The minimum threshold parameter is $\min = 0$ and the maximum is $\max = 255$. Using these boundaries, we can conduct a binary search for the ideal parameter. We start with a threshold parameter midway between min and max. We then count the number of pixels in the resulting edge set. If the result is below target, we need to be more permissive, so we lower our max and try a lower threshold in the next iteration. If the result exceeds the target, we need to be less permissive, so we increase our min. We stop once max = min, or the edge pixel count equals the target.
+
Once edges are found, we use the findContours function in OpenCV to extract curve points from the raster image. The extracted curve points are processed to remove curve duplication.
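The threshold search described above can be sketched as follows. Here `count_edge_pixels` is our stand-in for running Canny at a given threshold and counting the resulting edge pixels, which we assume is (roughly) monotonically decreasing in the threshold:

```python
def find_canny_threshold(count_edge_pixels, target, lo=0, hi=255):
    """Binary-search for the Canny threshold whose edge set is closest to
    `target` pixels. Stops when lo == hi or the target is hit exactly."""
    while lo < hi:
        mid = (lo + hi) // 2
        n = count_edge_pixels(mid)
        if n == target:
            return mid
        if n < target:
            hi = mid       # too few pixels: be more permissive (lower threshold)
        else:
            lo = mid + 1   # too many pixels: be stricter (higher threshold)
    return lo
```

For instance, with a toy monotone count function `lambda t: 255 - t` and a target of 100 edge pixels, the search settles on a threshold of 155.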
The curves are also split whenever adjacent point disparities differ by more than a small threshold, under the assumption that the adjacent points belong to separate surfaces. We note that some of the original detail may be lost in this process.
+
${}^{3}$ $\frac{\alpha}{10}$, and other parameter values, were selected after applying the method to our test set of 12 images and choosing the parameter value that produced the best results overall for all images.
+
 < g r a p h i c s >
+
Figure 8: Directionally lit disparity maps.
+
§ 3.1.2 SPLITTING LINES FOR CONSISTENCY
+
Edges extracted from the disparity maps are rendered as smooth splines for the corresponding view. However, view-dependent rendering introduces inconsistencies, which arise when edges are found in one disparity map but not the other due to the slight variation in viewpoint.
+
To prevent these inconsistencies - which cause discomfort - we arbitrarily select the left view to provide the "true" edges. Then, any line in the left view that is visible in the right is warped to the right view using the disparity map. Lines only visible in the left are rendered only in the left; likewise, lines only visible in the right are rendered only in the right. View visibility is determined by the disparity map: any pixel $p(x, y)$ is visible in both left and right views if $L(x, y) = d$ and the corresponding pixel $R(x - d, y) = d$, where $L$ and $R$ are the left and right disparity maps respectively, and $d$ is the disparity of pixel $p$.
+
Contours are warped by their control points and then rendered in a view-dependent manner. While this method of rendering potentially introduces inconsistencies, warping the rendered lines instead would introduce noise, as the lines may lie on surfaces with different disparities.
+
Finally, we note that long edges extracted from the disparity map may span multiple objects and both occluded and visible regions.
Warping the entire stroke can result in partially occluded edges being visible in the wrong view. To prevent this, we split strokes whenever the visibilities of adjacent control points change, i.e., from visible to occluded. We also split strokes when the disparities of adjacent control points differ by more than some threshold $\tau_{d}$${}^{4}$. Strokes that cannot be warped, because they are only visible in one view, are rendered in only one view.
+
§ 3.1.3 CONSISTENT CONTROL POINT STYLIZATION
+
Monoscopic and S3D line drawings are often stylized, representing objects using rough, overdrawn and jittery lines. To increase visual interest, we provide the option of stylizing S3D lines with an overdrawn or jittered style.
+
Kim et al. discussed a method for stylizing stereoscopic 3D lines [12]. Their method performs stylization after lines have been discovered for both left and right images. Specifically, it links line segments in the left view to the matching segments in the right and consistently renders texture to these linked and parameterized curves.
+
Our approach is similar but stylizes lines prior to warping, by replicating and transforming control points. To produce overdrawn lines, curves are duplicated a fixed number of times. Lines can then be scaled (about their centroids or the centre of the image) by a small random factor, so that the overdrawn lines are visibly distinct. A jittered or rough appearance is created by adding small random translation vectors to each control point of a line. Note that, prior to altering the control points of a line, it is important to store the original, pre-transformed disparities of those control points, so that they can be correctly warped after stylization.
+
§ 4 SHADING
+
S3D line drawings depict the shapes of objects. However, these lines do not convey information such as surface texture or roundness - but shading and highlights do. Object shading and shadows are monocular depth cues [7].
Shading, particularly involving specular highlights, is view-dependent [25]. Adding monocular depth cues to S3D line drawings can improve understanding of surface shape and enhance depth perception. Also, because lighting is view-dependent, the left and right views will be stylized independently to preserve their separate lighting characteristics.
+
To produce the stylized shading, the left and right input images are converted to grayscale and stylized using a variety of algorithms. While any stylization algorithm or filter could be used, we chose those that do not explicitly render contours, as our method produces those separately. Finally, the stylized shading and S3D lines are combined to produce the final image.
+
§ 5 RESULTS
+
We tested our method on several S3D images, some of which are shown in Figure 9. Seventy-five percent of the images used as input to our method have high-quality or near-perfect disparity maps.
+
Figure 10 illustrates some of the 3D line drawings produced by our method. Note that, since lines are generated from the disparity map, contours and interior lines are the only lines visible.
+
Stylizing the S3D lines yielded the images depicted in Figure 11. Note that even with jittered and overdrawn lines, the left and right views remain consistent.
+
We used three types of stylization for shading: toon-like shading produced by quantization of the RGB image, impressionist, and halftoning with large particles. None of these stylizations explicitly render contours, so there is no overlap between shading and the line drawing. We combined the stylized S3D line drawings with stylized shading to produce our final images, some of which are shown in Figure 12. To ensure the visibility of the S3D lines, we reduced the darkness of shaded regions by 50%.
+
${}^{4}$ We used $\tau_{d} = 5$, as we observed that curve points are close together, with no large jumps in depth.
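The toon-like shading mentioned above can be approximated by simple intensity posterization. This is a minimal sketch of the idea, not our exact implementation; the function name and the four-level choice are illustrative:

```python
import numpy as np

def toon_quantize(gray, levels=4):
    """Posterize a grayscale image into `levels` flat tonal bands,
    approximating a toon-like shading style."""
    step = 256 // levels
    return (gray // step) * step

# Four input intensities collapse onto four flat bands.
bands = toon_quantize(np.array([0, 70, 130, 250], dtype=np.uint8))
```

Quantizing each RGB channel the same way yields the colour variant; coarser `levels` give flatter, more cartoon-like regions.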
+
 < g r a p h i c s >
+
Figure 9: Sample inputs to the S3D line drawing algorithm.
+
 < g r a p h i c s >
+
Figure 10: S3D line drawings.
+
 < g r a p h i c s >
+
Figure 11: S3D line drawings (red/cyan anaglyph).
+
 < g r a p h i c s >
+
Figure 12: Final stylized S3D line drawings (red/cyan anaglyph).
+
We also applied our method to S3D photos with computed disparity maps. These maps contain noise, disparity mismatches, and obscured object contours, which pose a challenge for many S3D algorithms. Despite these disparity errors, our method is still able to produce line drawings, as demonstrated in Figure 13, which shows our method's performance on S3D photos with disparity maps of varying quality. Note how some lines appear to be missing, typically because they are not visible in the disparity map.
+
§ 6 EVALUATION
+
To evaluate our results for quality of depth reproduction and viewing comfort, we conducted a short study. For health and safety reasons due to COVID-19, our study was conducted remotely. We asked participants to view a set of 24 images from our dataset in their homes, using anaglyph glasses, a 3D TV, a VR headset, or free-viewing. For each image, participants were asked to rate the viewing comfort and apparent depth on a Likert scale from 1 to 5. Participants were also asked to rate how aesthetically pleasing they found each image. Images were randomized, and participants were not aware of what they would be viewing.
+
Overall, participants found that our consistent line drawings are more comfortable, reproduce a greater sense of depth, and are more aesthetically appealing than the raw, inconsistently-rendered line drawings. Table 1 indicates the average difference between each of our results and the inconsistently-rendered line drawings. This difference is a percentage increase from the raw, unstylized lines to our method.
So, for example, the first cell demonstrates that the average score was 26% better for our consistently-rendered, unstylized lines than for raw, unstylized lines rendered inconsistently. Note that adding stylization to our lines improved comfort, depth reproduction, and the overall aesthetic. We expected participants to find the stylized lines more aesthetically pleasing, but we did not anticipate they would find these more comfortable or conducive to a greater sense of depth. However, the stylized lines are more prominent than the unstylized lines and may provide participants with more visual information to fuse, resulting in greater viewing comfort and depth. Also note that adding shading, a monocular depth cue, significantly improved all metrics, regardless of how that shading was rendered. Even the halftone/newsprint shader applied inconsistently, which renders large circles into the scene, was more comfortable, produced more depth, and was more aesthetically pleasing than plain lines. This is interesting, because these images were 55% less consistent than our plain line drawing. We computed consistency by comparing the colour values of pixels that should match according to the disparity map.
+
 < g r a p h i c s >
+
Figure 13: Line drawings from photos with disparity maps of varying quality.
+
| | comfort | depth | appearance |
| --- | --- | --- | --- |
| our unstylized lines | 26% | 14% | 20% |
| our stylized lines | 32% | 16% | 26% |
| our unstylized lines with consistent shading | 46% | 46% | 71% |
| our unstylized lines with inconsistent shading | 27% | 42% | 41% |
| our stylized lines with consistent shading | 48% | 45% | 59% |
| our stylized lines with inconsistent shading | 52% | 41% | 66% |
+
Table 1: The difference in comfort, depth reproduction, and aesthetic appearance between raw, unstylized, inconsistently rendered lines and our method. Note that the averaged participant scores for view-dependent, unstylized lines are presented in Table 2.
+
| | comfort | depth | appearance |
| --- | --- | --- | --- |
| view-dependent, unstylized lines | 2.5 | 2.6 | 2.1 |
| our stylized lines with consistent shading | 3.7 | 3.8 | 3.4 |
+
Table 2: Averaged participant scores for the view-dependent, unstylized, inconsistently rendered lines against which our various methods were compared. Averaged participant scores for stylized lines with consistent shading are provided for reference.
+
Ideally, our participants would be a random sample of individuals with varying backgrounds and exposure to S3D. However, as we were required to run this study remotely, we relied on finding individuals who owned their own S3D viewing equipment or were able to free-view. Hence, our participant pool was drawn from individuals who could be considered S3D enthusiasts. Consequently, participants were critical, and quick to identify and articulate flaws in images, such as window violations and ghosting. However, we appreciated their honest and experienced assessments, as they provided a clearer and more concise evaluation of our results.
+
We also note that the study conditions were not ideal. First, we relied on participants to self-report their ability to perceive depth. Second, due to the rarity and variety of S3D viewing equipment available, it is unlikely that any two participants used the exact same viewing technology. We categorized viewing mechanisms into three groups: anaglyph, 3DTV/3DS/VR, and free-viewing. Of the 16 participants, 50% used anaglyph glasses, which are prone to crosstalk and ghosting that may cause discomfort. A smaller number of participants, 31.2%, used some other 3D viewing apparatus, such as a 3DTV. This technology may exhibit some crosstalk or ghosting, but significantly less than anaglyph glasses, typically making it more comfortable to use. Finally, about 18.8% of participants free-viewed the images.
The study conditions may thereby have contributed disproportionately to viewing discomfort.
+
§ 7 CONCLUSION AND LIMITATIONS
+
Our algorithm successfully produces stylized stereoscopic 3D line drawings from photographs. These line drawings reproduce 3D shape, especially when combined with monoscopic shading. Furthermore, for fine-grained stylizations, inconsistent shading did not have a negative impact on the perception of depth or comfort. However, large-grained stylizations, such as halftoning, were not as comfortable as their consistently-shaded variations, as expected.
+
A major limitation is that the quality of our method's results largely depends on the quality of the disparity maps provided. Noisy, non-smooth disparity maps, as well as those with obfuscated or obscured object contours, will likely produce noisy line drawings where the object contours are not clearly visible. This, in turn, may produce line drawings with no identifiable subject. Overcoming this limitation is the subject of future work.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/QhN4tUZd8r/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/QhN4tUZd8r/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..a712927bb314e3eede073cc43f44cd3d93cb52a0
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/QhN4tUZd8r/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,437 @@
# Paper Forager: Supporting the Rapid Exploration of Research Document Collections
+
Author One, Author Two, Author Three
+
![01963e9f-6de7-7149-9afd-2b2b86b6494e_0_165_391_1469_498_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_0_165_391_1469_498_0.jpg)
+
Fig. 1. Three views of the Paper Forager system: (A) the initial state of the system, showing all 5,055 papers in the sample corpus from the ACM CHI and UIST conferences; (B) the filtered results, showing only the papers containing an individual keyword; and (C) a sample paper overview page, which further allows a user to click on a page to read the content.
+
Abstract - We present Paper Forager, a web-based system which allows users to rapidly explore large collections of research documents. Our sample corpus uses 5,055 papers published at the ACM CHI and UIST conferences. Paper Forager provides a visually based browsing experience, allowing users to identify papers of interest based on their graphical appearance, in addition to providing traditional faceted search techniques.
A cloud-based architecture stores the papers as multi-resolution images, giving users immediate access to reading individual pages of a paper, thus reducing the transaction cost between finding, scanning, and reading papers of interest. Initial user feedback sessions elicited positive subjective responses, while a 24-month external deployment generated in-the-wild usage data which we analyze. Users of the system indicated that they would be enthusiastic to continue having access to the Paper Forager system in the future.
+
Index Terms - literature review, document search, document browsing, corpus visualization
+
## 1 INTRODUCTION
+
Literature reviews can be a long and tedious task, requiring information seekers to sort through a large number of documents and follow extended chains of related research. With paper proceedings, users can easily scan and read any of the papers, but finding specific papers can be difficult.
+
In contrast, online digital libraries and search systems improve the ability to find specific papers of interest. A number of new systems have been developed [1]-[6] which provide advanced faceted search and filtering capabilities. However, these systems are driven by metadata and textual content and ignore visual qualities such as figures, graphics, layout, and design. Furthermore, such systems require the user to download the source PDF file before the paper can be read in detail. We seek a single system that can support a continuous transition between finding, scanning, and reading documents within a corpus.
+
- Submitted to GI 2021
+
Web technologies such as DeepZoom [7] and Google Maps support browsing of extremely large image-based data sets through the progressive loading of multi-resolution images. This type of architecture is beneficial in that it gives users rapid access to detailed content. However, we are unaware of any prior systems which have used such an architecture for document exploration.
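As background on the progressive-loading idea, a DeepZoom-style pyramid stores each image at a sequence of zoom levels, each half the resolution of the next, cut into fixed-size tiles that a client can fetch independently. The sketch below is our own illustration of the standard level and tile arithmetic, not Paper Forager's actual code:

```python
import math

def tile_grid(width, height, tile_size=256):
    """Return (tiles_x, tiles_y) for each level of a DeepZoom-style
    pyramid, from the coarsest 1x1-pixel level up to full resolution."""
    levels = math.ceil(math.log2(max(width, height))) + 1
    grid = []
    for level in range(levels):
        scale = 2 ** (levels - 1 - level)      # each level halves the next
        w, h = math.ceil(width / scale), math.ceil(height / scale)
        grid.append((math.ceil(w / tile_size), math.ceil(h / tile_size)))
    return grid

# A 1024x768 page image: coarse levels fit in one tile, so an overview of
# thousands of pages stays cheap; full resolution needs a 4x3 tile grid.
pyramid = tile_grid(1024, 768)
```

Because a browsing client only fetches the tiles covering its viewport at the current zoom level, bandwidth scales with what is visible rather than with corpus size.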
+
In this paper we present Paper Forager, a system to support the rapid filtering and exploration of a collection of research papers. Paper Forager relies on a cloud-based architecture, storing the papers as multi-resolution images that can be progressively downloaded on demand. By using this architecture, we allow the user to transition from browsing an entire corpus of thousands of papers to reading any individual page within that corpus, within seconds. In doing so, we accomplish our goal of reducing the transaction cost between finding, scanning, and reading papers of interest.
+
Our main research contribution is the development of a novel system for literature review, which synthesizes previously explored concepts such as faceted search and zooming-based interfaces. We present the design and implementation of Paper Forager and its associated architecture, implemented on a sample corpus of 5,055 papers from the ACM CHI and UIST conferences. Additionally, we present results gathered from initial user feedback and a 24-month external deployment of the system. Users of the system felt it was easy and enjoyable to use, and the majority indicated that they would like to continue using Paper Forager in the future.
+
## 2 RELATED WORK
+
### 2.1 Faceted Search
+
Faceted search allows users to explore a collection by filtering on multiple dimensions. While powerful, representing all of the available options in a user interface can be problematic [8]. Many papers have looked at improving the faceted searching experience. FacetLens [9] represents facets as nested areas on the interface, and FacetAtlas [10] displays the relationships between related facets through a weighted network diagram and colored density map. Pivot Slice [6] used a collection of research papers as a sample corpus, and allows users to explore relationships between facets using direct manipulation.
The faceted search system in Paper Forager is designed to be more approachable for new users than the above systems, at the expense of being less versatile in the types of queries which can be performed.
+
### 2.2 Visual Document Browsing
+
Numerous research projects have explored visual browsing of document collections.
+
The WebBook and Web Forager [11] pre-loaded and rendered web pages so they could be rapidly flipped through, and more recently, Hong et al. [12] looked at improving the digital page-flipping experience. Document Cards [13] extracts important terms and images from a document and displays them in compact representations.
+
The DocuBrowse system [14] is designed for browsing and searching documents in large online enterprise document collections. Similar to Paper Forager, DocuBrowse includes both a faceted search interface and visual thumbnails of results. While source content can be opened, it is not clear how long it would take to download and view an individual document. Paper Forager expands upon ideas from the DocuBrowse interface, and uses a cloud-based architecture to support rapid viewing through the progressive loading of multi-resolution images. Paper Forager also takes advantage of connections between documents, such as citation networks, while DocuBrowse supports a wider range of file types without examining their interconnectivity.
+
While not directly related to document browsing, the PhotoMesa system [15] allows zooming into a large number of images which are grouped and sorted by available metadata. Similarly, the Pivot Viewer component of the Silverlight framework [16] supports faceted searching of a collection of images based on associated metadata. Results are displayed using a dynamically resizing grid of images, using the Silverlight Deep Zoom technology [7]. We are unaware of attempts to use this type of technology for the exploration of research document collections.
Paper Forager implements an architecture similar to Pivot Viewer, but with a customized design and interface for the purpose of rapidly exploring a corpus of research literature.
+
+### 2.3 Research Literature Exploration Tools
+
+There are many deployed systems which provide search access to collections of research papers, including Google Scholar [17], Mendeley [18], CiteSeerX [19], Microsoft Academic Search [20], and the ACM Digital Library [21]. For a thorough analysis, readers are directed to Gove et al.'s evaluation of 14 such systems [4], which highlights the strengths and weaknesses of each system.
+
+There are also research systems which have looked at the topic of research literature exploration. Aris et al. [1] and PaperLens [5] are visualization tools which look at paper metadata to show temporal patterns of paper publication, and each uses citation links among papers to explore a field's rate of growth and identify key topics. Along similar lines, the PULP system [22] uses reinforcement learning to find and present a visualization of how the topics in a corpus of research papers have changed over time.
+
+GraphTrail [2] is a system for exploring general-purpose large networked datasets, and used a corpus of ACM CHI papers as a sample database. GraphTrail supports the piecewise construction of complex queries while keeping a history of the steps taken, which allows for easy backtracking and modification of earlier stages. Systems such as Citeology [23] and CiteRivers [24] support exploring scientific literature through their citation networks and patterns, with CiteRivers also including additional data about the document contents. PaperQuest [25] aims to help researchers make efficient decisions about which papers to read next by displaying the minimum amount of relevant information, and considering papers for which the researcher has already displayed an interest.
+
+Another research exploration tool is the Action Science Explorer (ASE) [3], [4].
The ASE system uses a citation network visualization in the center of the interface and makes use of citation sentence extraction, ranking and filtering by network statistics, automatic document clustering and summarization, and reference management.
+
+The main difference between Paper Forager and the above systems is that while these existing systems all perform some amount of analysis, visualization, or filtering based on the metadata or text of a paper, they hide the design, layout, and images of the actual research documents. Furthermore, with existing systems, users must wait until the document is downloaded before reading the paper in detail. Paper Forager combines a basic level of faceted metadata searching with an emphasis on the visual content of the documents, and provides immediate access to reading individual pages of the documents.
+
+An example of a visually-focused research exploration tool is the UIST Archive Explorer [26], which was created for the 20th anniversary of the UIST conference and provided an interface for browsing the collection of papers previously published at UIST. Papers could be viewed by year, keyword, or author. Selecting a paper caused the pages of the paper to be arranged in a row, and the user could zoom in for more details. Compared to Paper Forager, the UIST Archive Explorer used a smaller corpus of documents (578 vs. 5,055), was hosted locally (whereas Paper Forager uses a cloud-based architecture), and did not allow for navigation between papers based on their citation networks.
+
+## 3 THE LITERATURE REVIEW PROCESS
+
+The theory of information foraging [27] suggests that information seekers try to find documents with potentially high value and then use the available informational "scent" cues to determine which documents, if any, are worthwhile to examine further.
We can thus think about the process of literature review as being composed of three main stages:
+
+Finding: filtering the collection of all possible papers down to those you might want to read, either by browsing the collection, or explicitly searching.
+
+Scanning: making a decision for each individual paper as to whether it is worthwhile to read based on the available information scent cues.
+
+Reading: looking through the content of the paper for useful information.
+
+In order to maintain flow [28] during the literature review process, it is desirable for the transitions between the stages to be as smooth as possible. Research exploring the dynamics of task switching [29], [30] has shown that small interaction improvements can cause categorical behavior changes that far exceed the benefits of decreased task times.
+
+When papers were primarily distributed in printed proceedings, the finding phase of the process was inefficient. However, once a collection of possibly relevant papers was found, the process of scanning the papers consisted of flipping through the pages. The informational scent cues [27] presented to the information gatherer to make a reading decision consisted of what was visible in the printed form of the paper - namely the title, text, figures, and the paper's overall graphic design and layout. Based on these cues, a decision to read or not would be made, and the cost of transitioning between the scanning and reading phases was minimal (Fig. 2).
+
+With digital libraries the finding phase of the process is much more efficient, and the transition cost between finding and scanning is greatly reduced. However, the available informational scent cues presented during the scanning phase were reduced to basic textual information such as the title, authors, and sometimes the abstract of the paper.
Advanced paper browsing tools such as ASE [3] provide additional functionality in the finding phase as well as incorporating additional scent cues to inform the reading decision, such as visualizations and statistical measures of keywords, authorship, and citation networks. But still, the images and visual design of the original paper are not available to the researcher during the scanning phase; the graphics of a paper are not visible until after the decision has been made to move from scanning to reading. Additionally, the transaction cost when deciding to read a paper is relatively high: the paper needs to first be downloaded, which even on a fast network can often take between 3 and 15 seconds, and then it is opened for reading in a secondary application (or at least a new window within the same application). Besides the time cost, the context switch to a secondary application can disrupt the flow of the information gathering process.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_2_150_566_725_250_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_2_150_566_725_250_0.jpg)
+
+Fig. 2. Four main approaches to paper discovery and the context switches required between the various stages of the literature review process.
+
+### 3.1 Design Goals
+
+With Paper Forager, we want to take the quick searching and filtering benefits of modern advanced paper discovery systems and combine them with the visual qualities and benefits of paper proceedings. Additionally, we want to reduce the cost of transitioning between stages (Fig. 2), which will improve the flow of the literature review process and encourage a wider exploration of the paper space. By supporting more exploration, the system may put users in a position to make more serendipitous discoveries [31].
+
+## 4 Paper Forager
+
+We created Paper Forager to address the problems encountered while exploring large collections of research papers. As a sample corpus we used 5,055 papers published at the ACM CHI and UIST conferences.
The metadata was collected using the Microsoft Academic Search API [20], and the source documents were automatically downloaded using links from Google Scholar where possible and manually downloaded from the ACM DL otherwise.
+
+The Paper Forager interface is composed of a set of interface controls at the top of the screen, and a main display area below. On startup, Paper Forager arranges all documents in the collection in the main display area, sorted with the oldest papers at the top and the newest at the bottom (Fig. 1A).
+
+### 4.1 Interface Controls
+
+Along the top of the window are the interface controls for refining the displayed collection of papers, which include the search field, histogram filters, author list, history bar, and saved paper controls (Fig. ).
+
+#### 4.1.1 Search Field
+
+On the left is the search field (Fig. 3), which initiates keyword searches of the titles and abstracts of the papers, as well as searches for authors and conference titles. The search system will automatically recognize author and conference names. For example, a search for "database" would find all papers with the term "database" in the title or abstract (Fig. 1B), whereas a search for "Buxton" would be recognized as an author search for "William Buxton" and would find all papers published by that author. Additionally, searching for "CHI" or "UIST" will return all papers published at the respective conference, and adding a year to the end of a search term, such as "CHI 2007", modifies the filters to show only the papers from the 2007 edition of the CHI conference.
+
+By default, entering a term in the search field will perform a new query using the entire collection as input, but prefacing a search term with a plus sign (+) creates an additive search filter. For example, if after searching for "Buxton" the user searches for "+mouse", only papers authored by William Buxton which include the term "mouse" will be displayed.
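The additive-filter behaviour can be sketched in a few lines. This is a minimal illustration using a hypothetical in-memory paper model (dicts with "title", "abstract", and "authors" keys), not the deployed Silverlight code; the real system also recognizes conference names and trailing years.

```python
# A plain term starts a fresh query over the whole collection; a leading "+"
# narrows the current result set (an additive search filter).

def matches(paper, term):
    """True if the term appears in the title, abstract, or an author name."""
    t = term.lower()
    return (t in paper["title"].lower()
            or t in paper["abstract"].lower()
            or any(t in a.lower() for a in paper["authors"]))

def search(collection, current_results, query):
    if query.startswith("+"):
        # Additive filter: apply only to the current results.
        return [p for p in current_results if matches(p, query[1:].strip())]
    # Fresh query: use the entire collection as input.
    return [p for p in collection if matches(p, query)]

corpus = [
    {"title": "A mouse study", "abstract": "pointing devices",
     "authors": ["William Buxton"]},
    {"title": "Database UIs", "abstract": "query interfaces",
     "authors": ["Jane Doe"]},
]
hits = search(corpus, corpus, "Buxton")   # author search
hits = search(corpus, hits, "+mouse")     # narrow to papers mentioning "mouse"
```

The same function handles both modes, which mirrors how a single search field can drive either a new query or a refinement.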
+
+#### 4.1.2 Histogram Filters
+
+Beside the search field are histogram filters displaying the number of papers published in each year and the relative distribution of the number of citations each paper has received (Fig. 4). Users can click the Year and Citations headings to set the sorting order of the papers in the main display area. As search events occur, the histograms dynamically update and animate to reflect the distribution for the actively displayed grid of papers. Under each histogram is a dual-value slider which allows the selection of displayed papers to be limited to a specific range of years or number of citations.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_2_929_986_730_219_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_2_929_986_730_219_0.jpg)
+
+Fig. 4. (A) Histogram filters and Author List for all papers in the CHI and UIST corpus and (B) after searching for the term "tangible".
+
+#### 4.1.3 Author List
+
+To the right of the filter histograms is a list of the top authors of the papers within the current search results (Fig. 3). For example, Fig. 4A shows that Ravin Balakrishnan has the most papers overall in the database, while Fig. 4B shows that Hiroshi Ishii has the most papers for the search term "tangible". Clicking on an author name is equivalent to creating an additive search for the author, so in Fig. 4B, clicking on "Scott Klemmer" is equivalent to entering "+Scott Klemmer" in the search field, and will result in showing all papers for the term "tangible" which have Scott Klemmer as an author.
+
+#### 4.1.4 History Bar
+
+Previous research has demonstrated the benefits of keeping a history of actions during information foraging [2], [32]. The history bar in Paper Forager is designed for this purpose and provides a way for users to see how they arrived at their current view and the ability to easily backtrack if desired.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_2_177_1869_1443_221_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_2_177_1869_1443_221_0.jpg)
+
+Fig. 3. The interface controls of Paper Forager.
+
+Fig. 5. History tokens for (A) search terms, (B) conferences, (C) authors, (D) saved paper lists, (E) individual papers, (F) references of a paper, (G) citations of a paper, and tokens with filters applied (H-K).
+
+Each type of search event has its own history token icon (Fig. 5, A-G) and as the histogram filter sliders are adjusted, the ranges are displayed beside the description of the active search (Fig. 5, H-K). The number of results matching the query is displayed in square brackets at the end of the history token.
+
+Fig. 6. Initial state of the history bar (A) and changes after a series of operations: (B) searching for "mouse", (C) clicking on the author Brad Myers, (D) adjusting the year and citation filters, (E) selecting a paper, (F) viewing that paper's citations, and (G) selecting another paper.
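The history bar can be modeled as a simple stack of query tokens. The sketch below uses hypothetical classes, not the actual Silverlight implementation; the stored result count corresponds to the number shown in square brackets on each token.

```python
class HistoryBar:
    def __init__(self):
        # Initial state: the whole corpus (cf. "All Papers [5055]").
        self.tokens = [("All Papers", 5055)]

    def push(self, label, result_count):
        """Each search or filtering event appends a new token."""
        self.tokens.append((label, result_count))

    def click(self, index):
        """Clicking a token removes all subsequent query events, leaving
        the clicked token as the active search state."""
        self.tokens = self.tokens[: index + 1]

    def remove(self, index):
        """The 'x' button removes a single query from the history list."""
        del self.tokens[index]

bar = HistoryBar()
bar.push('Q "mouse"', 108)
bar.push("+ Brad Myers", 7)
bar.click(1)   # back to the "mouse" query; the author filter is discarded
```

Modeling the history as a truncatable list keeps backtracking trivial: the active state is always the last token.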
+
+Each search or filtering event is accompanied by a new token in the history bar (Fig. 6). As the list of tokens grows longer, the previous ones are minimized to show only their icon, and their full description is displayed in a tooltip.
+
+Inserted between the history tokens are three different separation symbols (Fig. 7): a vertical line when the new state is independent from the previous one, a plus sign when an additive query is entered, and a right-facing arrow when looking at references or citations of a particular paper. Clicking on a token in the history list will remove all subsequent query events, leaving the clicked token as the active search state. The tokens also include an 'x' button to remove the query from the history list.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_3_146_1603_703_129_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_3_146_1603_703_129_0.jpg)
+
+Fig. 7. History token separators.
+
+#### 4.1.5 Saved Paper Controls
+
+Paper Forager allows users to mark papers as saved. The collection of the user's saved papers, as well as all papers saved by the user community, can be accessed through links in the top right corner (Fig. 3). Besides accessing the collection of saved papers for viewing, clicking the "Reference List" button copies a formatted list of paper references suitable for a "References" section of a paper to the user's clipboard.
+
+### 4.2 Main Display Area
+
+The main display area offers a collection view, a paper view, and a page view.
+
+#### 4.2.1 Collection View
+
+The collection view is used to display all papers that match the current query and filters. Papers within the collection are sized so that all results are initially within view. As searches are performed, the grid of papers is animated to remove those papers which do not satisfy the query and re-arrange those that do to fill the available space (Fig. 8).
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_3_909_515_723_175_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_3_909_515_723_175_0.jpg)
+
+Fig. 8. Stages of the reordering animation. (A) initial state, (B) removed papers fade away, (C) remaining tiles move and resize into new position.
+
+The total animation time is 1.5 seconds, where the outgoing tiles fade out for the first 0.75 seconds, and the remaining tiles rearrange for the next 0.75 seconds. A similar animation occurs when papers not previously on the screen are added.
+
+As the cursor moves around the grid of displayed papers, the paper under the cursor highlights and a large tooltip is displayed with the paper's title, abstract, authors, year, conference, and number of citations (Fig. 9). Clicking on a paper will bring that paper into focus in the paper view.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_3_920_1142_711_354_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_3_920_1142_711_354_0.jpg)
+
+Fig. 9. Example of a paper tooltip.
+
+#### 4.2.2 Paper View
+
+Once a paper is selected, either by clicking on a single paper, or by executing a query with only one result, it is displayed in the paper view (Fig. 10). Here, the composite image of the paper is fit to the main canvas area, with additional metadata including the title, abstract, authors, venue, and year displayed on the right. A badge icon can be clicked to add the paper to the user's list of saved papers. Clicking an author's name will load all papers by that author (equivalent to searching for the author's name), and there is also a link to the paper's DOI to view its official page in the ACM Digital Library.
+
+The lower section of the side panel contains thumbnails for each of the papers in the corpus which are referenced by the active paper, as well as all the papers which cite the active paper. Hovering over these thumbnails triggers the associated paper tooltip (Fig.
9), and clicking on a paper thumbnail adds the paper to the history bar and brings it into focus. Clicking on either of the "References" or "Citations" labels takes the system back to the collection view, displaying all of the referenced/cited papers. Below the paper image is a button to return to the paper collection view, as well as buttons to navigate to the previous and next papers in the current collection. For example, after searching for "mouse" and selecting a paper, repeatedly clicking on "next paper" lets the user flip through all papers for the term "mouse". This functionality is also accessible through the left and right arrow keys.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_4_163_137_713_368_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_4_163_137_713_368_0.jpg)
+
+Fig. 10. Interface elements of the single paper view.
+
+#### 4.2.3 Page View
+
+Clicking on a single page animates the display to fit that page into the view (Fig. 11), allowing users to read individual pages. In this page view, the navigational controls and arrow keys change to support navigation between the pages of the document.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_4_163_938_691_391_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_4_163_938_691_391_0.jpg)
+
+Fig. 11. The page view displays individual pages.
+
+Once the last page in the paper is reached, the view zooms back to the paper view, and subsequent navigation operations will navigate at the paper level. This enables an efficient workflow of first flipping through papers, then going through the pages of an interesting paper, and then coming back out to flip through more papers (Fig. 12). The layout of the main window is designed so that on 24" or larger monitors the body text of the focused page is large enough to be read comfortably. For smaller monitors, or for more detailed examination of a portion of a page, the page view supports zooming and panning with the mouse wheel and left mouse button, respectively.
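The two-level arrow-key navigation described above can be sketched as a small state machine. This is a hypothetical controller (papers are represented only by their page counts), not the actual Silverlight implementation.

```python
class Navigator:
    """In paper view the arrows move between papers; in page view they move
    between pages, and stepping past the last page returns to paper view."""

    def __init__(self, papers):
        self.papers = papers     # list of page counts, one entry per paper
        self.paper = 0           # index of the focused paper
        self.page = None         # None means paper view; an int means page view

    def open_page(self, page=0):
        self.page = page         # zoom into an individual page

    def next(self):
        if self.page is None:
            # Paper view: advance to the next paper in the collection.
            self.paper = min(self.paper + 1, len(self.papers) - 1)
        elif self.page + 1 < self.papers[self.paper]:
            self.page += 1       # next page of the current paper
        else:
            self.page = None     # past the last page: zoom back out

nav = Navigator([10, 4, 8])
nav.open_page()            # read paper 0, starting at its first page
for _ in range(10):        # step through all 10 pages...
    nav.next()             # ...the final step zooms back to paper view
nav.next()                 # now navigating at the paper level again
```

Collapsing page view back to paper view on the final step is what makes the flip-through-papers, read-one, flip-again workflow a single uninterrupted key sequence.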
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_4_173_1551_700_244_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_4_173_1551_700_244_0.jpg)
+
+Fig. 12. Workflow for navigating between and within papers. (Note: "Paper B" has only 4 pages.)
+
+#### 4.2.4 Preloading Images
+
+On a reasonably fast broadband internet connection it takes approximately 2 to 3 seconds to download and display a composite paper image (such as in Fig. ) on a 24" monitor. This is an unacceptable delay if trying to rapidly flip through a collection of papers. To address this, when a paper is brought into single paper view, the images for the previous and next papers are automatically downloaded and composited at the proper resolution so they can be immediately displayed when requested.
+
+#### 4.2.5 Interaction Model
+
+The intent of the Paper Forager design is to support a primary interaction model of searching or filtering for relevant documents, and then clicking on papers or pages to enlarge their view to see them in more detail. Additionally, similar to zooming user interfaces [33], the collection, paper, and page views support interactive zooming and panning. We anticipate that even though the system supports freeform panning and zooming, users will prefer, and gravitate towards, the search/filter/click interaction model.
+
+### 4.3 System Implementation
+
+Paper Forager is implemented as an in-browser application using the Microsoft Silverlight framework. During development, this allowed the application to be used in browsers on both Mac OS X and Windows computers with the Silverlight runtime installed. Recent changes to the plug-in architectures of major browsers now limit the Silverlight runtime to Internet Explorer on Windows.
+
+The components of the deployed system (Fig. 13) are hosted and stored using parts of the Amazon Web Services (AWS) framework. The application binaries, images, and metadata are stored on and hosted from an Amazon Simple Storage Service (S3) instance.
Usage log data and saved paper information are stored in separate AWS SimpleDB (SDB) tables. Due to cross-domain security policies which restrict communication of Silverlight applications, an AWS EC2 server hosts and interprets PHP scripts which facilitate communication between the application and the databases.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_4_952_1178_687_409_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_4_952_1178_687_409_0.jpg)
+
+Fig. 13. System architecture diagram.
+
+#### 4.3.1 Image Pyramids
+
+To enable fast streaming of papers over the internet and allow the papers to be viewed at a range of resolutions from very small thumbnails up to a large size suitable for reading, papers were converted into a collection of "image pyramids" following the Microsoft Deep Zoom file format [7]. Each document is rendered at 14 resolutions, from the smallest size of 1 pixel square, up to the original size of the image, in our case, 10,048 pixels wide by 6,098 pixels tall. At each resolution of the "pyramid", the images are divided into smaller "tiles" so that only the parts of the image which are needed at that resolution are downloaded (Fig. 14).
+
+We tried maximum tile sizes of 256, 512, and 1024 pixels and found that 512-pixel square tiles provided the best performance for the types of images streamed with our system. On the client side, a Silverlight MultiScaleImage component handles downloading and compositing the tiles to display the image at the requested resolution.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_5_156_135_704_253_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_5_156_135_704_253_0.jpg)
+
+Fig. 14. Image Pyramid data format example.
+
+#### 4.3.2 Data Processing
+
+The original PDF versions of the papers go through a multi-stage processing pipeline to convert them into their multi-scale image pyramid format (Fig. 15). First, the PDF files are split into individual pages and converted to JPG image files at 300 dpi.
Using the "Data Sets" feature of Adobe Photoshop, composited PSD files are created combining all the pages of the paper into a single image (Fig. 16). The last step of the process involves converting the large combined JPG image into the image pyramid format.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_5_159_824_672_791_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_5_159_824_672_791_0.jpg)
+
+Fig. 15. Data processing pipeline.
+
+The conversion process for each paper took approximately 1 minute on a workstation computer with 24 GB of RAM and dual 2.53 GHz Xeon processors, and the entire sample corpus of 5,055 papers took approximately 90 hours to process, producing ~1.9 million small .jpg images, which generated ~54 GB of total image data. Each paper can be processed independently, so the pipeline is well suited for parallelization or computation on remote clusters or servers.
+
+#### 4.3.3 Paper Layout
+
+Papers are composited using one of two layout templates (Fig. 16): if the paper has 5 or fewer pages, it uses the 5-page template, and otherwise it uses the 10-page layout. This version of the system did not support papers with more than 10 pages, but it would not be difficult to extend this pattern one more level to a 17-page layout (1 large first page, and a 4-by-4 grid for subsequent pages).
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_5_929_142_685_229_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_5_929_142_685_229_0.jpg)
+
+Fig. 16. Sample 5-page (left) and 10-page layouts (right).
+
+We chose to combine all pages of each paper into a single image object before creating the image pyramid as a performance optimization to limit the number of individual objects the system would need to display at any one time. Alternative strategies will be discussed as future work.
+
+## 5 EVALUATION
+
+Quantifying the benefits of information visualization systems is notoriously tricky [34].
To gain insights and usage observations related to our system, we ran two evaluations: a small controlled session to collect initial user feedback, and then a broad, long-term external deployment.
+
+### 5.1 Initial User Feedback
+
+We conducted a qualitative user study to evaluate the features and usability of the Paper Forager system. We wanted to collect initial feedback from users, and validate that some simple (and not so simple) tasks can be accomplished by users in a reasonable amount of time. We recruited 6 participants who were taking an HCI course at a local university (4 male, 2 female, ages 21-24). These students had recently completed a project which required them to gather references for an HCI topic of their choice. As such, they were ideal candidates to give feedback on our system and provide a comparative analysis of Paper Forager to the systems and strategies that they had independently used for their literature reviews.
+
+The feedback sessions began with a 5-minute overview demonstrating the main features of the system, after which the participants explored the system on their own for an additional 5 minutes. The sessions concluded with the participants completing a series of 8 tasks, of generally increasing difficulty (Fig. 17).
+
+The tasks were devised such that some could likely be accomplished with a standard digital library search system, some would benefit from faceted searching capabilities, and three of them (c, e, and h) would be prohibitively difficult to accomplish without the added capabilities afforded by the Paper Forager system. The goal of the tasks was to encourage the participants to try different aspects of the system rather than cover all possible use cases for the application. After completing the tasks, participants were asked for thoughts about the system and suggestions for improvements.
+
+### 5.2 Results
+
+All 6 users were able to complete the 8 tasks.
While the tasks were not specifically designed to test the speed of using the Paper Forager system compared to traditional digital libraries, task completion times were recorded to see the range of completion times for the various tasks across the set of participants.
+
+Mean task completion times ranged from 33 seconds (task 1) to 3 minutes and 45 seconds (task 8). The longer time for the last task was due to participants not always knowing which part of the paper to read in detail to find the necessary information (Fig. 17). In addition to the 6 study participants, a Paper Forager user with approximately 3 hours of experience was asked to perform the tasks to benchmark expert performance levels of these tasks.
+
+It is interesting to note that for tasks 1 through 7, the fastest times from the "novice" study participants after their brief introduction to the system are similar to the completion times from the "expert" user, suggesting that some of the novice users were becoming proficient with using the system after only a short amount of time. In the comments section of the survey, half (3 of 6) of the participants mentioned that their favourite feature was the ability to string together multiple queries with the "+" operator, and 2 of 6 commented that they particularly liked that they could see thumbnails for the referenced and cited papers in the paper view. During the 5-minute exploration phase, all participants experimented with the dynamic zooming and panning functionality using the mouse. However, during the tasks, they chose to use the search/filter/click interaction style. Additional features which were requested included auto-completion in the search field, additional conferences in the corpus, and more social sharing capabilities. Overall, participants were extremely enthusiastic about the system, and were hopeful that it would be publicly released so they could continue to use it.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_6_163_220_708_727_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_6_163_220_708_727_0.jpg)
+
+Fig. 17. Task completion times for the 6 study participants, as well as times from one 'expert' user.
+
+![01963e9f-6de7-7149-9afd-2b2b86b6494e_6_197_1460_656_246_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_6_197_1460_656_246_0.jpg)
+
+Fig. 18. Images used in questions 3 (left) and 8 (right).
+
+Given the overall positive feedback on the system and the confirmation that users of the system would be able to complete some useful tasks, we decided to go forward with a broader deployment.
+
+### 5.3 External Deployment
+
+To gain additional feedback and in-the-wild usage data, as well as to validate the deployability of our cloud-based architecture, we conducted a long-term external deployment of the system. To maintain compliance with ACM copyright policies (as the papers used in the system are from ACM CHI and ACM UIST), access to Paper Forager was restricted to users with a private ACM account with access permissions for CHI and UIST papers, and to IP ranges with a site license to the ACM Digital Library (such as most post-secondary institutions). The system was deployed and available for use continuously over a 2-year period.
+
+### 5.4 Usage Data and Feedback
+
+Over the 24-month deployment period, 493 log-in events were registered from 153 unique users, with 49 of the users logging into the system more than once. There were a number of "regular" users, with 20 users logging into the system more than 5 times each, and 11 users logging more than 100 minutes of active usage. A total of 1,887 papers were viewed in "paper view mode" (Fig. 10) and 1,851 searches were performed over the course of the deployment.
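Aggregate statistics like these, and the per-user ratios analysed in the next section, can be computed from the raw event log. This is a minimal sketch assuming a hypothetical (user, event) record format, not the actual SimpleDB log schema.

```python
from collections import Counter, defaultdict

# Hypothetical event names: "search" counts as searching, while viewing
# collections and inspecting tooltips count as browsing; paper-view and
# page-view events serve as proxies for scanning and reading.
SEARCH_EVENTS = {"search"}
BROWSE_EVENTS = {"view_collection", "tooltip"}

def usage_dimensions(log):
    """Return, per user, (search/browse ratio, paper-view/page-view ratio)."""
    counts = defaultdict(Counter)
    for user, event in log:
        counts[user][event] += 1
    dims = {}
    for user, c in counts.items():
        searches = sum(c[e] for e in SEARCH_EVENTS)
        browses = max(1, sum(c[e] for e in BROWSE_EVENTS))  # avoid div-by-zero
        pages = max(1, c["page_view"])
        dims[user] = (searches / browses, c["paper_view"] / pages)
    return dims

log = [("u1", "search"), ("u1", "paper_view"), ("u1", "page_view"),
       ("u2", "tooltip"), ("u2", "tooltip"), ("u2", "paper_view")]
dims = usage_dimensions(log)   # u1 leans toward searching; u2 toward browsing
```

Plotting each user's pair of ratios (with point size proportional to total event count) yields a chart of the kind discussed below.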
+
+#### 5.4.1 Types of Usage
+
+Since Paper Forager was designed to support the various stages of the literature review process (Finding, Scanning, and Reading), we analysed the log data to see if people were using the system in different ways, or if all users were using the system in a similar manner. To do this, we looked at usage along two dimensions: Browsing vs. Searching, and Scanning vs. Reading.
+
+#### 5.4.2 Browsing vs. Searching (Methods of "Finding")
+
+In this dimension we are looking at different ways users can locate potentially relevant papers during the finding phase of the literature review process. Using the system in a more "browsing" manner would involve looking at collections of papers, following citation or reference links, and reading many tooltips. Alternatively, a more "search-based" approach to the finding process involves specifically entering search terms into the search field. This dimension is calculated as the user's ratio of "search" events to "browsing" (viewing collections, inspecting tooltips) events.
+
+#### 5.4.3 Scanning vs. Reading
+
+In the scanning phase of the literature review process, a user is quickly looking at papers to figure out if they are worth reading. In Paper Forager, a reasonable proxy for time spent in the scanning phase is looking at many papers in the overview paper view mode, while zooming in to view many single pages in the page view indicates time spent in the reading phase. This dimension is calculated as the ratio of "paper view" events to "page view" events. The 60 users of the system with multiple logins or more than 20 minutes of continuous usage have their activity plotted along these two dimensions in Fig. 19. Each point represents a user, with the size of the point proportional to the amount of activity for that user.
Each axis spans a 25x difference in behaviour; that is, users at the bottom of the chart looked at 25x more individual pages than users at the top, and users on the left side performed 25x more searches than those on the right. It is interesting and encouraging to see that users exhibited such a wide range of usage behaviours. Even among the most active users (those with larger circles) we can see that they are distributed around the plot, suggesting that the system can be successfully used for different stages of the review process. + +![01963e9f-6de7-7149-9afd-2b2b86b6494e_6_952_1350_707_499_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_6_952_1350_707_499_0.jpg) + +Fig. 19. Usage log analysis showing usage patterns for finding behavior (x-axis) and scanning vs. reading (y-axis). + +#### 5.4.4 Feedback and Suggestions + +At the end of the deployment each user who logged into the system was sent a short voluntary questionnaire (30 of 153 responded, a 20% response rate), where they were asked to answer five questions on a 5-point Likert scale (Strongly Disagree, Somewhat Disagree, Neither Agree Nor Disagree, Somewhat Agree, Strongly Agree): + +- I found Paper Forager easy to use. + +- I found Paper Forager enjoyable to use. + +- Paper Forager is a more effective way to research papers than the techniques/systems I have been using previously. + +- Paper Forager is an efficient way to explore research papers. + +- If kept up to date with papers in my field, I would use Paper Forager to explore research papers in the future. + +The first four questions were based on the criteria outlined by Jeng [35] on which factors contribute to the usability of a digital library system (learnability, satisfaction, effectiveness, and efficiency). Results are shown in Fig. 20.
In general, users felt Paper Forager was easy and enjoyable to use, and a majority of users said that, if kept up to date with papers in their field, they would continue to use Paper Forager in the future. Besides the subjective questions, users were also asked to provide details about features they liked and suggestions for improvement. + +![01963e9f-6de7-7149-9afd-2b2b86b6494e_7_142_981_715_372_0.jpg](images/01963e9f-6de7-7149-9afd-2b2b86b6494e_7_142_981_715_372_0.jpg) + +Fig. 20. Results from the subjective questions asked after the external deployment. + +Several users relayed interesting ways in which they used the system. One Ph.D. student was writing his first paper with a new supervisor and wanted to ensure that his paper followed the general conventions that the supervisor had used in the past. By searching for the supervisor's name and rapidly flipping through his previous papers the student was able to get the answer to a number of questions about the supervisor's style: + +How many figures does he usually include in a paper? Does he dock figures at the top and bottom of columns, or does he float them in the middle? Does he like using long figure captions? Does he use a particular color scheme for charts? How often does he include an explicit "Contributions" section? How does he typically word his conclusions? + +Before the student had access to Paper Forager he was looking at a single example paper of the supervisor's to try and answer these questions; it was too much work to download and look at all of the supervisor's papers individually. With Paper Forager, each of these tasks took a very short amount of time and effort. + +Another user (a 3D user interface researcher) mentioned they used Paper Forager not only in the process of writing papers, but for other tasks as well: + +It is so extremely fast and easy to search various topics.
You get an idea of what has been in a field, dig for follow up papers (in-depth search) or other related papers (breadth search). I have even used it to find the best reviewers for a paper, or find relevant researchers on any topic (committees, collaborations, etc.) This is just how the digital library should look! This tool has saved me HUGE amounts of time. + +Finally, a grad student finishing up their Ph.D. mentioned that Paper Forager changed the way they approached writing papers: + +It allowed me to rapidly compare papers to get a sense of structure and style. For instance, when I was writing my own paper, I would quickly look at several examples from related papers to understand what was the typical approach. + +The responsiveness also allowed me to view more related papers. With Google Scholar or the ACM DL, it's often several clicks to view papers, and I have to download the PDF first; with Paper Forager I can quickly look at a paper and decide whether it's relevant, so I would actively look at more papers than I would have otherwise. + +A common issue with the system was that it covered too few conferences, and users wanted the collection expanded to cover more of their interests. Several users (particularly those with slower computers and larger monitors) had trouble with performance, finding the interface not as responsive as they would have liked. + +Many users mentioned liking that the references and citations were prominently displayed in the side panel of the paper view, and suggested that the links between related papers could be emphasized even further by showing the relationships in the main collection view. A number of users said they liked the collection view as they often remember papers by their "Fig.
1"; however, for very large collections (such as the entire 5,055-paper collection shown on launch), some users felt the view was not very helpful and suggested alternatives, such as using a different view of the papers when they are displayed very small that could more clearly convey relevant information. + +## 6 Discussion & Future Work + +The Paper Forager system was designed and optimized to work with collections on the order of 10,000 research documents. It will be interesting to look at how the interaction model should change for much larger collections of papers (an entire digital library for example), as well as how the performance of the system would be affected. Additionally, we would like to explore using the system with other collections of documents with citation networks such as patent applications or court proceedings. + +Related to the system performance, Paper Forager combines all pages of each paper into a single image object. It would also be interesting to explore the design opportunities that would arise from storing each page of the paper individually. This would allow for more varied arrangements such as selectively showing only the first page of a paper, arranging the pages of each paper in a row, or highlighting the pages with the most figures. In the time since the system was first developed, Silverlight as a technology has become less well supported (notably, the Silverlight plug-in will no longer run in the Chrome browser). Re-engineering the system as an HTML5/JavaScript web application would be worthwhile. + +To preserve the design and layout work the authors put into creating their papers, we maintained the formatting from the original document. However, we are interested in exploring different representations for the papers when they are displayed at small sizes, such as those explored in previous work [26], [36], [37].
It would also be interesting to consider automated approaches for determining good miniaturized representations of research papers and other types of documents. + +We would also like to look at ways of annotating the thumbnail images to show aspects of the metadata such as the number of citations or which papers have been saved the most often. A coloring technique similar to the one used in AppMap [38], where the thumbnails are shaded based on one variable and sorted by another, could lead to interesting discoveries. The searching and filtering capabilities of Paper Forager were purposefully simplified to improve the approachability of the system, but it would be useful to explore combining the visual aspects of Paper Forager with an advanced paper filtering system such as ASE [3], [4] or a visualization of the citation space such as Citeology [23]. + +Using an image format to display papers has some downsides compared to viewing the actual PDF file, even when the image is at a high resolution. For example, users are unable to select text from a paper in Paper Forager. We believe there is great potential in a hybrid system where multi-scale images would be used to immediately display the paper while the PDF file loads in the background. Once the PDF is loaded, it could seamlessly replace the multi-scale image representation. + +The ACM paper template contains the guidance "Please read previous years' proceedings to understand the writing style and conventions that successful authors have used." We agree that this is a useful, although laborious, task for prospective authors, and hope that Paper Forager could serve as a mechanism to simplify this process. + +## 7 CONCLUSION + +With Paper Forager we have created a cloud-based system which allows users to rapidly explore a collection of research articles. Our tests of the system produced positive feedback from users, who overall agreed that Paper Forager was easy and enjoyable to use while being effective and efficient.
We believe our work fills an important gap in existing systems for exploring document collections, allowing users to seamlessly transition between finding, scanning, and reading documents of interest. We hope our work can inspire future research and development in the area. + +## REFERENCES + +[1] A. Aris, B. Shneiderman, V. Qazvinian, and D. Radev, "Visual overviews for discovering key papers and influences across research fronts," J. Am. Soc. Inf. Sci. Technol., vol. 60, no. 11, pp. 2219-2228, Nov. 2009. + +[2] C. Dunne, N. Henry Riche, B. Lee, R. Metoyer, and G. Robertson, "GraphTrail: Analyzing Large Multivariate, Heterogeneous Networks While Supporting Exploration History," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2012, pp. 1663-1672. + +[3] C. Dunne, B. Shneiderman, R. Gove, J. Klavans, and B. Dorr, "Rapid Understanding of Scientific Paper Collections: Integrating Statistics, Text Analytics, and Visualization," J. Am. Soc. Inf. Sci. Technol., vol. 63, no. 12, pp. 2351-2369, Dec. 2012. + +[4] R. Gove, C. Dunne, B. Shneiderman, J. Klavans, and B. Dorr, "Evaluating visual and statistical exploration of scientific literature networks," in 2011 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 2011, pp. 217-224. + +[5] B. Lee, M. Czerwinski, G. Robertson, and B. B. Bederson, "Understanding Research Trends in Conferences Using paperLens," in CHI '05 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2005, pp. 1969-1972. + +[6] J. Zhao, C. Collins, F. Chevalier, and R. Balakrishnan, "Interactive Exploration of Implicit and Explicit Relations in Faceted Datasets," IEEE Trans. Vis. Comput. Graph., vol. 19, no. 12, pp. 2080-2089, Dec. 2013. + +[7] "Deep Zoom | Features | Microsoft Silverlight." [Online]. Available: http://www.microsoft.com/silverlight/deep-zoom/. [Accessed: 22-Sep-2015]. + +[8] M.
Hearst, "UIs for Faceted Navigation: Recent Advances and Remaining Open Problems," in International Journal of Machine Learning and Computing, 2008, vol. 1, pp. 337-343. + +[9] B. Lee, G. Smith, G. G. Robertson, M. Czerwinski, and D. S. Tan, "FacetLens: Exposing Trends and Relationships to Support Sensemaking Within Faceted Datasets," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 1293-1302. + +[10] N. Cao, J. Sun, Y.-R. Lin, D. Gotz, S. Liu, and H. Qu, "FacetAtlas: Multifaceted Visualization for Rich Text Corpora," IEEE Trans. Vis. Comput. Graph., vol. 16, no. 6, pp. 1172-1181, Nov. 2010. + +[11] S. K. Card, G. G. Robertson, and W. York, "The WebBook and the Web Forager: An Information Workspace for the World-Wide Web," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1996, p. 111-. + +[12] L. Hong, S. K. Card, and J. (JD) Chen, "Turning Pages of 3D Electronic Books," in Proceedings of the 3D User Interfaces, Washington, DC, USA, 2006, pp. 159-165. + +[13] H. Strobelt, D. Oelke, C. Rohrdantz, A. Stoffel, D. A. Keim, and O. Deussen, "Document cards: A top trumps visualization for documents," IEEE Trans. Vis. Comput. Graph., vol. 15, no. 6, pp. 1145-1152, 2009. + +[14] A. Girgensohn, F. Shipman, F. Chen, and L. Wilcox, "DocuBrowse: Faceted Searching, Browsing, and Recommendations in an Enterprise Context," in Proceedings of the 15th International Conference on Intelligent User Interfaces, New York, NY, USA, 2010, pp. 189-198. + +[15] B. B. Bederson, "PhotoMesa: A Zoomable Image Browser Using Quantum Treemaps and Bubblemaps," in Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 2001, pp. 71-80. + +[16] "PivotViewer | Features | Microsoft Silverlight." [Online]. Available: http://www.microsoft.com/silverlight/pivotviewer/. [Accessed: 22-Sep-2015]. + +[17] J. L. Howland, T. C. Wright, R. A. Boughan, and B. C.
Roberts, "How Scholarly Is Google Scholar? A Comparison to Library Databases," Coll. Res. Libr., vol. 70, no. 3, pp. 227-234, May 2009. + +[18] "Dashboard | Mendeley." [Online]. Available: https://www.mendeley.com/dashboard/. [Accessed: 22-Sep-2015]. + +[19] C. L. Giles, K. D. Bollacker, and S. Lawrence, "CiteSeer: An Automatic Citation Indexing System," in Proceedings of the Third ACM Conference on Digital Libraries, New York, NY, USA, 1998, pp. 89-98. + +[20] "Microsoft Academic Search." [Online]. Available: http://academic.research.microsoft.com/. [Accessed: 22-Sep-2015]. + +[21] J. R. White, "On the 10th Anniversary of ACM's Digital Library," Commun ACM, vol. 51, no. 11, pp. 5-5, Nov. 2008. + +[22] A. Medlar, K. Ilves, P. Wang, W. Buntine, and D. Glowacka, "PULP: A System for Exploratory Search of Scientific Literature," in Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, New York, NY, USA, 2016, pp. 1133-1136. + +[23] J. Matejka, T. Grossman, and G. Fitzmaurice, "Citeology: visualizing paper genealogy," in CHI'12 Extended Abstracts on Human Factors in Computing Systems, 2012, pp. 181-190. + +[24] F. Heimerl, Q. Han, S. Koch, and T. Ertl, "CiteRivers: Visual Analytics of Citation Patterns," IEEE Trans. Vis. Comput. Graph., vol. PP, no. 99, pp. 1-1, 2015. + +[25] A. Ponsard, F. Escalona, and T. Munzner, "PaperQuest: A Visualization Tool to Support Literature Review," in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2016, pp. 2264-2271. + +[26] "ZUIST - Homepage." [Online]. Available: http://zvtm.sourceforge.net/zuist/. [Accessed: 22-Sep-2015]. + +[27] P. Pirolli and S. Card, "Information Foraging in Information Access Environments," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1995, pp. 51-58. + +[28] B. B. Bederson, "Interfaces for Staying in the Flow," Ubiquity, vol. 2004, no.
September, pp. 1-1, Sep. 2004. + +[29] J. Brandt, M. Dontcheva, M. Weskamp, and S. R. Klemmer, "Example-centric Programming: Integrating Web Search into the Development Environment," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2010, pp. 513-522. + +[30] W. D. Gray and D. A. Boehm-Davis, "Milliseconds matter: An introduction to microstrategies and to their use in describing and predicting interactive behavior," J. Exp. Psychol. Appl., vol. 6, no. 4, pp. 322-335, 2000. + +[31] P. André, m. c. schraefel, J. Teevan, and S. T. Dumais, "Discovery is Never by Chance: Designing for (Un)Serendipity," in Proceedings of the Seventh ACM Conference on Creativity and Cognition, New York, NY, USA, 2009, pp. 305-314. + +[32] A. Wexelblat and P. Maes, "Footprints: History-rich Tools for Information Foraging," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1999, pp. 270-277. + +[33] B. B. Bederson and J. D. Hollan, "Pad++: A zooming graphical interface for exploring alternate interface physics," in In Proceedings of User Interface and Software Technology, 1994. + +[34] C. Plaisant, "The Challenge of Information Visualization Evaluation," in Proceedings of the Working Conference on Advanced Visual Interfaces, New York, NY, USA, 2004, pp. 109-116. + +[35] J. Jeng, "What Is Usability in the Context of the Digital Library," Inf. Technol. Libr., vol. 24, no. 2, pp. 47-57, 2004. + +[36] J. Teevan et al., "Visual Snippets: Summarizing Web Pages for Search and Revisitation," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 2023-2032. + +[37] A. Woodruff, A. Faulring, R. Rosenholtz, J. Morrison, and P. Pirolli, "Using Thumbnails to Search the Web," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2001, pp. 198-205. + +[38] M. Rooke, T. Grossman, and G.
Fitzmaurice, "AppMap: Exploring User Interface Visualizations," in Proceedings of Graphics Interface 2011, School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada, 2011, pp. 111-118. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/QhN4tUZd8r/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/QhN4tUZd8r/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..14ba74d4bb8c534446d7182a7f04024b40309cb3 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/QhN4tUZd8r/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,357 @@ +§ PAPER FORAGER: SUPPORTING THE RAPID EXPLORATION OF RESEARCH DOCUMENT COLLECTIONS + +Author One, Author Two, Author Three + + <graphics> + +Fig. 1. Three views of the Paper Forager system: (A) the initial state of the system showing all 5,055 papers in the sample corpus from the ACM CHI and UIST conferences, (B) the filtered results showing only the papers containing an individual keyword, and (C) a sample paper overview page which further allows a user to click on a page to read the content. + +Abstract- We present Paper Forager, a web-based system which allows users to rapidly explore large collections of research documents. Our sample corpus contains 5,055 papers published at the ACM CHI and UIST conferences. Paper Forager provides a visually based browsing experience, allowing users to identify papers of interest based on their graphical appearance, in addition to providing traditional faceted search techniques.
A cloud-based architecture stores the papers as multi-resolution images, giving users immediate access to reading individual pages of a paper, thus reducing the transaction cost between finding, scanning, and reading papers of interest. Initial user feedback sessions elicited positive subjective feedback, while a 24-month external deployment generated in-the-wild usage data which we analyze. Users of the system indicated that they would be enthusiastic to continue having access to the Paper Forager system in the future. + +Index Terms-literature review, document search, document browsing, corpus visualization + +§ 1 INTRODUCTION + +Literature reviews can be a long and tedious task requiring information seekers to sort through a large number of documents and follow extended chains of related research. With paper proceedings, users can easily scan and read any of the papers, but finding specific papers can be difficult. + +In contrast, online digital libraries and search systems improve the ability to find specific papers of interest. A number of new systems have been developed [1]-[6] which provide advanced faceted search and filtering capabilities. However, these systems are driven by metadata and textual content and ignore visual qualities such as figures, graphics, layout, and design. Furthermore, such systems require the user to download the source PDF file before the paper can be read in detail. We seek a single system that can support a continuous transition between finding, scanning, and reading documents within a corpus. + + * Submitted to GI 2021 + +Web technologies such as DeepZoom [7] and Google Maps support browsing of extremely large image-based data sets through the progressive loading of multi-resolution images. This type of architecture is beneficial in that it gives users rapid access to detailed content. However, we are unaware of any prior systems which have used such an architecture for document exploration. 
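For context, the progressive loading behind DeepZoom-style systems rests on a simple image pyramid: level 0 is a single pixel, each level doubles the resolution until the full image is reached, and every level is cut into fixed-size tiles so a viewer only fetches what the current zoom requires. A minimal sketch of that arithmetic (the 256-pixel tile size is the DeepZoom default; the function itself is our own illustration, not code from the system):

```python
import math

def pyramid_levels(width, height, tile_size=256):
    """List (level, w, h, cols, rows) for a DeepZoom-style image pyramid.

    Level 0 is 1x1; each subsequent level doubles until the full image
    resolution is reached, and each level is cut into tile_size tiles.
    """
    max_level = math.ceil(math.log2(max(width, height)))
    levels = []
    for level in range(max_level + 1):
        scale = 2 ** (max_level - level)
        w = max(1, math.ceil(width / scale))
        h = max(1, math.ceil(height / scale))
        levels.append((level, w, h, math.ceil(w / tile_size), math.ceil(h / tile_size)))
    return levels
```

Because only the tiles for the visible level are requested, rendering thousands of paper thumbnails transfers far fewer bytes than downloading the corresponding source PDFs.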
+ +In this paper we present Paper Forager, a system to support the rapid filtering and exploration of a collection of research papers. Paper Forager relies on a cloud-based architecture, storing the papers as multi-resolution images that can be progressively downloaded on-demand. By using this architecture, we allow the user to transition from browsing an entire corpus of thousands of papers, to reading any individual page within that corpus, within seconds. In doing so, we accomplish our goal of reducing the transaction cost between finding, scanning, and reading papers of interest. + +Our main research contribution is the development of a novel system for literature review, which synthesizes previously explored concepts such as faceted search and zooming-based interfaces. We present the design and implementation of Paper Forager and its associated architecture, implemented on a sample corpus of 5,055 papers from the ACM CHI and UIST conferences. Additionally, we present results gathered from initial user feedback and a 24-month external deployment of the system. Users of the system felt it was easy and enjoyable to use, and the majority indicated that they would like to continue using Paper Forager in the future. + +§ 2 RELATED WORK + +§ 2.1 FACETED SEARCH + +Faceted search allows users to explore a collection by filtering on multiple dimensions. While powerful, representing all of the available options in a user interface can be problematic [8]. Many papers have looked at improving the faceted searching experience. FacetLens [9] represents facets as nested areas on the interface and FacetAtlas [10] displays the relationships between related facets through a weighted network diagram and colored density map. PivotSlice [6] uses a collection of research papers as a sample corpus, and allows users to explore relationships between facets using direct manipulation.
The faceted search system in Paper Forager is designed to be more approachable for new users than the above systems, at the expense of being less versatile in the types of queries which can be performed. + +§ 2.2 VISUAL DOCUMENT BROWSING + +There have been numerous research projects exploring visual approaches to browsing collections of documents. + +The WebBook and Web Forager [11] pre-loaded and rendered web pages so they could be rapidly flipped through, and more recently, Hong et al. [12] looked at improving the digital page-flipping experience. Document Cards [13] extracts important terms and images from a document and displays them in compact representations. + +The DocuBrowse system [14] is designed to browse and search for documents in large online enterprise document collections. Similar to Paper Forager, DocuBrowse includes both a faceted search interface and visual thumbnails of results. While source content can be opened, it is not clear how long it would take to download and view an individual document. Paper Forager expands upon ideas from the DocuBrowse interface, and uses a cloud-based architecture to support rapid viewing through the progressive loading of multi-resolution images. Paper Forager also takes advantage of connections between documents, such as citation networks, while DocuBrowse supports a wider range of file types without examining their interconnectivity. + +While not directly related to document browsing, the PhotoMesa system [15] allows zooming into a large number of images which are grouped and sorted by available metadata. Similarly, the Pivot Viewer component of the Silverlight framework [16] supports faceted searching of a collection of images based on associated metadata. Results are displayed using a dynamically resizing grid of images, using the Silverlight Deep Zoom technology [7]. We are unaware of attempts to use this type of technology for the exploration of research document collections.
Paper Forager implements an architecture similar to Pivot Viewer, but with a customized design and interface for the purpose of rapidly exploring a corpus of research literature. + +§ 2.3 RESEARCH LITERATURE EXPLORATION TOOLS + +There are many deployed systems which provide search access to collections of research papers, including Google Scholar [17], Mendeley [18], CiteSeerX [19], Microsoft Academic Search [20], and the ACM Digital Library [21]. For a thorough analysis, readers are directed to Gove et al.'s evaluation of 14 such systems [4], which highlights the strengths and weaknesses of each system. + +There are also research systems which have looked at the topic of research literature exploration. Aris et al. [1] and PaperLens [5] are visualization tools which look at paper metadata to show temporal patterns of paper publication, and each uses citation links among papers to explore a field's rate of growth and identify key topics. Along similar lines, the PULP system [22] uses reinforcement learning to find and present a visualization of how the topics in a corpus of research papers have changed over time. + +GraphTrail [2] is a system for exploring large general-purpose networked datasets, and used a corpus of ACM CHI papers as a sample database. GraphTrail supports the piecewise construction of complex queries while keeping a history of the steps taken, which allows for easy backtracking and modification of earlier stages. Systems such as Citeology [23] and CiteRivers [24] support exploring scientific literature through their citation networks and patterns, with CiteRivers also including additional data about the document contents. PaperQuest [25] aims to help researchers make efficient decisions about which papers to read next by displaying the minimum amount of relevant information, and considering papers for which the researcher has already displayed an interest. + +Another research exploration tool is the Action Science Explorer (ASE) [3], [4].
The ASE system uses a citation network visualization in the center of the interface and makes use of citation sentence extraction, ranking and filtering by network statistics, automatic document clustering and summarization, and reference management. + +The main difference between Paper Forager and the above systems is that while these existing systems all perform some amount of analysis, visualization, or filtering based on the metadata or text of a paper, they hide the design, layout, and images of the actual research documents. Furthermore, with existing systems, users must wait until the document is downloaded before reading the paper in detail. Paper Forager provides a basic level of faceted metadata searching along with emphasizing the visual content of the documents, and provides immediate access to reading individual pages of the documents. + +An example of a visually-focused research exploration tool is the UIST Archive Explorer [26] which was created for the 20th anniversary of the UIST conference and provided an interface for browsing the collection of papers previously published at UIST. Papers could be viewed by year, keyword, or author. Selecting a paper caused the pages of the paper to be arranged in a row and the user could zoom in for more details. Compared to Paper Forager, the UIST Archive Explorer used a smaller corpus of documents (578 vs. 5,055), was hosted locally (whereas Paper Forager uses a cloud-based architecture), and did not allow for navigation between papers based on their citation networks. + +§ 3 THE LITERATURE REVIEW PROCESS + +The theory of information foraging [27] suggests that information seekers try to find documents with potentially high value and then use the available informational "scent" cues to determine which documents, if any, are worthwhile to examine further.
We can thus think about the process of literature review being composed of three main stages: + +Finding: filtering the collection of all possible papers down to those you might want to read, either by browsing the collection, or explicitly searching. + +Scanning: making a decision for each individual paper as to whether it is worthwhile to read based on the available information scent cues. + +Reading: looking through the content of the paper for useful information. + +In order to maintain flow [28] during the literature review process, it is desirable for the transitions between the stages to be as smooth as possible. Research exploring the dynamics of task switching [29], [30] has shown that small interaction improvements can cause categorical behavior changes that far exceed the benefits of decreased task times. + +When papers were primarily distributed in printed proceedings, the finding phase of the process was inefficient. However, once a collection of possibly relevant papers was found, the process of scanning the papers consisted of flipping through the pages. The informational scent cues [27] presented to the information gatherer to make a reading decision consisted of what was visible in the printed form of the paper - namely the title, text, figures, and the paper's overall graphic design and layout. Based on these cues, a decision to read or not would be made, and the cost of transitioning between the scanning and reading phases was minimal (Fig. 2). + +With digital libraries the finding phase of the process is much more efficient, and the transition cost between finding and scanning is greatly reduced. However, the available informational scent cues presented during the scanning phase were reduced to basic textual information such as the title, authors, and sometimes the abstract of the paper.
Advanced paper browsing tools such as ASE [3] provide additional functionality in the finding phase as well as incorporating additional scent cues to inform the reading decision such as visualizations and statistical measures of keywords, authorship, and citation networks. But still, the images and visual design of the original paper are not available to the researcher during the scanning phase; the graphics of a paper are not visible until after the decision has been made to move from scanning to reading. Additionally, the transaction cost when deciding to read a paper is relatively high: the paper needs to first be downloaded, which even on a fast network can often take between 3 and 15 seconds, and then it is opened for reading in a secondary application (or at least a new window within the same application). Besides the time cost, the context switch to a secondary application can disrupt the flow of the information gathering process. + + <graphics> + +Fig. 2. Four main approaches to paper discovery and the context switches required between the various stages of the literature review process. + +§ 3.1 DESIGN GOALS + +With Paper Forager, we want to take the quick searching and filtering benefits of modern advanced paper discovery systems and combine them with the visual qualities and benefits of paper proceedings. Additionally, we want to reduce the cost of transitioning between stages (Fig. 2), which will improve the flow of the literature review process and encourage a wider exploration of the paper space. By supporting more exploration, the system may put users in a position to make more serendipitous discoveries [31]. + +§ 4 PAPER FORAGER + +We created Paper Forager to address the problems encountered while exploring large collections of research papers. As a sample corpus we used 5,055 papers published at the ACM CHI and UIST conferences.
The metadata was collected using the Microsoft Academic Search API [20] and the source documents were automatically downloaded using links from Google Scholar where possible and manually downloaded from the ACM DL otherwise. + +The Paper Forager interface is composed of a set of interface controls at the top of the screen, and a main display area below. On startup, Paper Forager arranges all documents in the collection in the main display area, sorted with the oldest papers at the top and the newest at the bottom (Fig. 1A). + +§ 4.1 INTERFACE CONTROLS + +Along the top of the window are the interface controls for refining the displayed collection of papers, which include the search field, histogram filters, author list, history bar, and saved paper controls (Fig. ). + +§ 4.1.1 SEARCH FIELD + +On the left is the search field (Fig. 3) which initiates keyword searches of the titles and abstracts of the papers, as well as searches for authors and conference titles. The search system will automatically recognize author and conference names. For example, a search for "database" would find all papers with the term "database" in the title or abstract (Fig. 1B), whereas a search for "Buxton" would be recognized as an author search for "William Buxton" and would find all papers published by that author. Additionally, searching for "CHI" or "UIST" will return all papers published at the respective conference, and adding a year to the end of a search term, such as "CHI 2007", modifies the filters to show only the papers from the 2007 edition of the CHI conference. + +By default, entering a term in the search field will perform a new query using the entire collection as input, but prefacing a search term with a plus sign (+) creates an additive search filter. For example, if after searching for "Buxton" the user searches for "+mouse", only papers authored by William Buxton which include the term "mouse" will be displayed.
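As a rough illustration of the query interpretation described above, the following sketch classifies a raw search string. The lookup tables, function name, and return format are our own assumptions; the paper does not describe its implementation.

```python
import re

# Hypothetical stand-ins for the system's author and venue indexes.
KNOWN_AUTHORS = {"buxton": "William Buxton"}
KNOWN_VENUES = {"CHI", "UIST"}

def interpret_query(text):
    """Classify a raw search string into a structured query dict."""
    additive = text.startswith("+")   # "+mouse" filters the previous result set
    if additive:
        text = text[1:].strip()

    # "CHI 2007" -> venue search restricted to a single year.
    year = None
    match = re.fullmatch(r"(\w+)\s+(\d{4})", text)
    if match and match.group(1).upper() in KNOWN_VENUES:
        text, year = match.group(1), int(match.group(2))

    if text.upper() in KNOWN_VENUES:
        kind, term = "venue", text.upper()
    elif text.lower() in KNOWN_AUTHORS:
        kind, term = "author", KNOWN_AUTHORS[text.lower()]
    else:
        kind, term = "keyword", text

    return {"kind": kind, "term": term, "year": year, "additive": additive}
```

A leading plus sign simply marks the query as additive, so the caller can intersect its results with the previously displayed collection rather than the full corpus.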
+ +§ 4.1.2 HISTOGRAM FILTERS + +Beside the search field are histogram filters displaying the number of papers published in each year and the relative distribution of the number of citations each paper has received (Fig. 4). Users can click the Year and Citations headings to set the sorting order of the papers in the main display area. As search events occur, the histograms dynamically update and animate to reflect the distribution for the actively displayed grid of papers. Under each histogram is a dual value slider which allows the selection of displayed papers to be limited to a specific range of years or number of citations. + + < g r a p h i c s > + +Fig. 4. (A) Histogram filters and Author List for all papers in the CHI and UIST corpus and (B) after searching for the term "tangible". + +§ 4.1.3 AUTHOR LIST + +To the right of the filter histograms is a list of the top authors of the papers within the current search results (Fig. 3). For example, Fig. 4A shows that Ravin Balakrishnan has the most papers overall in the database, while Fig. 4B shows that Hiroshi Ishii has the most papers for the search term "tangible". Clicking on an author name is equivalent to creating an additive search for the author, so in Fig. B, clicking on "Scott Klemmer" is equivalent to entering "+Scott Klemmer" in the search field, and will result in showing all papers for the term "tangible" which have Scott Klemmer as an author. + +§ 4.1.4 HISTORY BAR + +Previous research has demonstrated the benefits of keeping a history of actions during information foraging [2], [32]. + + < g r a p h i c s > + +Fig. 3. The interface controls of Paper Forager. + +The history bar in Paper Forager is designed for this purpose and provides a way for users to see how they arrived at their current view and the ability to easily backtrack if desired.
+ + < g r a p h i c s > + +Fig. 5. History tokens for (A) search terms, (B) conferences, (C) authors, (D) saved paper lists, (E) individual papers, (F) references of a paper, (G) citations of a paper, and tokens with filters applied (H-K). + +Each type of search event has its own history token icon (Fig. 5, A-G) and as the histogram filter sliders are adjusted, the ranges are displayed beside the description of the active search (Fig. 5, H-K). The number of results matching the query is displayed in square brackets at the end of the history token. + + < g r a p h i c s > + +Fig. 6. Initial state of the history bar (A) and changes after a series of operations: (B) searching for "mouse", (C) clicking on the author Brad Myers, (D) adjusting the year and citation filters, (E) selecting a paper, (F) viewing that paper's citations, and (G) selecting another paper. + +Each search or filtering event is accompanied by a new token in the history bar (Fig. 6).
As the list of tokens grows longer, the previous ones are minimized to show only their icon and their full description is displayed in a tooltip. + +Inserted between the history tokens are three different separation symbols (Fig. 7): a vertical line when the new state is independent from the previous one, a plus sign when an additive query is entered, and a right-facing arrow when looking at references or citations of a particular paper. Clicking on a token in the history list will remove all subsequent query events, leaving the clicked token as the active search state. The tokens also include an 'x' button to remove the query from the history list. + + < g r a p h i c s > + +Fig. 7. History token separators. + +§ 4.1.5 SAVED PAPER CONTROLS + +Paper Forager allows users to mark papers as saved. The collection of the user's saved papers, as well as all papers saved by the user community, can be accessed through links in the top right corner (Fig. 3). Besides accessing the collection of saved papers for viewing, clicking the "Reference List" button copies a formatted list of paper references suitable for a "References" section of a paper to the user's clipboard. + +§ 4.2 MAIN DISPLAY AREA + +The main display area offers a collection view, a paper view, and a page view. + +§ 4.2.1 COLLECTION VIEW + +The collection view is used to display all papers that match the current query and filters. Papers within the collection are sized so that all results are initially within view. As searches are performed, the grid of papers is animated to remove those papers which do not satisfy the query and re-arrange those that do to fill the available space (Fig. 8). + + < g r a p h i c s > + +Fig. 8. Stages of the reordering animation. (A) initial state, (B) removed papers fade away, (C) remaining tiles move and resize into new position.
+ +The total animation time is 1.5 seconds, where the outgoing tiles fade out during the first 0.75 seconds, and the remaining tiles rearrange during the next 0.75 seconds. A similar animation occurs when papers not previously on the screen are added. + +As the cursor moves around the grid of displayed papers, the paper under the cursor highlights and a large tooltip is displayed with the paper's title, abstract, authors, year, conference, and number of citations (Fig. 9). Clicking on a paper will bring that paper into focus in the paper view. + + < g r a p h i c s > + +Fig. 9. Example of a paper tooltip. + +§ 4.2.2 PAPER VIEW + +Once a paper is selected, either by clicking on a single paper, or by executing a query with only one result, it is displayed in the paper view (Fig. 10). Here, the composite image of the paper is fit to the main canvas area, with additional metadata including the title, abstract, authors, venue, and year displayed on the right. A badge icon can be clicked to add the paper to the user's list of saved papers. Clicking an author's name will load all papers by that author (equivalent to searching for the author's name), and there is also a DOI link for the paper to view its official page in the ACM Digital Library. + +The lower section of the side panel contains thumbnails for each of the papers in the corpus which are referenced by the active paper, as well as all the papers which cite the active paper. Hovering over these thumbnails triggers the associated paper tooltip (Fig. 9) and clicking on a paper thumbnail adds the paper to the history bar and brings it into focus. Clicking on either of the "References" or "Citations" labels takes the system back to the collection view, displaying all of the referenced/cited papers. Below the paper image is a button to return to the paper collection view, as well as buttons to navigate to the previous and next papers in the current collection.
For example, after searching for "mouse" and selecting a paper, repeatedly clicking on "next paper" will let you flip through all papers for the term "mouse". This functionality is also accessible through the left and right arrow keys. + + < g r a p h i c s > + +Fig. 10. Interface elements of the single paper view. + +§ 4.2.3 PAGE VIEW + +Clicking on a single page animates the display to fit that page into the view (Fig. 11), allowing users to read individual pages. In this page view, the navigational controls and arrow keys change to support navigation between the pages of the document. + + < g r a p h i c s > + +Fig. 11. The page view displays individual pages. + +Once the last page in the paper is reached, the view zooms back to the paper view, and subsequent navigation operations will navigate at the paper level. This enables an efficient workflow of first flipping through papers, then going through the pages of an interesting paper, and then coming back out to flip through more papers (Fig. 12). The layout of the main window is designed so that on 24" or larger monitors the body text of the focused page is large enough to be read comfortably. For smaller monitors, or for more detailed examination of a portion of a page, the page view supports zooming and panning with the mouse wheel and left mouse button respectively. + + < g r a p h i c s > + +Fig. 12. Workflow for navigating between and within papers. (Note: "Paper B" has only 4 pages.) + +§ 4.2.4 PRELOADING IMAGES + +On a reasonably fast broadband internet connection it takes approximately 2 to 3 seconds to download and display a composite paper image (such as in Fig. ) on a 24" monitor. This is an unacceptable delay when trying to rapidly flip through a collection of papers. To address this, when a paper is brought into single paper view, the images for the previous and next papers are automatically downloaded and composited at the proper resolution so they can be immediately displayed when requested.
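The neighbour-preloading idea above can be sketched minimally (the function name and index-based data model are our own; the paper does not show code): when a paper enters the single paper view, the composites for its immediate neighbours are fetched in the background so that previous/next navigation displays them instantly.

```python
def papers_to_preload(collection_size, focused_index):
    """Indices of the previous and next papers, clipped to the collection bounds."""
    neighbours = []
    if focused_index > 0:
        neighbours.append(focused_index - 1)   # previous paper
    if focused_index < collection_size - 1:
        neighbours.append(focused_index + 1)   # next paper
    return neighbours
```

At either end of the collection only one neighbour exists, so only one background download is started.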
+ +§ 4.2.5 INTERACTION MODEL + +The intent of the Paper Forager design is to support a primary interaction model of searching or filtering for relevant documents, and then clicking on papers or pages to enlarge their view to see them in more detail. Additionally, similar to zooming user interfaces [33], the collection, paper, and page views support interactive zooming and panning. We anticipate that even though the system supports freeform panning and zooming, users will prefer, and gravitate towards, the search/filter/click interaction model. + +§ 4.3 SYSTEM IMPLEMENTATION + +Paper Forager is implemented as an in-browser application using the Microsoft Silverlight framework. During development, this allowed the application to be used in browsers on both Mac OS X and Windows computers with the Silverlight runtime installed. Due to recent changes to the plug-in architectures of major browsers, the Silverlight runtime is now limited to Internet Explorer on Windows. + +The components of the deployed system (Fig. 13) are hosted and stored using parts of the Amazon Web Services (AWS) framework. The application binaries, images, and metadata are stored on and hosted from an AWS Simple Storage Service (S3) instance. Usage log data and saved paper information are stored in separate AWS SimpleDB (SDB) tables. Due to cross-domain security policies which restrict communication of Silverlight applications, an AWS EC2 server hosts and interprets PHP scripts which facilitate communication between the application and the databases. + + < g r a p h i c s > + +Fig. 13. System architecture diagram. + +§ 4.3.1 IMAGE PYRAMIDS + +To enable fast streaming of papers over the internet and allow the papers to be viewed at a range of resolutions from very small thumbnails up to a large size suitable for reading, papers were converted into a collection of "image pyramids" following the Microsoft Deep Zoom file format [7].
Each document is rendered at 14 resolutions, from the smallest size of 1 pixel square, up to the original size of the image, in our case, 10,048 pixels wide by 6,098 pixels tall. At each resolution of the "pyramid", the images are divided into smaller "tiles" so that only the parts of the image which are needed at that resolution are downloaded (Fig. 14). + +We tried maximum tile sizes of 256, 512, and 1024 pixels and found that 512 pixel square tiles provided the best performance for the types of images streamed with our system. On the client side, a Silverlight MultiScaleImage component handles downloading and compositing the tiles to display the image at the requested resolution. + + < g r a p h i c s > + +Fig. 14. Image Pyramid data format example. + +§ 4.3.2 DATA PROCESSING + +The original PDF versions of the papers go through a multi-stage processing pipeline to convert them into their multi-scale image pyramid format (Fig. 15). First, the PDF files are split into individual pages and converted to JPG image files at 300 dpi. Using the "Data Sets" feature of Adobe Photoshop, composited PSD files are created combining all the pages of the paper into a single image (Fig. 16). The last step of the process involves converting the large combined JPG image into the image pyramid format. + + < g r a p h i c s > + +Fig. 15. Data processing pipeline. + +The conversion process for each paper took approximately 1 minute on a workstation computer with 24 GB of RAM and dual 2.53 GHz Xeon processors, and the entire sample corpus of 5,055 papers took approximately 90 hours to process, producing ~1.9 million small .jpg images, which generated ~54 GB of total image data. Each paper can be processed independently, so the pipeline is well suited for parallelization or computation on remote clusters or servers.
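The pyramid geometry described above can be sanity-checked with a short calculation. The power-of-two tiling rule below is the general Deep Zoom convention rather than code from the paper, and the paper's count of 14 resolutions suggests its level-counting convention differs slightly from the 1x1-to-full count computed here.

```python
import math

def pyramid_level_count(width, height):
    """Power-of-two levels from a 1x1 thumbnail up to the full image."""
    return math.ceil(math.log2(max(width, height))) + 1

def tiles_at_full_resolution(width, height, tile_size=512):
    """Number of tile images needed to cover the highest-resolution level."""
    return math.ceil(width / tile_size) * math.ceil(height / tile_size)

# For the 10,048 x 6,098 composites used here, the full-resolution level
# alone is covered by 20 x 12 = 240 tiles of 512 px.
```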
+ +§ 4.3.3 PAPER LAYOUT + +If the paper has 5 or fewer pages, it uses the 5-page template, and otherwise it uses the 10-page layout. This version of the system did not support papers with more than 10 pages, but it would not be difficult to extend this pattern one more level to a 17-page layout (1 large first page, and a 4-by-4 grid for subsequent pages). + + < g r a p h i c s > + +Fig. 16. Sample 5-page (left) and 10-page layouts (right). + +We chose to combine all pages of each paper into a single image object before creating the image pyramid as a performance optimization to limit the number of individual objects the system would need to display at any one time. Alternative strategies will be discussed as future work. + +§ 5 EVALUATION + +Quantifying the benefits of information visualization systems is notoriously tricky [34]. To gain insights and usage observations related to our system, we ran two evaluations: a small controlled session to collect initial user feedback, and then a broad, long-term external deployment. + +§ 5.1 INITIAL USER FEEDBACK + +We conducted a qualitative user study to evaluate the features and usability of the Paper Forager system. We wanted to collect initial feedback from users, and validate that some simple (and not so simple) tasks can be accomplished by users in a reasonable amount of time. We recruited 6 participants who were taking an HCI course at a local university (4 male, 2 female, ages 21-24). These students had recently completed a project which required them to gather references for an HCI topic of their choice. As such, they were ideal candidates to give feedback on our system and provide a comparative analysis of Paper Forager to the systems and strategies that they had independently used for their literature reviews. + +The feedback sessions began with a 5-minute overview demonstrating the main features of the system, after which the participants explored the system on their own for an additional 5 minutes.
The sessions concluded with the participants completing a series of 8 tasks of generally increasing difficulty (Fig. 17). + +The tasks were devised such that some could likely be accomplished with a standard digital library search system, some would benefit from faceted searching capabilities, and three of them (c, e, and h) would be prohibitively difficult to accomplish without the added capabilities afforded by the Paper Forager system. The goal of the tasks was to encourage the participants to try different aspects of the system rather than cover all possible use cases for the application. After completing the tasks, participants were asked for thoughts about the system and suggestions for improvements. + +§ 5.2 RESULTS + +All 6 users were able to complete the 8 tasks. While the tasks were not specifically designed to test the speed of using the Paper Forager system compared to traditional digital libraries, task completion times were recorded to see the range of completion times for the various tasks across the set of participants. + +Mean task completion times ranged from 33 seconds (task 1) to 3 minutes and 45 seconds (task 8). In addition to the 6 study participants, a Paper Forager user with approximately 3 hours of experience was asked to perform the tasks to benchmark expert performance levels on these tasks. The longer time for the last task was due to participants not always knowing which part of the paper to read in detail to find the necessary information (Fig. 17). + +It is interesting to note that for tasks 1 through 7, the fastest times from the "novice" study participants after their brief introduction to the system are similar to the completion times from the "expert" user, suggesting that some of the novice users were becoming proficient with using the system after only a short amount of time.
In the comments section of the survey, half (3 of 6) of the participants mentioned that their favourite feature was the ability to string together multiple queries with the "+" operator, and 2 of 6 commented that they particularly liked that they could see thumbnails for the referenced and cited papers in the paper view. During the 5 minute exploration phase, all participants experimented with the dynamic zooming and panning functionality using the mouse. However, during the tasks, they chose to use the search/filter/click interaction style. Additional features which were requested included auto-completion in the search field, additional conferences in the corpus, and more social sharing capabilities. Overall, participants were extremely enthusiastic about the system, and were hopeful that it would be publicly released so they could continue to use it. + + < g r a p h i c s > + +Fig. 17. Task completion times for the 6 study participants, as well as times from one 'expert' user. + + < g r a p h i c s > + +Fig. 18. Images used in questions 3 (left) and 8 (right). + +With the overall positive feedback of the system and the confirmation that users of the system would be able to complete some useful tasks, we went forward with a broader deployment. + +§ 5.3 EXTERNAL DEPLOYMENT + +To gain additional feedback and in-the-wild usage data, as well as to validate the deployability of our cloud-based architecture, we conducted a long-term external deployment of the system. To maintain compliance with ACM copyright policies (as the papers used in the system are from ACM CHI and ACM UIST), access to Paper Forager was restricted to users with a private ACM account with access permissions for CHI and UIST papers, and to IP ranges with a site license to the ACM Digital Library (such as most post-secondary institutions). The system was deployed and available for use continuously over a 2 year period.
+ +§ 5.4 USAGE DATA AND FEEDBACK + +Over the 24-month deployment period, 493 log-in events were registered from 153 unique users, with 49 of the users logging into the system more than once. There were a number of "regular" users, with 20 users logging into the system more than 5 times each, and 11 users logging more than 100 minutes of active usage. A total of 1,887 papers were viewed in "paper view mode" (Fig. 10) and 1,851 searches were performed over the course of the deployment. + +§ 5.4.1 TYPES OF USAGE + +Since Paper Forager was designed to support the various stages of the literature review process (Finding, Scanning, and Reading), we analysed the log data to see if people were using the system in different ways, or if all users were using the system in a similar manner. To do this we looked at usage along two dimensions: Browsing vs. Searching, and Scanning vs. Reading. + +§ 5.4.2 BROWSING VS. SEARCHING (METHODS OF "FINDING") + +In this dimension we are looking at the different ways users can locate potentially relevant papers during the finding phase of the literature review process. Using the system in a more "browsing" manner would involve looking at collections of papers, following citation or reference links, and reading many tooltips. Alternatively, a more "search-based" approach to the finding process involves specifically entering search terms into the search field. This dimension is calculated as the user's ratio of "search" events to "browsing" (viewing collections, inspecting tooltips) events. + +§ 5.4.3 SCANNING VS. READING + +In the scanning phase of the literature review process, a user is quickly looking at papers to figure out if they are worth reading. In Paper Forager, a reasonable proxy for a user spending lots of time in the scanning phase could be a user looking at many papers in the overview paper view mode, while zooming in to view many single pages in the page view could indicate a user spending lots of time in the reading phase.
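A rough sketch of how these two dimensions might be computed from per-user event counts follows; the event names and the use of a log scale are our assumptions, since the paper describes the dimensions as ratios without giving formulas.

```python
import math

def usage_coordinates(searches, browse_events, paper_views, page_views):
    """Log-ratio coordinates for one user: positive x means more search-driven
    finding, positive y means more scanning relative to reading."""
    finding = math.log(searches / browse_events)     # searching vs. browsing
    scanning = math.log(paper_views / page_views)    # scanning vs. reading
    return finding, scanning
```

A user with balanced counts lands at the origin, while heavily search-driven or reading-heavy users move toward the extremes of each axis.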
This dimension is calculated as the ratio of "paper view" events to "page view" events. The 60 users of the system with multiple logins or more than 20 minutes of continuous usage have their activity plotted along these two dimensions in Fig. 19. Each point represents a user, with the size of the point proportional to the amount of activity for that user. Each axis spans a 25x difference in behaviour; that is, users at the bottom of the chart looked at 25x more individual pages than users at the top, and users on the left side performed 25x more searches than those on the right. It is interesting and encouraging to see that users exhibited such a wide range of usage behaviours. Even among the most active users (those with larger circles) we can see that they are distributed around the plot, suggesting that the system can be successfully used for different stages of the review process. + + < g r a p h i c s > + +Fig. 19. Usage log analysis showing usage patterns for finding behavior (x-axis) and scanning vs. reading (y-axis). + +§ 5.4.4 FEEDBACK AND SUGGESTIONS + +At the end of the deployment each user who logged into the system was sent a short voluntary questionnaire (30 of 153 responded, a 20% response rate), where they were asked to answer five questions on a 5-point Likert scale (Strongly Disagree, Somewhat Disagree, Neither Agree Nor Disagree, Somewhat Agree, Strongly Agree): + + * I found Paper Forager easy to use. + + * I found Paper Forager enjoyable to use. + + * Paper Forager is a more effective way to research papers than the techniques/systems I have been using previously. + + * Paper Forager is an efficient way to explore research papers. + + * If kept up to date with papers in my field, I would use Paper Forager to explore research papers in the future.
+ +The first four questions were based on the criteria outlined by Jeng [35] on the factors that contribute to the usability of a digital library system (learnability, satisfaction, effectiveness, and efficiency). Results are shown in Fig. 20. In general, users felt Paper Forager was easy and enjoyable to use, and a majority of users said that if kept up to date with papers in their field, they would continue to use Paper Forager in the future. Besides the subjective questions, users were also asked to provide details about features they liked and suggestions for improvement. + + < g r a p h i c s > + +Fig. 20. Results from the subjective questions asked after the external deployment. + +Several users relayed interesting ways in which they used the system. One Ph.D. student was writing his first paper with a new supervisor and wanted to ensure that his paper followed the general conventions that the supervisor had used in the past. By searching for the supervisor's name and rapidly flipping through his previous papers, the student was able to get the answer to a number of questions about the supervisor's style: + +How many figures does he usually include in a paper? Does he dock figures at the top and bottom of columns, or does he float them in the middle? Does he like using long figure captions? Does he use a particular color scheme for charts? How often does he include an explicit "Contributions" section? How does he typically word his conclusions? + +Before the student had access to Paper Forager he was looking at a single example paper of the supervisor's to try and answer these questions; it was too much work to download and look at all of the supervisor's papers individually. With Paper Forager, each of these tasks took a very short amount of time and effort.
+ +Another user (a 3D user interface researcher) mentioned they used Paper Forager not only in the process of writing papers, but for other tasks as well: + +It is so extremely fast and easy to search various topics. You get an idea of what has been in a field, dig for follow up papers (in-depth search) or other related papers (breadth search). I have even used it to find the best reviewers for a paper, or find relevant researchers on any topic (committees, collaborations, etc.) This is just how the digital library should look! This tool has saved me HUGE amounts of time. + +Finally, a grad student finishing up their Ph.D. mentioned that Paper Forager changed the way they approached writing papers: + +It allowed me to rapidly compare papers to get a sense of structure and style. For instance, when I was writing my own paper, I would quickly look at several examples from related papers to understand what was the typical approach. + +The responsiveness also allowed me to view more related papers. With Google Scholar or the ACM DL, it's often several clicks to view papers, and I have to download the PDF first; with Paper Forager I can quickly look at a paper and decide whether it's relevant, so I would actively look at more papers than I would have otherwise. + +A common issue with the system was that it covered too few conferences, and users wanted the collection expanded to cover more of their interests. Several users (particularly those with slower computers and larger monitors) had trouble with performance, finding the interface to not be as responsive as they would have liked. + +Many users mentioned liking that the references and citations were prominently displayed in the side panel of the paper view, and suggested that the links between related papers could be emphasized even further by showing the relationships in the main collection view. A number of users said they liked the collection view as they often remember papers by their "Fig.
1", however for very large collections (such as the entire 5,055 paper collection shown on launch), some users felt the view was not very helpful, and suggested alternatives such as using a different view of papers when they are displayed very small which could more clearly display relevant information. + +§ 6 DISCUSSION & FUTURE WORK + +The Paper Forager system was designed and optimized to work with collections on the order of 10,000 research documents. It will be interesting to look at how the interaction model should change for much larger collections of papers (an entire digital library, for example), as well as how the performance of the system would be affected. Additionally, we would like to explore using the system with other collections of documents with citation networks, such as patent applications or court proceedings. + +Related to the system performance, Paper Forager combines all pages of each paper into a single image object. It would also be interesting to explore the design opportunities that would arise from storing each page of the paper individually. This would allow for more varied arrangements such as selectively showing only the first page of a paper, arranging the pages of each paper in a row, or highlighting the pages with the most figures. In the time since the system was first developed, Silverlight as a technology has become less well supported (notably, the Silverlight plug-in will no longer run in the Chrome browser). Re-engineering the system as an HTML5/JavaScript web application would be worthwhile. + +To preserve the design and layout work the authors put into creating their papers, we maintained the formatting from the original document. However, we are interested in exploring different representations for the papers when they are at small sizes, such as those explored in previous work [26], [36], [37].
It would also be interesting to consider automated approaches for determining good miniaturized representations of research papers and other types of documents. + +We would also like to look at ways of annotating the thumbnail images to show aspects of the metadata, such as the number of citations or which papers have been saved most often. A coloring technique similar to the one used in AppMap [38], where the thumbnails are shaded based on one variable and sorted by another, could lead to interesting discoveries. The searching and filtering capabilities of Paper Forager were purposefully simplified to improve the approachability of the system, but it would be useful to explore combining the visual aspects of Paper Forager with an advanced paper filtering system such as ASE [3], [4] or a visualization of the citation space such as Citeology [23]. + +Using an image format to display papers has some downsides compared to viewing the actual PDF file, even when the image is at a high resolution. For example, users are unable to select text from a paper in Paper Forager. We believe there is great potential in a hybrid system where multi-scale images would be used to immediately display the paper while the PDF file loads in the background. Once the PDF is loaded, it could seamlessly replace the multi-scale image representation. + +The ACM paper template contains the guidance "Please read previous years' proceedings to understand the writing style and conventions that successful authors have used." We agree that this is a useful, although laborious, task for prospective authors, and hope that Paper Forager could serve as a mechanism to simplify this process. + +§ 7 CONCLUSION + +With Paper Forager we have created a cloud-based system which allows users to rapidly explore a collection of research articles. Our tests of the system produced positive feedback from users, who overall agreed that Paper Forager was easy and enjoyable to use while being effective and efficient.
We believe our work fills an important gap in existing systems for exploring document collections, allowing users to seamlessly transition between finding, scanning, and reading documents of interest. We hope our work can inspire future research and development in the area. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/UMerutSI1p/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/UMerutSI1p/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..cfb403d9a02eb085ab6d04335c2300a4b1d7fb74 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/UMerutSI1p/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,365 @@ +Contour Line Stylization to Visualize Multivariate Information + +Category: n/a + +![01963e93-e7e7-7080-98a3-76d4428f2722_0_217_332_1362_473_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_0_217_332_1362_473_0.jpg) + +Figure 1: (left) A geographic map, and the contour plots of four climatic parameters A (albedo), B (soil moisture), C (pressure), and D (temperature) on a part of the map. (right) Four of our five designs that encode B, C, and D along the contour lines of A. + +## Abstract + +Contour plots are widely used in geospatial data visualization as they provide natural interpretation of information across spatial scales. To compare a geospatial attribute against others, contour plots for the base attribute (e.g., elevation) are often overlaid, blended, or examined side by side with other attributes (e.g., temperature or pressure). 
Such visual inspection is challenging since overlay and color blending both clutter the visualization, and a side-by-side arrangement requires users to mentally integrate the information from different plots. Therefore, these approaches become less efficient as the number of attributes grows. + +In this paper we examine the fundamental question of whether the base contour lines, which are already present in the map space, can be leveraged to visualize how other attributes relate to the base attribute. We present five different designs for stylizing contour lines, and investigate their interpretability using three crowdsourced studies. Our first two studies examined how contour width and the number of contour intervals affect interpretability, using synthetic datasets where we controlled the underlying data distribution. We then compared the designs in a third study that used both synthetic and real-world meteorological data. Our studies show the effectiveness of stylizing contour lines to enrich the understanding of how different attributes relate to the reference contour plot, reveal trade-offs among design parameters, and provide designers with important insights into the factors that influence interpretability. + +Index Terms: Human-centered computing-Visualization-Visualization techniques; Human-centered computing-Visualization-Visualization design and evaluation methods + +## 1 INTRODUCTION + +Contour plots are widely used to visualize geospatial information on two-dimensional maps. Contour lines and contour intervals are two important features of a contour plot. A contour line (isoline) represents a fixed threshold value and connects map points having that value. A contour interval corresponds to the range of values between two successive threshold values. + +The simplicity and rich information found in contour plots make them a popular choice for infographic posters and for geospatial data analysis [5, 13, 30].
Contour lines provide us with a potentially useful visualization resource: a set of points that are already on the map. We can leverage these points to show other data attributes along the contour line, which can provide insights into how other geospatial attributes relate to the base attribute. To the best of our knowledge, the effectiveness of contour line stylization and the perceptual limits of interpreting it are not well understood. + +In this paper, we examine how to stylize contour lines to provide useful additional information to the viewer (Figure 1). We do not tie our work to any domain-specific application, but rather attempt to improve our understanding of various facets of contour line stylization. The contour plots may result from geospatial datasets, mathematical surfaces, or even scatterplot densities. Nevertheless, there exist several motivating scenarios (e.g., analyzing historical change in contour lines or understanding correlation) where contour stylization may be useful. Figure 2 shows such a motivating example based on front prediction in meteorological analysis. The development of a front depends on several factors such as temperature, moisture, wind direction, and pressure. Figure 2 (left) shows front predictions from the National Oceanic and Atmospheric Administration (NOAA) Weather Prediction Center (WPC) archive, where the curved lines (red, blue, or mixed) correspond to various types of fronts (warm, cold, or stationary fronts, respectively). Note that such fronts can be derived using software or by painstaking inspection of the numbers plotted on the map representing various weather parameters. Figure 2 (right) shows our contour stylization for 4 weather variables (pressure, temperature, relative humidity, and precipitable water). The contour lines represent isobars (pressure). The temperature, relative humidity, and precipitable water are encoded in red, blue, and white lines, respectively.
Contour line stylization can readily reveal some potential cold fronts (yellow curves 1, 3, and 5) and warm fronts (curves 2 and 4), which shows the potential of using contour line stylization alongside traditional visualizations. + +Multivariate visualizations that encode data attributes into different preattentive perceptual features of a visual element (glyph) [3, 34, 41], such as size, shape, color, and texture, are typical ways to visualize geospatial information on a map. A well-known limitation of a glyph-based visualization is that it clutters the map [10]. While a dense overlay occludes the view of the base map (Figure 3 (left)), a sparse overlay compromises perception of geospatial connectedness and lacks the gradient information that naturally comes from a contour plot (Figure 3 (right)). + +Our Contribution: We consider geospatial data with four attributes (A, B, C, and D) and encode B, C, and D along the contour lines of A. We design five visual encodings and investigate whether users can interpret the attribute values (high, low), trends (increasing or decreasing), and relationships (similar or opposite trends) along a contour line, or across a set of contour lines. Since the encoding position of B, C, and D is determined by A's contour line, users may want to vary the number of contouring thresholds for A, or use a different base contour plot. Therefore, we describe how to design a synthetic dataset to examine the influence of various design parameters through controlled experiments. + +![01963e93-e7e7-7080-98a3-76d4428f2722_1_153_151_714_271_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_1_153_151_714_271_0.jpg) + +Figure 2: (left) Front detection by NOAA WPC. (right) Detection using one of our five visualization techniques.
+ +![01963e93-e7e7-7080-98a3-76d4428f2722_1_152_506_718_368_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_1_152_506_718_368_0.jpg) + +Figure 3: Multivariate visualization with (left) glyphs occludes the map, and (right) grid stylization lacks the gradient information. + +We conducted three crowdsourced studies that evaluate our designs. The first two studies reveal how contour width and the number of contour intervals influence the visual interpretability of our designs. The third study used both synthetic and real-world meteorological datasets to assess how the designs and datasets compared in terms of task completion time and accuracy, for common geospatial data analysis tasks. In addition to revealing insights into our designs, our experimental results also suggest that results obtained using synthetic datasets generalize to real-world datasets. + +## 2 RELATED WORK + +### 2.1 Multivariate Visualization on a Map + +Geospatial data are often shown using choropleth maps [25, 28], and contour plots [27]. While choropleth maps and cartograms [14] reveal properties of a region, a contour plot helps to understand the data distribution on a map and find regions with similar properties. Data analysts often use color blending for finding probable correlations between two geospatial variables [15]. However, creating a high-quality bivariate choropleth or contour map requires careful choice of blending colors and textures [24]. + +Researchers have also attempted to construct trivariate choropleth maps using the CMY color model [6, 35]. Wu and Zhang [44] examined a 4-variate map that captures the contour band information for each variable in thin visual ribbons, and then overlays the ribbons for all four variables using four different colors. Overlaying glyphs [29, 39] or charts [4, 16] on a map is a popular way to visualize geospatial information.
Glyphs are often designed to encode data into features that can be perceived through preattentive visual channels [43]. A rich body of visualization design research examines how humans perceive various combinations of geometric, optical, relational, and semantic channels. We refer readers to recent surveys [9, 17, 41] for a detailed review of glyph design. Glyph-based visualizations often must use a careful glyph positioning technique [29, 44], as creating glyphs for many data points on a map causes overlap. + +Various texture metrics such as contrast, coarseness, periodicity, and directionality [26, 38] have been used to visualize multivariate data. Healey and Enns [20] introduced pexels, which encode multidimensional datasets into multi-colored perceptual textures with height, density, and regularity properties. Shenas and Interrante [36] showed that color and texture can be combined to meaningfully convey multivariate information with four or more variables on a choropleth map. + +### 2.2 Stylization of Lines and Boundaries + +Stylized lines naturally appear in the visualization of trajectory data. For example, traffic flow data are often color-coded on road networks as heatmaps [23, 40]. Andrienko et al. [1] extracted characteristic points from car trajectories and aggregated them to create flows between cellular areas to reveal movement patterns in a city. They used stylization to depict various information about the aggregated flows. Huang et al. [23] modeled taxi trajectories using a graph. They stylized the streets based on node centrality and overlaid rose charts to visualize other traffic information. Perin et al. [2, 33] investigated combinations of thickness, monochromatic color scheme, and tick mark frequency on a line to encode time and speed on a two-dimensional line.
They observed that encoding speed with a color scheme and time using one of the other two features improved user perception. + +Geographic cluster visualization and map generation techniques have also considered line stylization. Christophe et al. [8] proposed a pipeline for generating artistic and cartographic maps that integrates linear stylization, patch-based region filling, and vector texture generation. Kim et al. [24] created Bristle Maps, which place bristles perpendicular to the linear elements (streets, subway lines) of the map and then encode multivariate information into the length, density, color, orientation, and transparency of the bristles. Zhang et al. [45] introduced TopoGroups, which aggregate spatial data into hierarchical clusters and show information about geographic clusters on the cluster boundaries. Although TopoGroups summarizes cluster information along the boundary, Zhang et al. noted that users may mistakenly see the visualization as representing local statistics near the boundary. In subsequent work, Zhang et al. [46] proposed TopoText, which replaces the boundaries with oriented text. + +Visual encoding of lines and boundaries has been widely used in visualizing data uncertainty [7, 18]. Cedilnik and Rheingans [7] overlaid a regular grid on the map and then stylized the grid edges using blur, jitter, and wave. Data uncertainty has also been mapped to contour lines, where uncertainty is encoded in line color, thickness, and dash frequency [31]. Line stylization has also been used in cartograms. Görtler [18] proposed the bubble treemap, which represents uncertainty information using wavy circle boundaries, varying wave frequency and amplitude based on the uncertainty. Patterson and Lodha [32] encoded five socio-economic variables simultaneously on a world map using country fill color, glyph fill color, glyph size, country boundary color, and cartogram distortion.
+ +## 3 Visual Encoding + +In this section we describe five contour-based designs (Figure 4) for encoding geospatial information with four attributes: A, B, C, and D. We assume that all the attributes are numeric and positive. We create a set of contour lines using A, and then encode the attributes B, C, and D along the contour lines of A using visual features. + +![01963e93-e7e7-7080-98a3-76d4428f2722_2_154_148_1491_233_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_2_154_148_1491_233_0.jpg) + +Figure 4: Encoding multivariate information using Parallel Lines, Color Blending, Pie, Thickness-Shade and Side-by-Side. + +Rationale: For encoding, we used visual features that are preattentive [42] and intuitive to interpret, or have been used in prior research [33], e.g., line thickness, monochromatic color scheme, and pie slice. Most of our designs are based on the notion of channel separability [21], but we also kept color blending as it has often been used in the context of correlation analysis of geospatial data [15, 22, 37]. + +Design 1 (Parallel Lines): This design maps B, C, and D into three lines with distinct colors. The lines for B and C lie on opposite sides of the contour line of A, and the line for D follows the contour line of A. The data values are encoded using line width (between 0 and $w$): the value of the attribute is linearly mapped to the range $[0, w]$. If D's value is 0, the base contour line of A becomes visible. + +Design 2 (Color Blending): This design encodes B and C with distinct colors, and then blends them on the contour line of A. The attribute D is mapped to the width of the contour line.
Note that since the contour line of A has a non-zero width $u$, the values of D are mapped to the linewidth range $[u, w]$. Consequently, B and C remain visible even when D is 0. + +Design 3 (Pie): This design encodes B, C, and D using pie slices of distinct colors, and puts them together to create a pie icon. The only difference from a pie chart is that the sum of the values of B, C, and D may not be equal to the total pie area. The pie icons are placed successively along the contour line of A. The pie slices for B, C, and D start at $0^\circ$, $120^\circ$, and $240^\circ$ (taking the top as $0^\circ$), and can grow clockwise to cover an angle of up to $120^\circ$. An attribute value is encoded into the angle covered by the corresponding pie slice. + +Design 4 (Thickness-Shade): This design represents B and D using two distinct lines. The lines of B and D lie on opposite sides of the contour lines of A, with values encoded using line width. The values of C are encoded using a monochromatic color scheme, where the color appears on B's line. A low C value corresponds to a lighter shade, and a high value to a darker shade. The minimum line width for B is set to a positive threshold $u$, making a range of $[u, w]$, so that C remains visible even when B is 0. + +Design 5 (Side-by-Side): This design shows B, C, and D in separate side-by-side views. Each of B, C, and D is encoded using a distinct monochromatic color scheme. The color appears on the contour lines of A. We set the width and height of each Side-by-Side view to $\lceil \sqrt{A/3}\rceil$, where $A$ is the total pixel area of any other design, assuming a square display.
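The per-attribute mappings in these designs are simple linear transforms. The following is a minimal sketch (hypothetical helper names, not the authors' implementation) of the mappings described above, assuming attribute values are first normalized to [0, 1]:

```python
import math

def width_open(value, w):
    # Design 1 (Parallel Lines): linear map of a normalized value into [0, w];
    # a value of 0 makes the line vanish, exposing the base contour line.
    return value * w

def width_clamped(value, u, w):
    # Designs 2 and 4: map into [u, w] so the line stays visible at value 0.
    return u + value * (w - u)

def pie_angle(value):
    # Design 3 (Pie): each slice sweeps clockwise over at most 120 degrees.
    return value * 120.0

def side_by_side_extent(total_area):
    # Design 5 (Side-by-Side): side length ceil(sqrt(A/3)) of each square view,
    # so the three views together match the pixel area A of any other design.
    return math.ceil(math.sqrt(total_area / 3))

# Example: an attribute at 50% of its range, with u = 4 and w = 10 pixels.
print(width_open(0.5, 10), width_clamped(0.5, 4, 10), pie_angle(0.5))
print(side_by_side_extent(300 * 300))
```

The clamped variant makes explicit why B and C (Design 2) and C (Design 4) remain legible when the width-encoded attribute drops to zero.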
+ +## 4 IMPLEMENTATION DETAILS AND DATASETS + +The choice of contouring thresholds is application-specific, but in our controlled experiments we used $k$-quantiles as the thresholds. This allows us to reduce visual clutter and to examine the designs across a large number of contouring thresholds. We first computed the contour lines for A, and then further processed these polylines by dividing long line segments uniformly to create fine-grained polygonal chains. We then encoded the attributes by interpolating the values at the endpoints of these tiny segments. Figure 5 illustrates the parameters used for the design. Here $b$, $c$, and $d$ denote the normalized B, C, and D values, respectively, and $t$ is a thickness factor that was used to linearly map the attribute values to the input line-thickness range. The number of discrete shades in the perceptual color scale depends on the number of contour intervals. For blending, we used the CSS mix-blend-mode property, where the scheme Multiply was chosen in a pilot study comparing 3 candidate schemes: Multiply, Darken, and Difference. + +Synthetic Data: For each of the four attributes, we created scatterplots consisting of four Gaussian clusters that were positioned randomly in the four quadrants. Each cluster had 40,000 samples, with randomly varying covariance (2.5-7.5), two features (the $x$ and $y$ coordinates), and 4/6/8 classes (for 4, 6, and 8 contour intervals). The clusters were then interpolated and reshaped such that the point density plot for each cluster takes the shape of a peak or valley. + +All the clusters of A, B, C, and D at a quadrant overlapped one another, creating various peak-valley combinations. This also allowed us to obtain scenarios where an attribute value increases or decreases across successive contour lines of A.
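The contouring pipeline described at the start of this section ($k$-quantile thresholds, then uniform subdivision of long contour segments before interpolating attribute values at segment endpoints) could be sketched as follows; this is an illustrative reconstruction with numpy, not the authors' code:

```python
import numpy as np

def quantile_thresholds(values, k):
    """Contouring thresholds at the k-quantiles of A: k intervals need k-1 thresholds."""
    qs = np.linspace(0, 1, k + 1)[1:-1]  # interior quantile levels only
    return np.quantile(values, qs)

def subdivide(p0, p1, max_len):
    """Split one contour segment into pieces no longer than max_len.

    Returns the chain of points; attribute values would then be interpolated
    at these endpoints before styling each tiny segment.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n = max(1, int(np.ceil(np.linalg.norm(p1 - p0) / max_len)))
    ts = np.linspace(0.0, 1.0, n + 1)
    return [tuple(p0 + t * (p1 - p0)) for t in ts]

# Example: thresholds for 4 contour intervals, and one subdivided segment.
vals = np.arange(100)
print(quantile_thresholds(vals, 4))
print(subdivide((0, 0), (10, 0), 3.0))
```

Using quantiles rather than uniformly spaced thresholds keeps the number of map points between successive contour lines roughly balanced, which is what makes the clutter comparable across datasets.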
To visualize all possible trends (increasing or decreasing) of B, C, and D, we used all possible peak-valley combinations for these attributes. To imitate real-world topographic map patterns, we varied the cluster overlaps for A. + +Real-World Meteorological Data: To test the designs on real-world data, we used real meteorological datasets ${}^{1}$. For our study, we extracted four attributes from the dataset (temperature, pressure, soil moisture, and albedo) at different geolocations. + +Evaluation: A careful choice of design parameters is important to achieve optimal readability for the designs. Two major factors that can influence the designs are width (the space allocated along the base contour line for encoding B, C, and D) and the number of contouring thresholds for A. Therefore, we first conducted two studies to examine these factors and choose appropriate parameter values for the designs. In the final study, we evaluated the designs based on viewers' task performance. The first two studies were conducted on synthetic data, and the final study included both synthetic and real-world data. + +## 5 STUDY 1 (CONTOUR WIDTH) + +Our first study investigated the effect of contour width on design interpretability, as well as on tasks that use the underlying map. + +Intuitively, increasing contour width should make the encoded variables easier to see and interpret, but will also increase occlusion of the map; in addition, wide contours may also overlap each other, depending on the density and shape of the contours. To investigate this trade-off, we set the number of contour intervals to 8 (i.e., 7 contouring thresholds), and then determined a range of widths to explore for each design. We used 8 contour intervals because this allows designers a reasonable spectrum of design choices, and gives us a reasonable range to investigate in Study 2 (described below).
+ +Table 1 illustrates the width ranges used in Study 1. We chose minimum and maximum widths for each design based on informal testing with each design's encoding. The minimum width is determined by the number of pixels required to create the design and make the variation in the attributes noticeable. For example, Parallel Lines requires 9 pixels to encode the variation (low, mid, high) for each of the three attributes. The maximum width corresponds to the case when the successive linear elements are about to overlap. + +--- + +${}^{1}$ anonymized + +--- + +![01963e93-e7e7-7080-98a3-76d4428f2722_3_175_163_1449_293_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_3_175_163_1449_293_0.jpg) + +Figure 5: Illustration for the implementation details for different designs. + +Table 1: Different widths for Study 1 + +
| Design | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| Parallel Lines | 9 | - | 12 |
| Color Blending | 4 | 7 | 10 |
| Pie | 8 | - | 10 |
| Thickness-Shade | 8 | - | 10 |
| Side-by-Side | 1 | 2 | 4 |
+ +We also selected a median width if there was enough difference between minimum and maximum that the median would be at least two pixel units from the extremes. Hence we only have a median width for Color Blending and Side-by-Side (e.g., for Parallel Lines, the encoding for B, C, and D needs to be the same, so the next possible width choice after 9 is 12). + +### 5.1 S1: Participants, Data, and Tasks + +We ran a crowdsourced study on Amazon Mechanical Turk (AMT) [12]. To be eligible, a participant needed to pass a color perception test (Ishihara test [11]), run the study on a desktop computer, reside in North America, and have at least an 80% approval rate on AMT. We recorded 63 complete responses (32 male, 31 female, median age range 30-39). + +All experimental tasks were created using synthetic data. We tested the 5 designs described earlier, with the width choices determined for each design (12 in total, see Table 1). Participants completed 57 tasks (4 tasks for each width that involved interpreting the variables encoded in the design, plus one additional task for three of the widths that involved reading the background map). + +Rationale for Tasks: We chose the tasks to be general enough that they can be applied in a variety of scenarios, and by considering the use cases illustrated in Section 1. We deliberately designed 3-variable tasks since our knowledge of line stylization is most limited in this case. In our four interpretation tasks (Table 2), one involved identifying values, two involved looking for trends, and one involved comparing trends (e.g., Figure 2). For tasks 1 and 3, we marked four contour sections on the design (Figure 6 (left)), and participants selected the option that matched the requested combination or trend. For tasks 2 and 4, we drew four lines across the contours, and participants selected the option that best identified a specific trend (e.g., Figure 6 (right)).
The background map reading task used the Color Blending design with 3 of the widths (4, 8, and 12). In this task, icons were placed in the underlying map, and participants were asked to count the number of icons and select an answer from 4 options. The reason for choosing Color Blending as a representative design for the background task is that it has three different width choices that are substantially different from each other. The icons were $8 \times 8$ pixels. The number of icons ranged from 18 to 22, and the icons were placed randomly on the map. Therefore, in some cases the icons were partially hidden by the contour lines. + +Table 2: Tasks with domains for Study 1 + +
| ID | Task | Domain |
| --- | --- | --- |
| 1 | Select the marked contour region that best represents the following combination: high B, low C, and high D | Compare different marked contour regions |
| 2 | Consider the contour regions intersected by the lines. Select the directed line that best represents the following trend: B and D both increase, and C decreases | Interpret trends across contour lines |
| 3 | Select the marked contour region that, when moving clockwise, best represents the following trend: B decreases, D increases, and C stays the same | Interpret trends along a contour line |
| 4 | Count the number of lines that show the following: B and C have the opposite trend to D | Identify similar/opposite trends across contour lines |
+ +![01963e93-e7e7-7080-98a3-76d4428f2722_3_926_1161_719_367_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_3_926_1161_719_367_0.jpg) + +Figure 6: Contour-region task (left); Trend task (right) + +S1 Hypotheses: We hypothesized that as width increases, accuracy will increase and completion time will decrease ($h_1$). We also hypothesized that the influence of width will be more noticeable for Parallel Lines, Pie, and Side-by-Side than for the other designs ($h_2$), because Color Blending and Thickness-Shade could be more difficult to interpret for inexperienced users. Finally, we hypothesized that for the background task, increasing contour width will lead to lower accuracy and higher completion time ($h_3$). + +### 5.2 S1: Procedure + +Participants completed an informed consent form, were shown a description of the designs, and were given a set of practice tasks to complete. After each practice task, the participant was told whether the response was correct or not, and was given an explanation of the correct answer with a brief justification. Participants then completed the 57 tasks as described above. After each design, participants completed a NASA-TLX-style effort questionnaire [19], and at the end of the study they rated their familiarity with visualization interfaces and their preferences for each of the widths. Participants were asked to complete the tasks as quickly and accurately as possible. Each task started with a 'start' button, and ended when the participant selected one of the multiple-choice answer options and pressed 'next'. Before starting a new task, participants were shown a reminder that they could rest before continuing. + +The study used a within-participants design, with contour width as the independent variable (considered separately for each design); dependent variables were accuracy, completion time, and subjective effort scores.
Designs and tasks were presented in random order (sampling without replacement). + +### 5.3 S1: Results + +We applied additional filters to test whether participants were legitimately attempting the tasks (e.g., checking for inconsistent answers and large time gaps in their surveys). After filtering, we had 44 participants (20 male, 24 female) with a median age range of 31-39. + +S1: Interpretation tasks: For the four interpretation tasks involving the B, C, and D attributes, data were analyzed using repeated-measures ANOVAs for each design (because each design used a different set of widths); Bonferroni-corrected paired t-tests were used for follow-up comparisons. Figure 7 (left) shows that accuracies increased slightly as contour width increased, and Figure 7 (middle) shows that completion times decreased overall as width increased. + +We found significant effects of width on accuracy for the Parallel Lines design, and on completion time for Color Blending, Pie, and Side-by-Side. No effect of width was found for Thickness-Shade. For Parallel Lines (widths 9 and 12), we found a significant effect of width on accuracy ($F_{1,43} = 5.4$, $p < .05$), with width 12 having higher accuracy than width 9. For Color Blending (widths 4, 7, 10), we found a significant effect of width on completion time ($F_{2,86} = 8.09$, $p < .05$); see Figure 7 (middle). Post-hoc t-tests showed that width 10 was faster than both width 7 and width 4 (all $p < .05$). For Pie (widths 8 and 10), we found a significant effect of width on completion time ($F_{1,43} = 9.46$, $p < .05$); width 10 was faster than width 8. For Side-by-Side (widths 1, 2, 4), we found a significant effect of width on completion time ($F_{2,86} = 13.46$, $p < .05$). Post-hoc t-tests showed widths 2 and 4 to be faster than width 1 ($p < .05$).
+ +S1: Background Task: The background icon-counting task used the Color Blending design with widths 4, 8, and 12. We found a significant effect of width on accuracy ($F_{2,86} = 121.76$, $p < .05$). Post-hoc t-tests showed significant differences among all 3 widths ($p < .05$). As shown in Figure 7 (right), the mean accuracy for width 4 was higher than for width 8, which in turn was higher than for width 12. There was no effect of width on completion time. + +S1: Effort and Preference: We asked participants to rate their amount of mental effort, overall effort, frustration, and perceived success with each design. Friedman tests showed significant differences on all questions (all $p < .005$), with Parallel Lines and Side-by-Side rated better than the other designs. The width preference question revealed higher user preferences for the maximum (50% of participants) and median (41%) widths. + +### 5.4 S1: Discussion + +Increased contour widths for interpretation tasks led to improved completion time in three of the designs, and improved accuracy for one design, partially supporting hypothesis $h_1$. We did not find any significant effect of width for Thickness-Shade, which partially supports our hypothesis $h_2$ that the effects of width would be more obvious for some designs. Our results for the background task (an effect of width on accuracy, but not on completion time) partially support hypothesis $h_3$. Overall, the fact that there was only a minor effect of reduced width on interpretability (particularly for accuracy) means that width can often be safely reduced in scenarios where the visibility of the background is critical. + +Based on these findings we chose to use the maximum width for each design in further studies. In the following section, we explore the influence of contour intervals, which is another important element of a contour plot.
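The follow-up analysis style used in Study 1 (pairwise paired t-tests with a Bonferroni correction across the comparisons) can be sketched as below. This is a generic illustration with hypothetical data, not the authors' analysis scripts; it assumes scipy is available and that each condition holds one mean value per participant, in the same participant order.

```python
from itertools import combinations
from scipy import stats

def bonferroni_paired_tests(conditions, alpha=0.05):
    """Paired t-tests between every pair of conditions, Bonferroni-corrected.

    `conditions` maps a condition label (e.g., a contour width) to a list of
    per-participant values, ordered identically across conditions.
    Returns {(a, b): (t, p, significant_after_correction)}.
    """
    pairs = list(combinations(conditions, 2))
    corrected_alpha = alpha / len(pairs)  # divide alpha by the number of comparisons
    results = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(conditions[a], conditions[b])
        results[(a, b)] = (t, p, p < corrected_alpha)
    return results

# Hypothetical completion times (seconds) for three widths, four participants.
times = {4: [20.1, 25.3, 22.8, 30.2], 7: [18.0, 24.1, 21.5, 28.0], 10: [15.2, 20.0, 18.3, 24.9]}
for pair, (t, p, sig) in bonferroni_paired_tests(times).items():
    print(pair, round(t, 2), round(p, 4), sig)
```

Paired (rather than independent) tests are appropriate here because the design is within-participants: each participant contributes a value to every width condition.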
+ +## 6 Study 2 (Contour Intervals) + +A higher number of contour intervals increases both the number of visual elements in the design and the degree of background occlusion. Increasing the number of contours, however, also provides more data points for the other variables visualized on the contour, and so may increase the interpretability of these variables. Our study explores this trade-off using a study design similar to that used above. + +### 6.1 S2: Participants, Data, and Tasks + +We ran the study on Amazon Mechanical Turk with the same eligibility criteria as in Study 1. We recorded 68 complete responses (40 male, 26 female, 1 non-binary, 1 preferred not to answer), aged 21-60 (median age range 21-29). None of the participants took part in Study 1. The study used the 5 designs described above, each with 3 contour interval alternatives (4, 6, or 8 intervals). Participants completed 60 tasks: the same 4 interpretation tasks from Study 1 for each combination of design and contour interval, and the background icon-counting task. + +For analyzing the interpretation tasks, the study used a within-participants design with three factors: Design (the five designs described above), Task (the four interpretation tasks from Study 1), and Number of Intervals (4, 6, or 8). The main dependent measures were accuracy and completion time; we also collected subjective effort and preference scores. + +S2 Hypotheses: We hypothesized that more contour levels will result in better performance for the tasks that require analysis across contour lines ($h_4$). For the background task, we hypothesized that more contour levels will lead to lower accuracy and higher completion time ($h_5$), due to increased occlusion. + +### 6.2 S2: Procedure + +Similar to Study 1, participants went through the eligibility tests, design demonstration, and practice tasks.
Then they completed the main study tasks and filled out the TLX-style effort surveys and overall preference questions. The data and tasks were the same as in Study 1, and designs and tasks were presented in random order. + +### 6.3 S2: Results + +After filtering the participants based on response consistency, we had 46 participants (28 male, 1 non-binary, 17 female) with a median age range of 31-39. + +S2: Interpretation tasks: We carried out $5 \times 4 \times 3$ RM-ANOVAs (Design $\times$ Task $\times$ Number of Intervals) for both accuracy and completion time, with Bonferroni-corrected t-tests as follow-up. There was a significant main effect of Intervals $(F_{2,90} = 3.69, p < .05)$ on accuracy. Post-hoc tests showed that 4 and 6 contour intervals had higher accuracy than 8 intervals (see Figure 8). There was also an interaction between Design and Task $(F_{12,540} = 2.09, p < .05)$. As can be seen in Figure 9, the Pie design had higher accuracy in Tasks 2 and 3 compared to the other designs. There was no main effect of Design on accuracy $(F_{4,180} = 1.58, p = 0.18)$. For completion time, there were no main effects of Design or Intervals $(p > .05)$, but there was an interaction between Intervals and Task $(F_{6,270} = 3.21, p < .05)$. + +S2: Background Task: We found no main effects of the number of contour intervals on either completion time or accuracy for the icon-counting task. + +S2: Subjective Effort and Preferences: Participants rated mental effort, overall effort, frustration, and perceived success with each design. Responses were similar across all designs, and Friedman tests showed a significant difference only for overall effort $(p < .05)$, with the Pie design seen as requiring more effort than the others.
We asked participants about their preference: 4 intervals (36%) and 6 intervals (39%) were preferred to 8 intervals (25%). + +![01963e93-e7e7-7080-98a3-76d4428f2722_5_259_152_1219_340_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_5_259_152_1219_340_0.jpg) + +Figure 7: Study 1: (left and middle) Performance of the designs at different width choices for Tasks 1-4. Lines are connected to group the designs, not to denote continuity of the widths. (right) Accuracy and completion time for the icon-counting task. + +![01963e93-e7e7-7080-98a3-76d4428f2722_5_204_629_617_291_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_5_204_629_617_291_0.jpg) + +Figure 8: Study 2: Task performance with different contour intervals. + +![01963e93-e7e7-7080-98a3-76d4428f2722_5_152_983_716_513_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_5_152_983_716_513_0.jpg) + +Figure 9: Study 2: Task accuracy for the five designs. + +### 6.4 S2: Discussion + +Overall, 4 and 6 intervals performed better than 8. One main reason for this result is that fewer contour intervals produce less visual clutter. Although the effect of contour levels on accuracy was clearest for Task 3, there were also significant interactions between contour intervals and task for completion time, which partially supports $h_4$. We observed that higher contour levels slightly reduced mean accuracy for Tasks 1 and 3, where users may have found it difficult to follow a line when other parallel lines were in close proximity. + +The interactions show that task performance depends on the combination of design, contour intervals, and task. The significant Design $\times$ Task interaction for accuracy suggests that how well a design performs depends on the task; similarly, the Intervals $\times$ Task interaction for completion time suggests that the effect of contour level also depends on the task. + +In the next study, we focus on how performance varies by design in both synthetic and real-world datasets.
We used 4 contour intervals for all the designs. + +## 7 Study 3 (Design Comparison) + +### 7.1 S3: Participants, Data, and Tasks + +We ran the study on Amazon Mechanical Turk with the same eligibility criteria as in Study 1. We recorded 78 complete responses (41 male, 33 female, 2 non-binary, 1 preferred not to answer), median age range 30-39. None of them participated in Study 1 or 2. + +We used two datasets for this study (one synthetic and one from a real-world scenario). Participants completed 6 different tasks for each design and dataset combination, which resulted in 60 tasks. Of the 6 tasks, 4 were similar to those in Studies 1 and 2. We added 2 additional tasks (Table 3) to examine whether users are able to interpret the extent of value changes of a single attribute (Task 5), and to estimate the difference between two attributes (Task 6). + +Table 3: Tasks with domains for Study 3 + +
| ID | Task | Domain |
| --- | --- | --- |
| 5 | Select the marked contour region that has the maximum change in (a given attribute) | Estimate value changes along a contour line |
| 6 | Select the marked contour region that has the minimum difference between (a given pair of attributes) | Estimate value difference along a contour line |
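Tasks 5 and 6 ask participants to judge value changes and value differences along a contour line. The sketch below makes these quantities concrete; the helper names and the sample values are invented for illustration, not taken from the study:

```python
# Hypothetical helpers illustrating the judgments behind Tasks 5 and 6;
# region names and sample values below are invented, not study data.

def max_change_region(samples):
    """Task 5: region whose sampled attribute values change the most."""
    return max(samples, key=lambda r: max(samples[r]) - min(samples[r]))

def min_difference_region(samples_a, samples_b):
    """Task 6: region with the smallest mean difference between two attributes."""
    def mean_abs_diff(r):
        diffs = [abs(a - b) for a, b in zip(samples_a[r], samples_b[r])]
        return sum(diffs) / len(diffs)
    return min(samples_a, key=mean_abs_diff)

# Attribute values sampled along three marked contour regions.
temperature = {"R1": [10, 12, 19], "R2": [14, 15, 14], "R3": [8, 9, 11]}
humidity = {"R1": [60, 55, 40], "R2": [15, 16, 15], "R3": [30, 28, 25]}
```

With these invented values, `max_change_region(temperature)` picks R1 (a swing of 9 units), and `min_difference_region(temperature, humidity)` picks R2, whose two attributes track each other most closely.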
+ +### 7.2 S3: Procedure + +Similar to Study 1, participants went through the eligibility tests, design demonstrations, and practice tasks. Then they completed the main study, the effort survey, and preference questions (in this study, participants stated their preference for one of the designs after each task, as well as overall). + +The study used a within-participants design with three factors: Design (the five designs described above), Task (the six interpretation tasks), and Dataset (Synthetic or Real-world). The main dependent measures were accuracy and completion time; we also collected subjective effort and preference scores. Designs and tasks were presented in random order (sampling without replacement). + +### 7.3 S3: Results + +After filtering based on response consistency, we had 54 participants (30 male, 22 female, 2 preferred not to answer) with a median age range of 31-39; 45 participants reported familiarity with data visualization interfaces. + +![01963e93-e7e7-7080-98a3-76d4428f2722_6_156_152_689_339_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_6_156_152_689_339_0.jpg) + +Figure 10: Study 3: Overall performance of the different designs. + +We carried out $5 \times 6 \times 2$ RM-ANOVAs (Design $\times$ Task $\times$ Dataset) for both accuracy and completion time, with Bonferroni-corrected t-tests as follow-up. There was a significant main effect of Design $(F_{4,212} = 19.2, p < .001)$ on accuracy. Post-hoc t-tests showed that Parallel Lines, Pie, and Side-by-Side were significantly more accurate than Color Blending and Thickness-Shade (Figure 10). There was also a main effect of Dataset $(F_{1,53} = 13.8, p < .001)$: participants were more accurate with the real-world dataset (66%) than with the synthetic dataset (59%). + +There were also significant interactions between Design and other factors.
First, there was a Design $\times$ Dataset interaction $(F_{4,212} = 10.7, p < .001)$. As shown in Figure 11, the Color Blending design had substantially lower accuracy for the synthetic data compared to the other designs, and only Parallel Lines was equally accurate with both datasets. Second, there was a Design $\times$ Task interaction $(F_{20,1060} = 5.01, p < .001)$. Figure 11 shows substantial differences in the tasks depending on the design: for example, the accuracy of the Color Blending design was substantially lower in Tasks 1 and 6, and the accuracy of Thickness-Shade was lower for Task 6. + +For completion time, there were no main effects of Design $(p > .05)$, but there were interactions between Design and Dataset $(F_{4,212} = 2.91, p < .005)$ and between Design and Task $(F_{20,1060} = 2.17, p < .001)$. + +Table 4: Study 3: Design Preference Survey + +
| Design | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Overall |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Parallel Lines | 2.28 | 2.5 | 2.48 | 2.37 | 2.57 | 2.74 | 2.85 |
| Color Blending | 1.85 | 1.83 | 1.87 | 1.57 | 1.76 | 1.93 | 1.93 |
| Pie | 1.78 | 1.87 | 2.24 | 1.72 | 2.06 | 2.02 | 2.15 |
| Thickness-Shade | 2.04 | 2.09 | 2.09 | 1.87 | 2.23 | 2.03 | 2.17 |
| Side-by-Side | 2.69 | 2.74 | 2.85 | 2.37 | 2.69 | 2.59 | 2.93 |
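The per-task scores in Table 4 are simple averages of participants' 0-4 preference ratings. A minimal sketch of that aggregation (the three-participant sample below is invented, not study data):

```python
# Sketch of the aggregation behind Table 4: per-design means of 0-4
# preference ratings. The three-participant sample below is invented.

def mean_scores(ratings_by_design):
    """Mean 0-4 rating per design."""
    return {d: sum(r) / len(r) for d, r in ratings_by_design.items()}

def ranked(ratings_by_design):
    """Designs ordered from most to least preferred."""
    means = mean_scores(ratings_by_design)
    return sorted(means, key=means.get, reverse=True)

task1_ratings = {
    "Parallel Lines": [3, 2, 2],
    "Side-by-Side": [3, 3, 2],
    "Color Blending": [2, 2, 1],
}
```

With this invented sample, `ranked(task1_ratings)` puts Side-by-Side first, mirroring the pattern in Table 4 where Side-by-Side tops most task columns.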
+ +S3: Subjective Effort and Preferences: We again asked participants to rate mental effort, overall effort, frustration, and perceived success with each design. Friedman tests showed significant differences for all questions $(p < .05)$, with Parallel Lines and Side-by-Side scoring better than the other designs. We also asked users to rate the designs on a 0-4 scale (0: 'not preferred' to 4: 'highly preferred'). Participants rated the designs for each task as well as their overall preference. Mean participant scores are presented in Table 4 (higher values are better). The table highlights the top scores for each task and the top two designs for overall preference (Parallel Lines and Side-by-Side were the most-preferred designs). + +### 7.4 Overall Discussion + +The main finding of the third study is that all of the designs were successful for at least some of the tasks, and that the designs were similar in their performance - with the exception of Color Blending, which showed reduced accuracy compared to the other designs for Tasks 1 and 6. The study also clearly showed that integrating multiple variables into a single contour line results in visualizations that users can interpret successfully - as successfully as separate individual presentations (Side-by-Side). This is an important result for situations where designers need to provide a single larger view rather than divide the available space into pieces, as a side-by-side presentation requires. + +Overall, two designs, Parallel Lines and Side-by-Side, performed best and had high preference scores. Both designs provide separate encoding space for each variable; in addition, the encodings of the different attributes are similar, and they are intuitive to read without close inspection of the legend.
Side-by-Side had high preference scores for all tasks except Task 6: in this task, the higher mean preference for Parallel Lines is likely due to its symmetric encoding for all the attributes, which makes the value difference easier to estimate, whereas for Side-by-Side users need to compute the difference by inspecting two separate views. + +Interestingly, however, both the Parallel Lines and Side-by-Side designs have space constraints (Parallel Lines in terms of contour width, and Side-by-Side in terms of display space). For scenarios where a single view is required and the visibility of the background is important, neither of these designs may be feasible. In these cases, the Pie design appears to be a reasonable compromise because it had good accuracy and takes up less space. + +The Color Blending and Thickness-Shade designs both had poor performance on at least one task. This could be due to participants' unfamiliarity with the encoding, but the visual variables used in these designs may also be more difficult to interpret overall. In addition, unlike the other designs, the encoding in these two designs was not symmetric across attributes. Problems in Task 1 (for Color Blending) may have resulted from the need to estimate attribute value combinations, where interpretation likely demanded a close inspection of the legend. In addition to the possibility of misinterpretation, this may have led to increased cognitive load or reduced effort for tasks using these designs. The same reasoning holds for the poor performance of Thickness-Shade in Task 6, where estimating the value difference between a pair of attributes encoded in different features, such as thickness and color shade, requires a careful reading of the legend. + +Our results showed better performance with the real-world data than with the synthetic dataset (Figure 11), which may be due to differences in the underlying data distributions.
The synthetic dataset consisted of almost all possible trend combinations for the various attributes, so the corresponding visualizations contained highly varied feature combinations; in contrast, the visualizations generated from real-world data had fewer variations. Visualizations of real-world datasets are therefore likely to be visually simpler, leading to improved accuracy and task completion time. These differences partially explain the significant interactions among design, dataset, and task in accuracy, and the interaction between design and dataset in completion time. + +Based on the study results, we formulated a table of design recommendations (Table 5) that summarizes the preferred design choices for various tasks in three environments: general use, time-sensitive interpretation, and high accuracy requirements. The table shows some strengths of particular designs that do not emerge from the overall discussion above: for example, if the task requires quick estimation of trends along a contour line, then Color Blending and Thickness-Shade may be the best design options. + +![01963e93-e7e7-7080-98a3-76d4428f2722_7_291_149_1215_873_0.jpg](images/01963e93-e7e7-7080-98a3-76d4428f2722_7_291_149_1215_873_0.jpg) + +Figure 11: Task performance in Study 3, for (top) different designs, and (bottom) different datasets. + +Table 5: Design Recommendation Table + +
| Domain | Time Sensitive | Accuracy Sensitive | General |
| --- | --- | --- | --- |
| Compare different contour parts | Parallel Lines, Pie | All except Color Blending | All except Color Blending |
| Search for a trend across contour lines | Parallel Lines, Pie, Side-by-Side | Pie, Side-by-Side | Parallel Lines, Side-by-Side, Pie |
| Search for a trend along a contour line | Color Blending, Thickness-Shade | All | All |
| Identify rate of change of a variable along a contour line | All except Pie | All | All |
| Identify the value difference on a contour part | Parallel Lines, Side-by-Side | Parallel Lines, Side-by-Side | Parallel Lines, Side-by-Side, Pie |
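Table 5 can be read as a simple lookup from task domain and environment to recommended designs. One possible encoding (the dictionary structure and the `recommend` helper are ours; the contents come directly from the table):

```python
# A lookup encoding of Table 5. The data mirrors the table; the structure
# itself is just one possible illustrative representation.

DESIGNS = ["Parallel Lines", "Color Blending", "Pie", "Thickness-Shade", "Side-by-Side"]
ALL = set(DESIGNS)

RECOMMENDATIONS = {
    "compare different contour parts": {
        "time": {"Parallel Lines", "Pie"},
        "accuracy": ALL - {"Color Blending"},
        "general": ALL - {"Color Blending"},
    },
    "trend across contour lines": {
        "time": {"Parallel Lines", "Pie", "Side-by-Side"},
        "accuracy": {"Pie", "Side-by-Side"},
        "general": {"Parallel Lines", "Side-by-Side", "Pie"},
    },
    "trend along a contour line": {
        "time": {"Color Blending", "Thickness-Shade"},
        "accuracy": ALL,
        "general": ALL,
    },
    "rate of change along a contour line": {
        "time": ALL - {"Pie"},
        "accuracy": ALL,
        "general": ALL,
    },
    "value difference on a contour part": {
        "time": {"Parallel Lines", "Side-by-Side"},
        "accuracy": {"Parallel Lines", "Side-by-Side"},
        "general": {"Parallel Lines", "Side-by-Side", "Pie"},
    },
}

def recommend(domain, environment):
    """Designs recommended for a task domain in a given environment."""
    return sorted(RECOMMENDATIONS[domain][environment])
```

For example, `recommend("trend along a contour line", "time")` returns Color Blending and Thickness-Shade, matching the table's suggestion for quick trend estimation along a line.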
+ +## 8 LIMITATIONS AND FUTURE WORK + +Our experience with the designs indicates some limitations in the research that provide opportunities for additional study. First, since we encode the variables at the pixel level, our current implementation does not scale well with large maps. However, rendering techniques using GPU acceleration could be used to overcome this obstacle. Second, as the number of contour intervals grows, the contour lines may sometimes overlap. Therefore, finding an adaptive choice of contouring thresholds, or allowing users to interactively choose the base attribute, is a valuable avenue for future research. Third, using existing contours as the basis for additional variables is limited by the density of these contours. Further work is needed on how to represent variables in areas with few contours: for example, adaptive sampling could be used to achieve a minimum contour density across the map, or glyph-based techniques could be used to show important changes that occur between contours. Fourth, since we used colors in many of our designs, the interpretability of the designs could depend on the background map color and texture. Therefore, real-world deployments of our technique will benefit from methods that tune color choices to the background map, or from controls that let users choose the opacity of the background map. + +In addition, while the crowdsourced studies were useful in gaining insights about our designs, additional controlled studies in the lab, as well as focus groups with meteorological experts, could provide more information. For example, the use of eye tracking would give more detail on how users interact visually with the different designs. Given the complex interactions among the various factors that we observed, in-depth observation of the use of the designs in realistic tasks could help better understand some of the effects and interactions.
Finally, we plan to apply our designs to different real-world datasets and explore different contour-based tasks and scenarios that can be used in real geospatial settings. + +## 9 CONCLUSION + +Contour plots are widely used, but standard techniques for adding multivariate visualizations onto these plots can clutter the display. To address this problem, we explored how contour lines on a geospatial map can be stylized to encode other attributes in the data. Such a multivariate representation can reduce visual clutter by leveraging the existing contour line space. We designed five types of visual encoding, and examined how contour parameters such as width and the number of contour levels influence task performance. Our crowdsourced study results showed that participants were able to perform several types of multivariate data analysis tasks with reasonable accuracy, which reveals the potential of our approach. + +## REFERENCES + +[1] N. Andrienko and G. Andrienko. Spatial generalization and aggregation of massive movement data. IEEE Transactions on visualization and computer graphics, 17(2):205-219, 2010. + +[2] B. Bach, C. Perin, Q. Ren, and P. Dragicevic. Ways of Visualizing Data on Curves. In Proc. of TransImage, pp. 1-14. Edinburgh, United Kingdom, Apr. 2018. + +[3] R. Borgo, J. Kehrer, D. H. Chung, E. Maguire, R. S. Laramee, H. Hauser, M. Ward, and M. Chen. Glyph-based visualization: Foundations, design guidelines, techniques and applications. In Eurographics (STARs), pp. 39-63, 2013. + +[4] L. A. Bruckner. On Chernoff faces. In Graphical representation of multivariate data, pp. 93-121. Elsevier, 1978. + +[5] T. G. Burton, H. S. Rifai, Z. L. Hildenbrand, D. D. Carlton Jr, B. E. Fontenot, and K. A. Schug. Elucidating hydraulic fracturing impacts on groundwater quality using a regional geospatial statistical modeling approach. Science of the Total Environment, 545:114-126, 2016. + +[6] J. Cao, Y. Yue, K. Zhang, J. Yang, and X. Zhang.
Subsurface channel detection using color blending of seismic attribute volumes. International Journal of Signal Processing, Image Processing and Pattern Recognition, 8(12):157-170, 2015. + +[7] A. Cedilnik and P. Rheingans. Procedural annotation of uncertain information. In Proc. of Visualization, pp. 77-84. IEEE, 2000. + +[8] S. Christophe, B. Duménieu, J. Turbet, C. Hoarau, N. Mellado, J. Ory, H. Loi, A. Masse, B. Arbelot, R. Vergne, et al. Map style formalization: Rendering techniques extension for cartography. In Proc. of Expressive, pp. 59-68. The Eurographics Association, 2016. + +[9] D. H. Chung. High-dimensional glyph-based visualization and interactive techniques. Swansea University (United Kingdom), 2014. + +[10] D. H. S. Chung. High-dimensional glyph-based visualization and interactive techniques. PhD thesis, Swansea University, UK, 2014. + +[11] J. Clark. The Ishihara test for color blindness. American Journal of Physiological Optics, 1924. + +[12] K. Crowston. Amazon Mechanical Turk: A research tool for organizations and information systems scholars. In A. Bhattacherjee and B. Fitzgerald, eds., Shaping the Future of ICT Research. Methods and Approaches, pp. 210-221. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. + +[13] M. J. De Smith, M. F. Goodchild, and P. Longley. Geospatial analysis: a comprehensive guide to principles, techniques and software tools. Troubador Publishing Ltd, 2007. + +[14] D. Dorling, A. Barford, and M. Newman. Worldmapper: the world as you've never seen it before. IEEE transactions on visualization and computer graphics, 12(5):757-764, 2006. + +[15] D. A. Ellsworth, C. E. Henze, and B. C. Nelson. Interactive visualization of high-dimensional petascale ocean data. In 2017 IEEE 7th Symposium on Large Data Analysis and Visualization (LDAV), pp. 36-44. IEEE, 2017. + +[16] G. Fuchs and H. Schumann. Visualizing abstract data on maps. In Proc. Eighth International Conference on Information Visualisation, 2004.
IV 2004, pp. 139-144. IEEE, 2004. + +[17] R. Fuchs and H. Hauser. Visualization of multi-variate scientific data. In Computer Graphics Forum, vol. 28, pp. 1670-1690. Wiley Online Library, 2009. + +[18] J. Görtler, C. Schulz, D. Weiskopf, and O. Deussen. Bubble treemaps for uncertainty visualization. IEEE transactions on visualization and computer graphics, 24(1):719-728, 2017. + +[19] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in psychology, vol. 52, pp. 139-183. Elsevier, 1988. + +[20] C. G. Healey and J. T. Enns. Large datasets at a glance: Combining textures and colors in scientific visualization. IEEE transactions on visualization and computer graphics, 5(2):145-167, 1999. + +[21] C. G. Healey and J. T. Enns. Attention and visual memory in visualization and computer graphics. IEEE Trans. Vis. Comput. Graph., 18(7):1170-1188, 2012. doi: 10.1109/TVCG.2011.127 + +[22] C. Hoarau, S. Christophe, and S. Mustière. Mixing, blending, merging or scrambling topographic maps and orthoimagery in geovisualizations. In International Cartographic Conference, 2013. + +[23] X. Huang, Y. Zhao, C. Ma, J. Yang, X. Ye, and C. Zhang. TrajGraph: A graph-based visual analytics approach to studying urban network centralities using taxi trajectory data. IEEE transactions on visualization and computer graphics, 22(1):160-169, 2015. + +[24] S. Kim, R. Maciejewski, A. Malik, Y. Jang, D. S. Ebert, and T. Isenberg. Bristle maps: A multivariate abstraction technique for geovisualization. IEEE Transactions on Visualization and Computer Graphics, 19(9):1438-1454, 2013. + +[25] A. Leonowicz. Two-variable choropleth maps as a useful tool for visualization of geographical relationship. Geografija, 42:33-37, 2006. + +[26] F. Liu and R. W. Picard. Periodicity, directionality, and randomness: Wold features for image modeling and retrieval.
IEEE transactions on pattern analysis and machine intelligence, 18(7):722-733, 1996. + +[27] L. Lu and H. Guo. Visualization of a digital elevation model. Data Science Journal, 6:481-484, 2007. + +[28] A. M. MacEachren. Visualizing uncertain information. Cartographic perspectives, (13):10-19, 1992. + +[29] L. McNabb and R. S. Laramee. Multivariate maps - a glyph-placement algorithm to support multivariate geospatial visualization. Information, 10(10):302, 2019. + +[30] S. Y. Mhaske and D. Choudhury. Geospatial contour mapping of shear wave velocity for Mumbai city. Natural Hazards, 59(1):317-327, 2011. + +[31] A. Pang. Visualizing uncertainty in geospatial data. In Proc. of the workshop on the intersections between geospatial information and information technology, vol. 10, p. 3823, 2001. + +[32] M. S. Patterson. Multivariate Spatio-temporal Visualization of Socioeconomic Indicators Using Geographic Maps. University of California, Santa Cruz, 2008. + +[33] C. Perin, T. Wun, R. Pusch, and S. Carpendale. Assessing the graphical perception of time and speed on 2D+time trajectories. IEEE transactions on visualization and computer graphics, 24(1):698-708, 2017. + +[34] P. Shanbhag, P. Rheingans, et al. Temporal visualization of planning polygons for efficient partitioning of geo-spatial data. In IEEE Symposium on Information Visualization (INFOVIS), pp. 211-218. IEEE, 2005. + +[35] G. Sharma and R. Bala. Digital color imaging handbook. CRC press, 2017. + +[36] H. H. Shenas and V. Interrante. Compositing color with texture for multi-variate visualization. In Proc. of the 3rd international conference on computer graphics and interactive techniques in Australasia and South East Asia, pp. 443-446, 2005. + +[37] W. Shi, P. Fisher, and M. Goodchild. Spatial Data Quality. CRC Press, 2002. + +[38] H. Tamura, S. Mori, and T. Yamawaki. Textural features corresponding to visual perception. IEEE Transactions on Systems, man, and cybernetics, 8(6):460-473, 1978. + +[39] X.
Tong, C. Li, and H.-W. Shen. Glyphlens: View-dependent occlusion management in the interactive glyph visualization. IEEE transactions on visualization and computer graphics, 23(1):891-900, 2016. + +[40] Z. Wang, M. Lu, X. Yuan, J. Zhang, and H. Van De Wetering. Visual traffic jam analysis based on trajectory data. IEEE transactions on visualization and computer graphics, 19(12):2159-2168, 2013. + +[41] M. O. Ward. Multivariate data glyphs: Principles and practice. In Handbook of data visualization, pp. 179-198. Springer, 2008. + +[42] C. Ware. Information Visualization: Perception for Design. Morgan Kaufmann Publishers Inc., San Francisco, 2nd ed., 2004. + +[43] C. Ware. Information Visualization, Second Edition: Perception for Design (Interactive Technologies). 2004. + +[44] K. Wu and S. Zhang. Visualizing 2d scalar fields with hierarchical topology. In S. Liu, G. Scheuermann, and S. Takahashi, eds., 2015 IEEE Pacific Visualization Symposium, PacificVis 2015, Hangzhou, China, April 14-17, 2015, pp. 141-145. IEEE Computer Society, 2015. + +[45] J. Zhang, A. Malik, B. Ahlbrand, N. Elmqvist, R. Maciejewski, and D. S. Ebert. Topogroups: Context-preserving visual illustration of multi-scale spatial aggregates. In Proc. of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2940-2951, 2017. + +[46] J. Zhang, C. Surakitbanharn, N. Elmqvist, R. Maciejewski, Z. Qian, and D. S. Ebert. Topotext: Context-preserving text data exploration across multiple spatial scales. In Proc. of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2018.
Contour Line Stylization to Visualize Multivariate Information + +Category: n/a + +Figure 1: (left) A geographic map, and the contour plots of four climatic parameters A (albedo), B (soil moisture), C (pressure), and D (temperature) on a part of the map. (right) Four of our five designs that encode B, C, and D along the contour lines of A. + +§ ABSTRACT + +Contour plots are widely used in geospatial data visualization as they provide natural interpretation of information across spatial scales. To compare a geospatial attribute against others, contour plots for the base attribute (e.g., elevation) are often overlaid, blended, or examined side by side with other attributes (e.g., temperature or pressure). Such visual inspection is challenging since overlay and color blending both clutter the visualization, and a side-by-side arrangement requires users to mentally integrate the information from different plots. Therefore, these approaches become less efficient as the number of attributes grows.
+ +In this paper, we examine the fundamental question of whether the base contour lines, which are already present in the map space, can be leveraged to visualize how other attributes relate to the base attribute. We present five different designs for stylizing contour lines, and investigate their interpretability using three crowdsourced studies. Our first two studies examined how contour width and the number of contour intervals affect interpretability, using synthetic datasets where we controlled the underlying data distribution. We then compared the designs in a third study that used both synthetic and real-world meteorological data. Our studies show the effectiveness of stylizing contour lines to enrich the understanding of how different attributes relate to the reference contour plot, reveal trade-offs among design parameters, and provide designers with important insights into the factors that influence interpretability. + +Index Terms: Human-centered computing-Visualization-Visualization techniques; Human-centered computing-Visualization-Visualization design and evaluation methods + +§ 1 INTRODUCTION + +Contour plots are widely used to visualize geospatial information on two-dimensional maps. Contour lines and contour intervals are two important features of a contour plot. A contour line (isoline) represents a fixed threshold value and connects map points having that value. A contour interval corresponds to a range of values within the bounds indicated by two successive threshold values. + +The simplicity and rich information of contour plots make them a popular choice for infographic posters and for geospatial data analysis [5, 13, 30]. Contour lines provide us with a potentially useful visualization resource - a set of points that are already on the map.
We can leverage these points to show other data attributes along the contour line, which can provide insights into how other geospatial attributes relate to the base attribute. To the best of our knowledge, the effectiveness of contour line stylization, and the limits of human perception in interpreting it, are not well understood. + +In this paper, we examine how to stylize contour lines to provide useful additional information to the viewer (Figure 1). We do not tie ourselves to any domain-specific application, but rather attempt to improve our understanding of various facets of contour line stylization. The contour plots may result from geospatial datasets, mathematical surfaces, or even scatterplot densities. However, there exist several motivating scenarios (e.g., analyzing historical change in contour lines or understanding correlation) where contour stylization may be useful. Figure 2 shows such a motivating example based on front prediction in meteorological analysis. The development of a front depends on several factors such as temperature, moisture, wind direction, and pressure. Figure 2 (left) shows front prediction from the National Oceanic and Atmospheric Administration (NOAA) Weather Prediction Center (WPC) archive, where the curved lines (red, blue, or mixed) correspond to various types of fronts (warm, cold, or stationary fronts, respectively). Note that such fronts can be derived using software or by painstaking inspection of the numbers plotted on the map representing various weather parameters. Figure 2 (right) shows our contour stylization for four weather variables (pressure, temperature, relative humidity, and precipitable water). The contour lines represent isobars (pressure); temperature, relative humidity, and precipitable water are encoded in red, blue, and white lines, respectively.
Contour line stylization can readily reveal some potential cold fronts (yellow curves 1, 3, and 5) and warm fronts (curves 2 and 4), which shows the potential of using contour line stylization alongside traditional visualizations. + +Multivariate visualizations that encode data attributes into different preattentive perceptual features of a visual element (glyph) [3, 34, 41], such as size, shape, color, and texture, are typical ways to visualize geospatial information on a map. A well-known limitation of a glyph-based visualization is that it clutters the map [10]. While a dense overlay occludes the view of the base map (Figure 3 (left)), a sparse overlay compromises perception of geospatial connectedness and lacks the gradient information that naturally comes from a contour plot (Figure 3 (right)). + +Our Contribution: We consider geospatial data with four attributes - A, B, C, and D - and encode B, C, and D along the contour lines of A. We design five visual encodings and investigate whether users can interpret the attribute values (high, low), trends (increasing or decreasing), and relationships (similar or opposite trends) along a contour line, or across a set of contour lines. Since the encoding position of B, C, and D is determined by A's contour line, users may want to vary the number of contouring thresholds for A, or use a different base contour plot. Therefore, we describe how to design a synthetic dataset to examine the influence of various design parameters through controlled experiments. + +Figure 2: (left) Front detection by NOAA WPC. (right) Detection using one of our five visualization techniques. + +Figure 3: Multivariate visualization with (left) glyphs occludes the map, and (right) grid stylization lacks the gradient information.
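The contribution described above hinges on one sampling step: looking up the values of B, C, and D at points along A's contour line. A minimal sketch of that step (the grid, the contour points, and the choice of bilinear interpolation are all our own illustrative assumptions, not the paper's implementation):

```python
# Sketch of sampling a second attribute (B) at points along a base
# contour polyline of attribute A. Grid and points are invented.

def bilinear(grid, x, y):
    """Interpolate grid[row][col] at fractional position (x = col, y = row)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def sample_along_contour(grid, contour_points):
    """Attribute values at the (x, y) points of a base contour polyline."""
    return [bilinear(grid, x, y) for x, y in contour_points]

# A tiny invented grid for attribute B, sampled where a contour of A passes.
attribute_b = [
    [10.0, 12.0, 14.0],
    [11.0, 13.0, 15.0],
    [12.0, 14.0, 16.0],
]
```

Each sampled value can then drive a visual channel (color, thickness, or offset) at that point of the stylized contour.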
We conducted three crowdsourced studies to evaluate our designs. The first two studies reveal how contour width and the number of contour intervals influence the visual interpretability of our designs. The third study used both synthetic and real-world meteorological datasets to assess how the designs and datasets compared in terms of task completion time and accuracy, for common geospatial data analysis tasks. In addition to revealing insights into our designs, our experimental results also suggest that results obtained using synthetic datasets generalize to real-world datasets.

§ 2 RELATED WORK

§ 2.1 MULTIVARIATE VISUALIZATION ON A MAP

Geospatial data are often shown using choropleth maps [25, 28] and contour plots [27]. While choropleth maps and cartograms [14] reveal properties of a region, a contour plot helps the viewer understand the data distribution on a map and find regions with similar properties. Data analysts often use color blending to find probable correlations between two geospatial variables [15]. However, creating a high-quality bivariate choropleth or contour map requires careful choice of blending colors and textures [24].

Researchers have also attempted to construct trivariate choropleth maps using the CMY color model [6, 35]. Wu and Zhang [44] examined a 4-variate map that captures the contour band information for each variable in thin visual ribbons, and then overlays the ribbons for all four variables using four different colors. Overlaying glyphs [29, 39] or charts [4, 16] on a map is a popular way to visualize geospatial information. Glyphs are often designed to encode data into features that can be perceived through preattentive visual channels [43]. A rich body of visualization design research examines how humans perceive various combinations of geometric, optical, relational, and semantic channels.
We refer readers to recent surveys [9, 17, 41] for a detailed review of glyph design. Glyph-based visualizations often require a careful glyph positioning technique [29, 44], as creating glyphs for many data points on a map causes overlap.

Various texture metrics such as contrast, coarseness, periodicity, and directionality [26, 38] have been used to visualize multivariate data. Healey and Enns [20] introduced pexels, which encode multidimensional datasets into multi-colored perceptual textures with height, density, and regularity properties. Shenas and Interrante [36] showed that color and texture can be combined to meaningfully convey multivariate information with four or more variables on a choropleth map.

§ 2.2 STYLIZATION OF LINES AND BOUNDARIES

Stylized lines naturally appear in the visualization of trajectory data. For example, traffic flow data are often color-coded on road networks as heatmaps [23, 40]. Andrienko et al. [1] extracted characteristic points from car trajectories and aggregated them to create flows between cellular areas, revealing movement patterns in a city; they used stylization to depict various information about the aggregated flows. Huang et al. [23] modeled taxi trajectories using a graph, stylized the streets based on node centrality, and overlaid rose charts to visualize other traffic information. Perin et al. [2, 33] investigated combinations of thickness, a monochromatic color scheme, and tick-mark frequency to encode time and speed on a two-dimensional line. They observed that encoding speed with a color scheme and time with one of the other two features improved user perception.

Geographic cluster visualization and map generation techniques have also considered line stylization. Christophe et al.
[8] proposed a pipeline for generating artistic and cartographic maps that integrates linear stylization, patch-based region filling, and vector texture generation. Kim et al. [24] created Bristle Maps, which place bristles perpendicular to the linear elements (streets, subway lines) of a map and encode multivariate information into the length, density, color, orientation, and transparency of the bristles. Zhang et al. [45] introduced TopoGroups, which aggregates spatial data into hierarchical clusters and shows information about geographic clusters on the cluster boundaries. Although TopoGroups summarizes cluster information along the boundary, Zhang et al. noted that users may mistakenly read the visualization as representing local statistics near the boundary. In subsequent work, Zhang et al. [46] proposed TopoText, which replaces the boundaries with oriented text.

Visual encoding of lines and boundaries has been widely used in visualizing data uncertainty [7, 18]. Cedilnik and Rheingans [7] overlaid a regular grid on the map and then stylized the grid edges using blur, jitter, and wave. Data uncertainty has also been mapped to contour lines, where uncertainty is mapped to line color, thickness, and dash frequency [31]. Line stylization has also been used in cartograms. Görtler et al. [18] proposed bubble treemaps, which represent uncertainty using wavy circle boundaries, varying wave frequency and amplitude based on the uncertainty. Patterson and Lodha [32] encoded five socio-economic variables simultaneously on a world map using country fill color, glyph fill color, glyph size, country boundary color, and cartogram distortion.

§ 3 VISUAL ENCODING

In this section we describe five contour-based designs (Figure 4) for encoding geospatial information with four attributes: A, B, C, and D. We assume that all the attributes are numeric and positive.
We create a set of contour lines using attribute A, and then encode attributes B, C, and D along the contour lines of A using visual features.

Figure 4: Encoding multivariate information using Parallel Lines, Color Blending, Pie, Thickness-Shade, and Side-by-Side.

Rationale: For encoding, we used visual features that are preattentive [42] and intuitive to interpret, or that have been used in prior research [33], e.g., line thickness, monochromatic color schemes, and pie slices. Most of our designs are based on the notion of channel separability [21], but we also kept color blending because it has often been used in the context of correlation analysis for geospatial data [15, 22, 37].

Design 1 (Parallel Lines): This design maps B, C, and D to three lines with distinct colors. The lines for B and C lie on opposite sides of the contour line of A, and the line for D follows the contour line of A. Each attribute value is linearly mapped to a line width in the range $[0, w]$. If D's value is 0, the base contour line of A becomes visible.

Design 2 (Color Blending): This design encodes B and C with distinct colors and blends them on the contour line of A. Attribute D is mapped to the width of the contour line. Note that since the contour line of A has a non-zero width $u$, the values of D are mapped to the line-width range $[u, w]$. Consequently, B and C remain visible even when D is 0.

Design 3 (Pie): This design encodes B, C, and D using pie slices of distinct colors, and puts them together to create a pie icon.
The only difference from a pie chart is that the slices for B, C, and D need not sum to the full pie area. The pie icons are placed successively along the contour line of A. The pie slices for B, C, and D start at 0°, 120°, and 240° (taking the top as 0°), and each can grow clockwise to cover an angle of up to 120°. An attribute value is encoded in the angle covered by the corresponding pie slice.

Design 4 (Thickness-Shade): This design represents B and D using two distinct lines. The lines for B and D lie on opposite sides of the contour line of A, with values encoded using line width. The values of C are encoded using a monochromatic color scheme applied to B's line: a low C value corresponds to a lighter shade, and a high value to a darker shade. The minimum line width for B is set to a positive threshold $u$, giving a range of $[u, w]$, so that C remains visible even when B is 0.

Design 5 (Side-by-Side): This design shows B, C, and D in separate side-by-side views. Each of B, C, and D is encoded using a distinct monochromatic color scheme applied to the contour lines of A. We set the width and height of each Side-by-Side view to $\lceil \sqrt{A/3} \rceil$, where $A$ is the total pixel area of any other design, assuming a square display.

§ 4 IMPLEMENTATION DETAILS AND DATASETS

The choice of contouring thresholds is application-specific, but in our controlled experiments we used $k$-quantiles as the thresholds. This allows us to reduce visual clutter and to examine the designs across a large number of contouring thresholds. We first computed the contour lines for A, and then further processed these polylines by dividing long line segments uniformly to create fine-grained polygonal chains.
We then encoded the attributes by interpolating the values at the endpoints of these tiny segments. Figure 5 illustrates the parameters used for the designs. Here $b$, $c$, and $d$ denote the normalized B, C, and D values, respectively, and $t$ is a thickness factor used to linearly map the attribute values to the input line-thickness range. The number of discrete shades in the perceptual color scale depends on the number of contour intervals. For blending, we used the CSS mix-blend-mode property; the Multiply scheme was chosen in a pilot study comparing three candidate schemes: Multiply, Darken, and Difference.

Synthetic Data: For each of the four attributes, we created scatterplots consisting of four Gaussian clusters positioned randomly in the four quadrants. Each cluster had 40,000 samples, with randomly varying covariance (2.5-7.5), 2 features (x and y coordinates), and 4/6/8 classes (for 4, 6, and 8 contour intervals). The clusters were then interpolated and reshaped such that the point density plot for each cluster takes the shape of a peak or valley.

All the clusters of A, B, C, and D in a quadrant overlapped one another, creating various peak-valley combinations. This also allowed us to obtain scenarios where an attribute value increases or decreases across successive contour lines of A. To visualize all possible trends (increasing or decreasing) of B, C, and D, we used all possible peak-valley combinations for these attributes. To imitate real-world topographic map patterns, we varied the cluster overlaps for A.

Real-World Meteorological Data: To test the designs on real-world data, we used real meteorological datasets${}^{1}$. For our study, we extracted 4 attributes from the dataset: Temperature, Pressure, Soil Moisture, and Albedo, from different geolocations.
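The geometric parameters described above (the $k$-quantile thresholds, the $[u, w]$ width mapping, the 120° pie slices, the Side-by-Side view size, and Multiply blending) can be sketched as follows. This is a minimal illustration with hypothetical function names, not the authors' implementation, which renders and blends with CSS.

```python
import math
import numpy as np

def quantile_thresholds(a_values, k):
    """k-quantile contouring thresholds for attribute A
    (k intervals need k-1 interior thresholds)."""
    qs = np.linspace(0, 1, k + 1)[1:-1]
    return np.quantile(a_values, qs)

def line_width(value, vmin, vmax, u, w):
    """Linearly map an attribute value to a stroke width in [u, w]
    (u = 0 for Design 1; u > 0 keeps the line visible in Designs 2 and 4)."""
    t = (value - vmin) / (vmax - vmin)
    return u + t * (w - u)

def pie_angles(b, c, d):
    """Design 3: B, C, D slices start at 0, 120, 240 degrees from the top
    and each grows clockwise by up to 120 degrees. Inputs in [0, 1]."""
    return [(start, start + 120 * v)
            for start, v in zip((0, 120, 240), (b, c, d))]

def side_by_side_dim(total_area):
    """Side length of each of the three square Side-by-Side views so that
    together they match the pixel area of a single-view design."""
    return math.ceil(math.sqrt(total_area / 3))

def multiply_blend(rgb1, rgb2):
    """Per-channel Multiply blend (the CSS mix-blend-mode scheme chosen
    in the pilot study), with channels normalized to [0, 1]."""
    return tuple(x * y for x, y in zip(rgb1, rgb2))
```

For example, with $u = 2$ and $w = 10$, a mid-range value maps to width 6, and a full-valued pie slice spans its entire 120° sector.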
Evaluation: A careful choice of design parameters is important for achieving optimal readability. Two major factors that can influence the designs are width (the space allocated along the base contour line for encoding B, C, and D) and the number of contouring thresholds for A. Therefore, we first conducted two studies to examine these factors and choose appropriate parameter values for the designs. In the final study, we evaluated the designs based on viewers' task performance. The first two studies were conducted on synthetic data; the final study included both synthetic and real-world data.

§ 5 STUDY 1 (CONTOUR WIDTH)

Our first study investigated the effect of contour width on design interpretability, as well as on tasks that use the underlying map.

Intuitively, increasing contour width should make the encoded variables easier to see and interpret, but it also increases occlusion of the map; in addition, wide contours may overlap each other, depending on the density and shape of the contours. To investigate this trade-off, we set the number of contour intervals to 8 (i.e., 7 contouring thresholds) and then determined a range of widths to explore for each design. We used 8 contour intervals because this allows designers a reasonable spectrum of design choices and gives us a reasonable range to investigate in Study 2 (described below).

Table 1 shows the width ranges used in Study 1. We chose minimum and maximum widths for each design based on informal testing with each design's encoding. The minimum width is determined by the number of pixels required to create the design and make the variation in the attributes noticeable. For example, Parallel Lines requires 9 pixels to encode the variation (low, mid, high) for each of the three attributes. The maximum width corresponds to the case where successive linear elements are about to overlap.
${}^{1}$ anonymized

Figure 5: Illustration of the implementation details for the different designs.

Table 1: Width alternatives (in pixels) for Study 1

| Design | Minimum | Median | Maximum |
| --- | --- | --- | --- |
| Parallel Lines | 9 | - | 12 |
| Color Blending | 4 | 7 | 10 |
| Pie | 8 | - | 10 |
| Thickness-Shade | 8 | - | 10 |
| Side-by-Side | 1 | 2 | 4 |

We also selected a median width when there was enough difference between the minimum and maximum that the median would be at least two pixel units from the extremes; hence only Color Blending and Side-by-Side have a median width (e.g., for Parallel Lines, the encoding for B, C, and D needs to be the same, so the next possible width choice after 9 is 12).

§ 5.1 S1: PARTICIPANTS, DATA, AND TASKS

We ran a crowdsourced study on Amazon Mechanical Turk (AMT) [12]. To be eligible, a participant needed to pass a color perception test (Ishihara test [11]), run the study on a desktop computer, reside in North America, and have at least an 80% approval rate on AMT. We recorded 63 complete responses (32 male, 31 female, median age range 30-39).

All experimental tasks were created using synthetic data. We tested the 5 designs described earlier, with the width choices determined for each design (12 in total; see Table 1). Participants completed 57 tasks (4 tasks for each width that involved interpreting the variables encoded in the design, plus one additional task for three of the widths that involved reading the background map).

Rationale for Tasks: We chose tasks general enough to apply in a variety of scenarios, informed by the use cases illustrated in Section 1. We deliberately designed 3-variable tasks because our knowledge of line stylization is most limited in this case.
In our four interpretation tasks (Table 2), one involved identifying values, two involved looking for trends, and one involved comparing trends (e.g., Figure 2). For tasks 1 and 3, we marked four contour sections on the design (Figure 6 (left)), and participants selected the option that matched the requested combination or trend. For tasks 2 and 4, we drew four lines across the contours, and participants selected the option that best identified a specific trend (e.g., Figure 6 (right)).

The background map reading task used the Color Blending design with 3 of the widths (4, 8, and 12). In this task, icons were placed on the underlying map, and participants were asked to count the icons and select an answer from 4 options. We chose Color Blending as the representative design for the background task because it has three width choices that are substantially different from each other. The icons were 8 × 8 pixels. The number of icons ranged between 18 and 22, and the icons were placed randomly on the map; in some cases they were therefore partially hidden by the contour lines.

Table 2: Tasks with domains for Study 1

| ID | Task | Domain |
| --- | --- | --- |
| 1 | Select the marked contour region that best represents the following combination: high B, low C, and high D | Compare different marked contour regions |
| 2 | Consider the contour regions intersected by the lines. Select the directed line that best represents the following trend: B and D both increase, and C decreases | Interpret trends across contour lines |
| 3 | Select the marked contour region that, when moving clockwise, best represents the following trend: B decreases, D increases, and C stays the same | Interpret trends along a contour line |
| 4 | Count the number of lines that show the following: B and C have the opposite trend to D | Identify similar/opposite trends across contour lines |

Figure 6: Contour-region task (left); trend task (right)

S1 Hypotheses: We hypothesized that as width increases, accuracy will increase and completion time will decrease ($h_1$). We also hypothesized that the influence of width will be more noticeable for Parallel Lines, Pie, and Side-by-Side than for the other designs ($h_2$), because Color Blending and Thickness-Shade could be more difficult for inexperienced users to interpret. Finally, we hypothesized that for the background task, increasing contour width will lead to lower accuracy and higher completion time ($h_3$).

§ 5.2 S1: PROCEDURE

Participants completed an informed consent form, were shown a description of the designs, and completed a set of practice tasks. After each practice task, participants were told whether their response was correct and were given an explanation of the correct answer with a brief justification. Participants then completed the 57 tasks described above. After each design, participants completed a NASA-TLX-style effort questionnaire [19], and at the end of the study they rated their familiarity with visualization interfaces and their preferences for each of the widths. Participants were asked to complete the tasks as quickly and accurately as possible.
Each task started with a 'start' button and ended when the participant selected one of the multiple-choice answer options and pressed 'next'. Before starting a new task, participants were shown a reminder that they could rest before continuing.

The study used a within-participants design, with contour width as the independent variable (considered separately for each design); the dependent variables were accuracy, completion time, and subjective effort scores. Designs and tasks were presented in random order (sampling without replacement).

§ 5.3 S1: RESULTS

We applied additional filters to test whether participants were legitimately attempting the tasks (e.g., checking for inconsistent answers and large time gaps in their surveys). After filtering, we had 44 participants (20 male, 24 female) with a median age range of 31-39.

S1: Interpretation tasks: For the four interpretation tasks involving the B, C, and D attributes, data were analyzed using repeated-measures ANOVAs for each design (because each design used a different set of widths); Bonferroni-corrected paired t-tests were used for follow-up comparisons. Figure 7 (left) shows that accuracy increased slightly as contour width increased, and Figure 7 (middle) shows that completion times decreased overall as width increased.

We found significant effects of width on accuracy for the Parallel Lines design, and on completion time for Color Blending, Pie, and Side-by-Side. No effect of width was found for Thickness-Shade. For Parallel Lines (widths 9 and 12), we found a significant effect of width on accuracy ($F_{1,43} = 5.4$, $p < .05$), with width 12 having higher accuracy than width 9. For Color Blending (widths 4, 7, 10), we found a significant effect of width on completion time ($F_{2,86} = 8.09$, $p < .05$; Figure 7 (middle)). Post-hoc t-tests showed that width 10 was faster than both width 7 and width 4 (all $p < .05$).
For Pie (widths 8 and 10), we found a significant effect of width on completion time ($F_{1,43} = 9.46$, $p < .05$); width 10 was faster than width 8. For Side-by-Side (widths 1, 2, 4), we found a significant effect of width on completion time ($F_{2,86} = 13.46$, $p < .05$). Post-hoc t-tests showed widths 2 and 4 to be faster than width 1 ($p < .05$).

S1: Background Task: The background icon-counting task used the Color Blending design with widths 4, 8, and 12. We found a significant effect of width on accuracy ($F_{2,86} = 121.76$, $p < .05$). Post-hoc t-tests showed significant differences among all 3 widths ($p < .05$). As shown in Figure 7 (right), the mean accuracy for width 4 was higher than for width 8, which in turn was higher than for width 12. There was no effect of width on completion time.

S1: Effort and Preference: We asked participants to rate their mental effort, overall effort, frustration, and perceived success with each design. Friedman tests showed significant differences on all questions (all $p < .005$), with Parallel Lines and Side-by-Side rated better than the other designs. The width preference question revealed higher user preference for the maximum (50% of participants) and median (41%) widths.

§ 5.4 S1: DISCUSSION

Increased contour widths for interpretation tasks led to improved completion time for three of the designs and improved accuracy for one design, partially supporting hypothesis $h_1$. We did not find any significant effect of width for Thickness-Shade, which partially supports our hypothesis $h_2$ that the effects of width would be more obvious for some designs. Our results for the background task (an effect of width on accuracy, but not on completion time) partially support hypothesis $h_3$.
Overall, the fact that there was only a minor effect of reduced width on interpretability (particularly for accuracy) means that width can often be safely reduced in scenarios where the visibility of the background is critical.

Based on these findings, we chose to use the maximum width for each design in the subsequent studies. In the following section, we explore the influence of contour intervals, another important element of a contour plot.

§ 6 STUDY 2 (CONTOUR INTERVALS)

A higher number of contour intervals increases both the number of visual elements in the design and the degree of background occlusion. Increasing the number of contours, however, also provides more data points for the other variables visualized on the contour, and so may increase the interpretability of these variables. This study explores this trade-off using a study design similar to that used above.

§ 6.1 S2: PARTICIPANTS, DATA, AND TASKS

We ran the study on Amazon Mechanical Turk with the same eligibility criteria as in Study 1. We recorded 68 complete responses (40 male, 26 female, 1 non-binary, 1 preferred not to answer), aged 21-60 (median age range 21-29). None of the participants took part in Study 1. The study used the 5 designs described above, each with 3 contour-interval alternatives (4, 6, or 8 intervals). Participants completed 60 tasks: the same 4 interpretation tasks from Study 1 for each combination of design and contour interval, plus the background icon-counting task.

For analyzing the interpretation tasks, the study used a within-participants design with three factors: Design (the five designs described above), Task (the four interpretation tasks from Study 1), and Number of Intervals (4, 6, or 8). The main dependent measures were accuracy and completion time; we also collected subjective effort and preference scores.
S2 Hypotheses: We hypothesized that more contour intervals would result in better performance on tasks that require analysis across contour lines ($h_4$). For the background task, we hypothesized that more contour intervals would lead to lower accuracy and higher completion time ($h_5$), due to increased occlusion.

§ 6.2 S2: PROCEDURE

As in Study 1, participants went through the eligibility tests, design demonstration, and practice tasks. They then completed the main study tasks and filled out the TLX-style effort surveys and overall preference questions. The data and tasks were the same as in Study 1, and designs and tasks were presented in random order.

§ 6.3 S2: RESULTS

After filtering participants based on response consistency, we had 46 participants (28 male, 17 female, 1 non-binary) with a median age range of 31-39.

S2: Interpretation tasks: We carried out $5 \times 4 \times 3$ RM-ANOVAs (Design × Task × Number of Intervals) for both accuracy and completion time, with Bonferroni-corrected t-tests as follow-up. There was a significant main effect of Intervals ($F_{2,90} = 3.69$, $p < .05$) on accuracy. Post-hoc tests showed that 4 and 6 contour intervals had higher accuracy than 8 intervals (see Figure 8). There was also an interaction between Design and Task ($F_{12,540} = 2.09$, $p < .05$). As can be seen in Figure 9, the Pie design had higher accuracy on Tasks 2 and 3 compared to the other designs. There was no main effect of Design on accuracy ($F_{4,180} = 1.58$, $p = 0.18$). For completion time, there were no main effects of Design or Intervals ($p > .05$), but there was an interaction between Intervals and Task ($F_{6,270} = 3.21$, $p < .05$).
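The repeated-measures analyses used throughout the studies are multi-factor, but the core computation can be illustrated with a one-way case. The sketch below is our own simplified illustration, not the authors' analysis code: it computes the F statistic for an n-participants × k-conditions matrix, using the subject-by-condition interaction as the error term.

```python
import numpy as np

def rm_anova_1way(scores):
    """One-way repeated-measures ANOVA.

    scores: (n participants x k conditions) matrix of a dependent
    measure such as accuracy or completion time. Returns the F
    statistic and its degrees of freedom (k-1, (n-1)(k-1))."""
    X = np.asarray(scores, dtype=float)
    n, k = X.shape
    grand = X.mean()
    ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()    # between conditions
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()    # between participants
    ss_err = ((X - grand) ** 2).sum() - ss_cond - ss_subj  # residual interaction
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    return F, df_cond, df_err
```

Removing the between-participant sum of squares from the error term is what distinguishes this from an independent-samples ANOVA and gives the within-participants design its statistical power.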
S2: Background Task: We found no main effects of the number of contour intervals on either completion time or accuracy for the icon-counting task.

S2: Subjective Effort and Preferences: Participants rated mental effort, overall effort, frustration, and perceived success with each design. Responses were similar across all designs, and Friedman tests showed a significant difference only for overall effort ($p < .05$), with the Pie design seen as requiring more effort than the others. We also asked participants about their interval preference: 4 intervals (36%) and 6 intervals (39%) were preferred over 8 (25%).

Figure 7: Study 1: (left and middle) Performance of the designs at the different width choices for Tasks 1-4. Lines connect points to group the designs, not to denote continuity of width. (right) Accuracy and completion time for the icon-counting task.

Figure 8: Study 2: Task performance with different contour intervals.

Figure 9: Study 2: Task accuracy for the five designs.

§ 6.4 S2: DISCUSSION

Overall, 4 and 6 intervals performed better than 8. One main reason for this result is that fewer contour intervals result in less visual clutter. Although there was a significant effect of contour intervals on accuracy only for Task 3, there were significant interactions between contour intervals and task for completion time, which partially supports $h_4$. We observed that higher numbers of contour intervals slightly reduced mean accuracy for Tasks 1 and 3, where users may have found it difficult to follow a line with other parallel lines in close proximity.

The interactions show that task performance depends on the combination of design, contour intervals, and task. The significant interaction (for accuracy) between contour intervals and tasks suggests that the association between contour intervals and designs depends on the task.
Similarly, for completion time, the association between design and contour intervals depends on the task.

In the next study, we focus on how performance varies by design on both synthetic and real-world datasets. We used 4 contour intervals for all the designs.

§ 7 STUDY 3 (DESIGN COMPARISON)

§ 7.1 S3: PARTICIPANTS, DATA, AND TASKS

We ran the study on Amazon Mechanical Turk with the same eligibility criteria as in Study 1. We recorded 78 complete responses (41 male, 33 female, 2 non-binary, 1 preferred not to answer), with a median age range of 30-39. None of the participants took part in Study 1 or 2.

We used two datasets for this study (one synthetic and one from a real-world scenario). Participants completed 6 different tasks for each design and dataset combination, resulting in 60 tasks. Of the 6 tasks, 4 were similar to those in Studies 1 and 2. We added 2 tasks (Table 3) to examine whether users are able to interpret the extent of value changes of a single attribute (Task 5) and to estimate the difference between two attributes (Task 6).

Table 3: Tasks with domains for Study 3

| ID | Task | Domain |
| --- | --- | --- |
| 5 | Select the marked contour region that has the maximum change in (a given attribute) | Estimate value changes along a contour line |
| 6 | Select the marked contour region that has the minimum difference between (a given pair of attributes) | Estimate value difference along a contour line |

As in Study 1, participants went through the eligibility tests, design demonstrations, and practice tasks. They then completed the main study, the effort survey, and the preference questions (in this study, participants stated their preference for one of the designs after each task, as well as overall).

§ 7.2 S3: PROCEDURE

The study used a within-participants design with three factors: Design (the five designs described above), Task (the six interpretation tasks), and Dataset (Synthetic or Real-world).
The main dependent measures were accuracy and completion time; we also collected subjective effort and preference scores. Designs and tasks were presented in random order (sampling without replacement).

§ 7.3 S3: RESULTS

After filtering based on response consistency, we had 54 participants (30 male, 22 female, 2 preferred not to answer) with a median age range of 31-39; 45 participants reported familiarity with data visualization interfaces.

Figure 10: Study 3: Overall performance of the different designs.

We carried out $5 \times 6 \times 2$ RM-ANOVAs (Design × Task × Dataset) for both accuracy and completion time, with Bonferroni-corrected t-tests as follow-up. There was a significant main effect of Design ($F_{4,212} = 19.2$, $p < .001$) on accuracy. Post-hoc t-tests showed that Parallel Lines, Pie, and Side-by-Side were significantly more accurate than Color Blending and Thickness-Shade (Figure 10). There was also a main effect of Dataset ($F_{1,53} = 13.8$, $p < .001$): participants were more accurate with the real-world dataset (66%) than with the synthetic dataset (59%).

There were also significant interactions between Design and the other factors. First, there was a Design × Dataset interaction ($F_{4,212} = 10.7$, $p < .001$). As shown in Figure 11, the Color Blending design had substantially lower accuracy on the synthetic data compared to the other designs, and only Parallel Lines was equally accurate with both datasets. Second, there was a Design × Task interaction ($F_{20,1060} = 5.01$, $p < .001$). Figure 11 shows substantial differences across tasks depending on the design: for example, the accuracy of Color Blending was substantially lower on Tasks 1 and 6, and the accuracy of Thickness-Shade was lower on Task 6.
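The Bonferroni-corrected follow-up comparisons used in all three studies can be sketched as below. The helper name is ours, and the per-participant numbers are illustrative only; they do not reproduce the paper's data.

```python
from itertools import combinations
from scipy import stats

def bonferroni_paired_tests(samples):
    """All pairwise paired t-tests over named conditions. The Bonferroni
    correction multiplies each raw p-value by the number of comparisons
    (capped at 1.0)."""
    pairs = list(combinations(samples, 2))
    m = len(pairs)
    return {
        (a, b): (t, min(p * m, 1.0))
        for a, b in pairs
        for t, p in [stats.ttest_rel(samples[a], samples[b])]
    }

# Illustrative per-participant accuracies (fraction correct) for three
# of the designs; made-up numbers for the example only.
accuracy = {
    "Parallel Lines": [0.90, 0.80, 0.85, 0.95, 0.90, 0.80],
    "Color Blending": [0.50, 0.55, 0.60, 0.45, 0.50, 0.60],
    "Side-by-Side":   [0.85, 0.90, 0.80, 0.90, 0.95, 0.85],
}
results = bonferroni_paired_tests(accuracy)
```

Because every participant completes every condition in a within-participants design, the paired test on per-participant differences is the appropriate follow-up to the repeated-measures omnibus test.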
For completion time, there was no main effect of Design ($p > .05$), but there were interactions between Design and Dataset ($F_{4,212} = 2.91$, $p < .005$) and between Design and Task ($F_{20,1060} = 2.17$, $p < .001$).

Table 4: Study 3: Design Preference Survey

| Design | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Overall |
|--------|--------|--------|--------|--------|--------|--------|---------|
| Parallel Lines | 2.28 | 2.5 | 2.48 | 2.37 | 2.57 | 2.74 | 2.85 |
| Color Blending | 1.85 | 1.83 | 1.87 | 1.57 | 1.76 | 1.93 | 1.93 |
| Pie | 1.78 | 1.87 | 2.24 | 1.72 | 2.06 | 2.02 | 2.15 |
| Thickness-Shade | 2.04 | 2.09 | 2.09 | 1.87 | 2.23 | 2.03 | 2.17 |
| Side-by-Side | 2.69 | 2.74 | 2.85 | 2.37 | 2.69 | 2.59 | 2.93 |

S3: Subjective Effort and Preferences: We again asked participants to rate mental effort, overall effort, frustration, and perceived success with each design. Friedman tests showed significant differences for all questions ($p < .05$), with Parallel Lines and Side-by-Side scoring better than the other designs. We also asked users to rate the designs on a 0-4 scale (0: 'not preferred' to 4: 'highly preferred'). Participants rated the designs for each task as well as their overall preference. Mean participant scores are presented in Table 4 (higher values are better). The table highlights the top scores for each task and the top two designs for overall preference (Parallel Lines and Side-by-Side were the most-preferred designs).

§ 7.4 OVERALL DISCUSSION

The main finding of the third study is that all of the designs were successful for at least some of the tasks, and that the designs were similar in their performance, with the exception of Color Blending, which showed reduced accuracy compared to the other designs for Tasks 1 and 6.
The study also clearly showed that integrating multiple variables into a single contour line results in visualizations that users can interpret successfully, as successfully as separate individual presentations (Side-by-Side). This is an important result for situations where designers need to provide a single larger view rather than divide the available space into pieces, as is needed for a side-by-side presentation.

Overall, two designs (Parallel Lines and Side-by-Side) performed best and had high preference scores. Both designs have a separate encoding space for each variable; in addition, the encoding for different attributes was similar, and both are intuitive to read without close inspection of the legend. Side-by-Side had high preference scores for all tasks except Task 6: in this task, a higher mean preference for Parallel Lines is likely due to its symmetric encoding for all the attributes, which makes the value difference easier to estimate, whereas for Side-by-Side users need to compute the difference by inspecting two separate views.

Interestingly, however, both the Parallel Lines and Side-by-Side designs have space constraints (Parallel Lines in terms of contour width, and Side-by-Side in terms of display space). For scenarios where a single view is required and the visibility of the background is important, neither of these designs may be feasible. In these cases, the Pie design appears to be a reasonable compromise because it had good accuracy and takes less space.

The Color Blending and Thickness-Shade designs both had poor performance on at least one task. This could be due to participants' unfamiliarity with the encodings, but the visual variables used in these designs may also be more difficult to interpret overall. In addition, the encoding in these two designs was not symmetric, in contrast to the other designs.
Problems in Task 1 (for Color Blending) may have resulted from the need to estimate attribute value combinations, where interpreting the design likely demanded a close inspection of the legend. In addition to the possibility of misinterpretation, this may have led to increased cognitive load or reduced effort for tasks using these designs. The same reasoning holds for the poor performance of Thickness-Shade in Task 6, where estimating the value difference between a pair of attributes encoded in different features, such as thickness and color shade, requires a careful reading of the legend.

Our results showed better performance with the real-world data than with the synthetic dataset (Figure 11), which may be due to differences in the underlying data distributions. The synthetic dataset consisted of almost all possible trend combinations for the various attributes, so the corresponding visualizations contained highly varied feature combinations; in contrast, the visualizations generated from real-world data had fewer variations. It is likely that visualizations of real-world datasets are visually simpler, leading to improved accuracy and task completion time. These differences partially explain the significant interactions among design, dataset, and task combinations in accuracy, and the interaction between design and dataset in completion time.

Based on the study results, we formulated a table of design recommendations (Table 5) that summarizes the preferred design choices for various tasks over three environments: general use, time-sensitive interpretation, and high accuracy requirements. The table shows some strengths for particular designs that differ from the overall discussion above: for example, if the task requires quick estimation of trends along a contour line, then Color Blending and Thickness-Shade may be the best design options.
Figure 11: Task performance in Study 3, for (top) different designs, and (bottom) different datasets.

Table 5: Design Recommendation Table

| Domain | Time Sensitive Environment | Accuracy Sensitive | General |
|--------|----------------------------|--------------------|---------|
| Compare different contour parts | Parallel Lines, Pie | All except Color Blending | All except Color Blending |
| Search for a trend across contour lines | Parallel Lines, Pie, Side-by-Side | Pie, Side-by-Side | Parallel Lines, Side-by-Side, Pie |
| Search for a trend along a contour line | Color Blending, Thickness-Shade | All | All |
| Identify rate of change of a variable along a contour line | All except Pie | All | All |
| Identify the value difference on a contour part | Parallel Lines, Side-by-Side | Parallel Lines, Side-by-Side | Parallel Lines, Side-by-Side, Pie |

§ 8 LIMITATIONS AND FUTURE WORK

Our experience with the designs indicates some limitations in the research that provide opportunities for additional study. First, since we encode the variables at the pixel level, our current implementation does not scale well to large maps; however, rendering techniques using GPU acceleration could overcome this obstacle. Second, as the number of contour intervals grows, the contour lines may sometimes overlap. Therefore, finding an adaptive choice of contouring thresholds, or allowing users to interactively choose the base attribute, is a valuable avenue for future research. Third, using existing contours as the basis for additional variables is limited by the density of these contours. Further work is needed on how to represent variables in areas with few contours: for example, adaptive sampling could be used to achieve a minimum contour density across the map, or glyph-based techniques could be used to show important changes that occur between contours.
Fourth, since we used colors in many of our designs, the interpretability of the designs could depend on the background map color and texture. Therefore, real-world deployments of our technique will benefit from methods that tune color choices to the background map, or from controls that let users choose the opacity of the background map.

In addition, while the crowdsourced studies were useful for gaining insights about our designs, additional controlled studies in the lab, as well as focus groups with meteorological experts, could provide more information. For example, the use of eye tracking with our approach would give more detail on how users interact visually with the different designs. Given the complex interactions among the various factors that we observed, in-depth observation of the use of the designs in realistic tasks could help us better understand some of the effects and interactions. Finally, we plan to apply our designs to different real-world datasets and explore different contour-based tasks and scenarios that can be used in real geospatial settings.

§ 9 CONCLUSION

Contour plots are widely used, but standard techniques for adding multivariate visualizations onto these plots can clutter the display. To address this problem, we explored how contour lines on a geospatial map can be stylized to encode other attributes in the data. Such a multivariate representation can reduce visual clutter by leveraging the existing contour line space. We designed five types of visual encoding, and examined how various contour parameters such as width and contour levels influence task performance. Our crowdsourced study results showed that participants were able to perform several types of multivariate data analysis tasks with reasonable accuracy, which reveals the potential of our approach.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/Xh_BzLS_3p/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/Xh_BzLS_3p/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..13ddc538874b666d1b33c4acc169dc8a225d9253 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/Xh_BzLS_3p/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,299 @@

# AmbiTeam: Providing Team Awareness Through Ambient Displays

Category: Research

## Abstract

Due to the COVID-19 pandemic, research is increasingly conducted remotely, without the benefit of the informal interactions that help maintain awareness of each collaborator's work progress. We developed AmbiTeam, an ambient display that shows activity related to the files of a team project, to help collaborators preserve a sense of the team's involvement while working remotely. We found that using AmbiTeam had a quantifiable effect on researchers' perceptions of their collaborators' project prioritization. We also found that use of the system motivated researchers to work on their collaborative projects. This effect is known as "the motivational presence of others," whose absence is one of the key challenges that make distance work difficult. We discuss how ambient displays can support remote collaborative work by recreating the motivational presence of others.
Keywords: Collaboration; remote work; awareness; ambient display

Index Terms: Human-centered computing; Human-Computer Interaction; Empirical studies in HCI

## 1 INTRODUCTION

With the advent of the COVID-19 pandemic, research is increasingly conducted remotely, without the affordances of informal interactions that enhance fluidity and interactivity in teams. Remote collaboration has always faced numerous challenges, such as decreased awareness of colleagues and their context [33] and a limited motivational sense of the presence of others [33]. Awareness of one's collaborators is necessary for ensuring that each teammate's contributions are compatible with the collaboration's collective activity [13]. It also plays an essential role in determining whether an individual's actions mesh with the group's goals and progress [13]. The motivational sense of the presence of others complements awareness by producing "social facilitation" effects, like driving people to work more when they are not alone [33].

Similarly, a researcher's perception of their collaborators' effort in a project can profoundly impact collaboration [10]. In particular, researchers tend to feel anxious about the success of their collaboration when they are concerned that competing priorities will result in less commitment to the project [10]. The shift to remote work likely exacerbates this challenge, since remote researchers lack awareness of their collaborators' activities.

Together, these issues pose a significant challenge to collaboration. It is essential that we address them, given that the efficacy of science significantly improves when researchers from diverse backgrounds collaborate on a project [9]. We hypothesize that since a heightened awareness of a collaborator's research activities might reveal project prioritization, improved awareness could lessen the anxiety caused by uncertainty regarding a collaborator's investment.
While various existing systems improve awareness in remote teams [6, 7, 17, 18, 26, 28, 34], no existing solution addresses the challenge of perceived prioritization.

To this end, we developed a system, AmbiTeam (shown in Figure 1), to improve a researcher's awareness of their collaborators' project-related activity. The system tracks and visualizes file changes in user-specified project directories to indicate how much effort or work a collaborator has put into the project. We performed a user evaluation of the system with ten researchers in co-located and remote collaborations to investigate the effect of ambiently providing project-related activity information on a researcher's work behavior and perception of effort. We found that AmbiTeam had some impact on researchers' motivation to work on the project as well as on perceptions of their collaborators' effort. The key contributions of this paper are:

![01963e93-304b-7631-8c09-cafc6b2629b8_0_926_360_713_399_0.jpg](images/01963e93-304b-7631-8c09-cafc6b2629b8_0_926_360_713_399_0.jpg)

Figure 1: Example visualization of a team's work-related activity, which was featured on a tablet with an ambient display in each of our users' workplaces. The visualization shows activity from five fictional teammates using randomly generated data. Each team member has an area graph where each point represents their activity for that day.

- Increased understanding of how to facilitate team awareness

- A deeper understanding of the motivating effect of awareness on work behavior

- New insights into the impact of increased awareness on perceptions of remote collaborators' effort

## 2 PRIOR WORK

We examine studies on awareness-based systems for supporting collaboration as well as existing solutions for unobtrusively providing information via ambient displays.
### 2.1 Awareness-Based Systems

Several technologies have been developed to help remote workers stay aware of their collaborators' research activities. For example, tools that inform members of remote teams about the timing of each other's activities and contributions have been shown to affect team coordination and learning [7]. Furthermore, systems that provide real-time, often visual, feedback about team behavior can mitigate "process loss" (e.g., reduced effort) in teams [18]. Some early technology (e.g., [6, 17, 28]) featured permanently open audiovisual connections between locations, with the idea that providing unrestricted face-to-face communication would enable collaborative work as if the researchers were in the same room.

Recently, Glikson et al. [18] created a tool that visualizes effort, determined by measuring the number of keystrokes that members of a collaboration make in a task collaboration space. They found that this tool improved both team effort and performance [18]. A number of modern systems focus on notifications to provide awareness [27], but notifications are generally considered disruptive [2]. Given the importance of avoiding "dramatic changes in work habits" [32], an effective system likely needs to be as unobtrusive as possible.

### 2.2 Ambient Displays

In contrast to the methods employed by existing awareness systems, ambient displays are information sources designed to communicate contextual or background information in the periphery of the user's awareness, requiring the user's attention only when it is appropriate or desired [19]. Methods for conveying information via ambient displays include the use of light levels [11, 22], wind [29], temperature [41], music [4], and art [19].
For example, one of the earliest ambient systems, "ambientRoom," used visual displays of water ripples to convey information about the activities of a laboratory hamster, and light patches to indicate the amount of human movement in an atrium [22]. Ambient displays are not limited to immersive environments and can also take the form of standalone media displays that allow multiple people to receive information simultaneously [11]. Applications of ambient displays include educating users about resource (e.g., water [24, 26] and power [20]) consumption, improving driving [12, 35], monitoring finances [37], and assisting time management during meetings [31].

Some ambient systems have been developed to support collaboration by tackling the issue of determining availability [1, 8]. One system, "Nimio," used a series of physical toys to indicate the presence and availability of collaborators in separate offices [8]. Toys in one office would cause associated toys in other offices to light up with colored lights when they detected sound and movement, indicating that a collaborator was in their office and communicating whether the collaborator appeared to be busy. Alavi and Dillenbourg [1] placed colored light boxes on tables in a student space that allowed students to indicate their presence, availability, and the coursework they were currently working on, so that any given student could be aware of other students with whom they could collaborate.

Streng et al. [38] used ambient displays to convey information about the quality of collaboration between students working on a group task. In that work, collaboration performance was measured by evaluating student adherence to a collaboration script that specified different phases and tasks to be carried out by individual team members.
Performance information was communicated to the student participants either via a diagram featuring charts and numbers or via an ambient art display showing a nature scene featuring trees, the sun or moon, and sometimes clouds and rain.

### 2.3 Research Questions and Study Goals

We hypothesize that promoting awareness by providing up-to-date information about a collaborator's project activities will affect a researcher's perception of their collaborator's effort. To avoid dramatically changing work habits, we pursue an ambient approach where information is conveyed without requiring the user's attention. To pursue these goals, we sought to answer the following questions:

RQ1. Can tracking file activity give users a sense of their teammates' efforts?

RQ2. Will ambient information on team project activities affect perceptions of collaborators' effort?

RQ3. What effects will providing team project activity information have on work behavior?

## 3 System Design

### 3.1 Privacy and Scope

Project effort is difficult to characterize, as it includes activities that are impossible to track (e.g., thinking about a project) or are potentially sensitive (e.g., emails, phone calls). In order to respect the privacy of users, we avoid monitoring activities such as phone calls and emails and instead focus on the activity of files in user-specified project directories. This allows AmbiTeam to observe project activities related to the various stages of the research life-cycle identified by prior work [30]. For example, during experimentation, the system can detect changes in electronic lab notebooks and cheat sheets used by researchers [30], as well as in data. AmbiTeam also observes data analysis by tracking changes in analysis code or scripts (also discussed in [30]) as well as in generated output. Furthermore, the system can monitor publication preparation by detecting changes in writing-related materials.
### 3.2 Activity Tracking

Activity is detected using a desktop application that monitors specified directories for file creation, deletion, and change events. AmbiTeam first prompts the user to select a directory to be watched and, on the back end, monitors the metadata of the directory's files without viewing the files' contents. Once a file or directory in the watched directory is created, deleted, or changed, the user's ID and the time of the file event are encrypted and sent to a server.

### 3.3 Displaying Activity

The number of activities occurring each day for each user is visualized as a point on an area graph. An area graph for each collaborator is displayed on a tablet, showing each day's cumulative activity in real time. The height of the graph on each day indicates the total amount of activity at that time, and the area of the graph shows the total amount of activity over the course of a two-week window. Activity is normalized across the team to facilitate comparisons between team members. Figure 1 shows an example.

## 4 METHOD

### 4.1 Participants

To determine whether AmbiTeam facilitates team awareness, we recruited 10 scientists, aged 21 to 33 ($\mu = 27.3$, $\sigma = 3.5$; three female), who are part of four existing collaborations across four institutions in the United States. The collaborations are labeled A-D. The research area, title, and group of each participant are presented in Table 1. Participants were recruited via inter-departmental email, and our methodology was approved by our institutional review board. The configuration of the teams participating in this study ranged from fully remote (team A) to fully co-located (teams C and D). Team B had a mixed composition: participants B2 and B3 were co-located, while B1 and B4 were each at different locations. All co-located teams worked in the same offices as their collaborators and reported working closely together.
Table 1: Participant backgrounds.

| ID | Research Area | Title |
|----|---------------|-------|
| A1 | Biological Anthropology | Post-Doc |
| A2 | Vertebrate Paleontology | Ph.D. Student |
| B1 | Computer Vision and Machine Learning | Master's Student |
| B2 | Computational Linguistics | Post-Doc |
| B3 | Computer Vision and Human-Computer Interaction | Master's Student |
| B4 | Human-Computer Interaction | Ph.D. Student |
| C1 | CyberSecurity | Ph.D. Student |
| C2 | CyberSecurity | Ph.D. Student |
| D1 | CyberSecurity | Ph.D. Student |
| D2 | CyberSecurity | Ph.D. Student |
Our participants sought to answer a variety of scientific questions, which can be broadly summarized as:

- Understanding Faunal Change: identifying what happens to animals during the major climate event called the Paleocene-Eocene Thermal Maximum (Team A).

- Enabling Communicative Mechanisms Between Humans and Computers: bringing together humans' natural-language capability and computers' data-processing capability to allow peer-to-peer collaboration between humans and computers (Team B).

- Personalized Computer Security: using personal information to accomplish security tasks like authentication. This includes extracting nuanced personal information (e.g., vocal characteristics) from easily obtained information, such as pictures of people's faces (Teams C & D).

### 4.2 Procedure

Participants were each given a tablet with AmbiTeam's display, had the activity monitor installed on their work computers, and were instructed on how both the activity monitor and the visualization worked. Participants then completed a pre-test in which they estimated the amount of effort that each participating researcher, including themselves, was putting into the project, on a scale from 1 to 9 with 1 being "very low" and 9 being "very high." Participants were also asked to explain the reasoning behind their ratings. Over the course of four weeks, on two randomly chosen days a week, participants were asked to repeat this assessment via email. During this time, AmbiTeam's visualization was turned off in order to prevent participants from consulting it, since the goal was to determine whether the system's use affected their perceptions, not whether they could read the chart. To minimize visualization downtime, participants were given up to 24 hours to respond with their assessment.

At the end of the study, we conducted semi-structured interviews with the participants.
By using the semi-structured interview technique, we were able to cover additional topics as they were encountered, reducing the likelihood that important issues were overlooked [25]. When possible, interviews took place at each participant's primary workspace (office or lab). Participants at remote locations were interviewed over Zoom [21]. Interviews were approximately 30 minutes in duration and were recorded in audio format, then transcribed.

Participants were first asked to educate us about the collaborative research that they participated in during the study, including their roles on the project(s) and the goal(s) of the research. We then asked participants to discuss their experiences using AmbiTeam, any changes they would propose, and their likelihood of using the system in the future.

### 4.3 Qualitative Data Analysis

We performed a bottom-up analysis of participants' responses by constructing an affinity diagram (a.k.a. the KJ method) [5, 39] to expose prevailing themes. This approach is similar to qualitative coding and follows the same steps for qualitative analysis via coding as outlined by Auerbach and Silverstein [3]. It is an appropriate method for semi-structured interviews, as qualitative coding allows the same code to be applied to different sections of an interview [23]. Moreover, affinity diagramming has had widespread use for qualitative data analysis over the last 50 years [36].

## 5 RESULTS

Participants' responses to interview questions and bi-weekly assessments provided insight into their experiences with AmbiTeam.

### 5.1 Interactions with the System

Most participants reported briefly looking at the visualization multiple times a day, often because the visualization was placed within their general field of view (although care was taken to ensure that the visualization did not obstruct the view of the participant's workstation).
However, participants did not intentionally check the visualization for updates, indicating that the information generally stayed in the background.

![01963e93-304b-7631-8c09-cafc6b2629b8_2_926_150_720_418_0.jpg](images/01963e93-304b-7631-8c09-cafc6b2629b8_2_926_150_720_418_0.jpg)

Figure 2: AmbiTeam's components shown in A1's workspace. The visualization was placed in a different location in the periphery of A1's attention during the study.

"It wasn't like I checked it intentionally several times a day. It was more of that I leaned back in the chair to think about something and while looking at other things in my desk. I would see it." C1

The information gleaned from the visualization was typically combined with information gathered during communications with collaborators. This information included knowledge about circumstances (e.g., job interviews, other papers and projects), project deadlines and updates, and each researcher's role in the project. In some instances, the fact that collaborators were communicating at all was enough of an indication that those researchers were prioritizing the project. Participant B3, however, based their ratings solely on their communications with their collaborators because they did not trust AmbiTeam.

"I couldn't place enough trust in the system yet to factor in positively or negatively into my perception of prioritization." B3

Most participants explicitly stated that using the system did not interrupt their workflow. This was partly due to the placement of the visualization within the user's workspace. Furthermore, the file tracking software was passive in nature: once the user had selected their directories, no further action was needed. Participant C1 also remarked that the passive nature of the data collection yielded more information than their usual workflow, because their usual workflow (Git) relies on the user to push information.
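The passive, metadata-only tracking described in Sect. 3.2 and echoed by participants above can be illustrated with a minimal polling sketch. This is our own simplification under stated assumptions (the paper does not detail AmbiTeam's actual implementation, and the function names here are hypothetical): each scan records only modification times, and successive scans are diffed into the creation, deletion, and change events the system reports.

```python
import os

def scan(root):
    """Record each file's last-modified time; file contents are never read."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            state[path] = os.path.getmtime(path)
    return state

def diff_events(old, new):
    """Derive creation/deletion/change events between two scans."""
    events = []
    for path, mtime in new.items():
        if path not in old:
            events.append(("created", path))
        elif mtime != old[path]:
            events.append(("changed", path))
    events.extend(("deleted", p) for p in old if p not in new)
    return events
```

In the real system, each such event would be paired with the user's ID and a timestamp, encrypted, and sent to the server, as described in Sect. 3.2; a production monitor would more likely subscribe to native file-system notifications than poll.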
### 5.2 Determining Engagement

To determine whether tracking file activity can give teammates a sense of their teammates' efforts (RQ1), we asked open-ended questions during each bi-weekly assessment and conducted a follow-up interview at the end of the study. We found that participants felt AmbiTeam's monitoring method gave a measure of user engagement.

"Tracking over time as you change it, it's simple so it does give you a measure of whether or not the person is engaged. Or not engaged. So I think it's a good measurement of that" C1

However, participants reported several activities integral to their work that were not tracked by the system. In general, these activities were related to collaboration, idea development, and management. Some of the suggested activities are likely fairly easy to take into account, such as tracking the number of files in a directory (e.g., a library of literature for a project), the size of files (e.g., as figures get made, and manuscripts and code get written), written meeting minutes, and the number of times a program is run. Others could be tracked by the existing software if users changed their behavior, such as making handwritten notes in a digital notebook as opposed to on physical pieces of paper.

However, many of the suggested activities (e.g., tracking emails, phone calls, internet searches, or time spent on the top window of a computer) are difficult to take into account without invading privacy. Several participants stated that they wouldn't want personal data to be tracked unless it's somehow necessary for the team. Even then, participants requested caution when setting up AmbiTeam in order to prevent project-sensitive data from being tracked. For example, during the setup of group D, participants deliberately chose directories that contained metadata and statistics about the participants in their studies but did not contain identifiable data.
Finally, participants believed that for optimal use, the files and activities chosen for monitoring depend on the context of the user's work. They suggested that some metrics would be better suited to some roles than others. For example, since B4 was running user studies, the length of their files represents the amount of data collected and is more indicative of work than the number of files, which merely reflects the number of participants. Certain file types, such as those automatically created by ArcGIS [16] (a geographic information system mapping technology used by A1) and TensorFlow [40] models (a tool for building machine learning models used by B1), are generated in bulk and don't necessarily indicate massive amounts of effort.

### 5.3 Perceptions of Effort

We wanted to know whether AmbiTeam affected researchers' perceptions of their collaborators' effort on a project (RQ2). To do this, we tested whether there is a correlation between the average activity levels of their collaboration (as measured by our system) and the researchers' perceptions of how much effort their collaborators were putting in. We performed a Pearson's product-moment correlation test on participants' average displayed activity (activity) and the change in their personal ratings (personal ratings). We found no correlation ($r = 0.09$, $p > 0.05$) between personal ratings and activity. We also performed a Pearson's product-moment correlation test on activity and the change in the ratings assigned to them by their collaborators (collaborator ratings). We found a weak positive correlation ($r = 0.22$, $p = 0.011$) between collaborator ratings and activity: as each participant's apparent activity increased, their collaborators' ratings of them increased. In summary, using AmbiTeam did not affect users' reported perceptions of their own effort; however, it did affect users' perceptions of their collaborators' effort.
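The correlation statistic used above can be sketched in a few lines of standard-library Python (a minimal version for illustration; the function name is ours, and in practice a statistics package would also supply the p-values reported in the analysis):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation, e.g., between each participant's
    displayed activity and the change in ratings assigned by collaborators."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

The result ranges from -1 to 1, with values near 0 (such as the r = 0.09 for personal ratings) indicating no linear association, and modest positive values (such as the r = 0.22 for collaborator ratings) indicating a weak positive association.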
+ +### 5.4 User Behaviors + +To determine what effects the provision of team project activity information had on work behavior (RQ3), we asked open-ended questions during each bi-weekly assessment and conducted a follow-up interview at the end of the study. We found that on the whole, participants did not believe that using the system changed their collaborators' behaviors. However, many reported changing their own behaviors. In some cases, participants changed the way that their work was conducted to boost visibility and ensure that their collaborators knew that they were involved. For example, participant A2 described a time when they were creating a wiki for their project online. However, since AmbiTeam was unable to track the changes made to their online wiki, A2 wrote much of the text for the wiki in a text editor that saved changes to a file tracked by the system before uploading the text to the wiki. This ensured that their efforts to update the wiki appeared on the visualization. In addition, several participants mentioned saving their files more frequently so that their changes would register as activity and appear on the visualization. + +Many participants reported that AmbiTeam made them feel more motivated to work on their projects. Sometimes this was due to participants noticing a lull in their own activity, which reminded them to work on the project. Motivation was also often attributed to seeing their collaborator's activity. + +"Having a view of other people are working hard and then you don't want to be the last one. It's like a challenge." D2 + +Participant A2 noted that the system had a positive impact due to its effect on motivation and a desire to work effectively. + +"Positive, because it helped motivate me to make the project a priority even though it's not the most fun thing to work on."
A2 + +### 5.5 Future Directions and Applications + +All participants stated that they would be willing to use AmbiTeam, or a refined version of AmbiTeam, in the future for either professional or casual use. Several participants mentioned a desire to use the system in research collaborations to keep abreast of what their collaborators were up to. For example, participant C1 mentioned using the prior day's activity: "I could glance at as sort of like a morning statistics for yesterday." Another use of the system would be for a project manager to balance the workload across researchers on a project, as described by participant B3: "I probably would want to use it just to see how much work my each of my teammates is doing so that the load is balanced out evenly." + +Other participants reported that they would use AmbiTeam in a classroom setting, both as a student working with group-mates that they don't know well or didn't pick and as professors managing class groups. + +"I've had problems in the past ... they didn't do anything until the last week and even then in the last week, you know. I may have built the vast majority of it. They still get the same amount of credit." C1 + +Several participants also stated that they would use AmbiTeam for personal use. Participant A1 described not being interested in worrying about their collaborator's productivity, but was interested in using the system to take a "long term perspective" and revisit their own project-related activity. The goal would be to have a better understanding of the work that they had done in the past. In a similar vein, participant B2, a self-proclaimed "data junky," expressed an interest in using AmbiTeam to gain a deeper insight into their workflow. A1 also disclosed a belief that AmbiTeam could be useful for recent Ph.D. graduates who have transitioned from working solely on their dissertation to managing multiple projects and need to have a better grasp of their priorities.
Finally, A2 expressed an interest in using the system with a friend to stay motivated to work. + +"In the same way that it's better to go to the gym with a friend because it motivates you because even on that one day when you really don't feel like going they'll go and then they'll help you get over that hump." A2 + +Participants also expressed a desire to extend AmbiTeam to support additional tasks. For example, participants conveyed an interest in integrating AmbiTeam with task management systems, allowing users to connect the activity shown on the visualization with specific tasks and goals. Participant C2 also suggested incorporating a messaging system that would allow a user to contact a collaborator when they notice a lull in activity. + +"[If] I made some changes that we needed to discuss that I could just look at my collaborator and just tap ... saying hey, there's something that needs to be discussed." C2 + +## 6 DISCUSSION + +### 6.1 Motivational presence of others + +Many of the participants reported feeling more motivated and productive while using AmbiTeam. These feelings can likely be attributed to the motivational presence of others [33]. Our participants' responses indicated they were aware of being watched by their teammates, which changed their behavior, as described by B1: + +"Because I know we are being tracked, I want to make use of time to work efficiently." B1 + +Researchers often use the presence of specific teammates in a shared space to guide their work [15]. Similarly, our participants also reported feeling motivated by seeing their collaborators work on the project, as stated by C2: + +"Every single time that happened I was like, oh he's working, I should probably work on it too." C2 + +Unfortunately, these effects often dissipate once the participant no longer has a sense of the presence of their collaborators.
Depending on the scientific questions that they seek to answer, researchers may spend time performing fieldwork away from the desks where AmbiTeam is set up. More investigation is necessary to determine whether the increased motivation facilitated by the system is sustained when researchers are unable to access AmbiTeam. + +### 6.2 Remote vs. Co-located Projects + +Given the difficulties that researchers have maintaining awareness of their collaborators' work progress at remote locations without the ability to casually "look over their shoulder" [33], we expected that AmbiTeam would have a smaller effect on co-located participants' perceptions of their collaborators. In fact, participants from the co-located teams reported having an easier time determining their co-located collaborators' effort and reported that the system had a smaller effect on their perception of their collaborators' priorities. + +However, we found that AmbiTeam sometimes provided similar benefits to co-located participants as it did to remote participants. One co-located participant (C1) indicated that using AmbiTeam provided more information about their collaborator's effort than they got from their frequent communications with their collaborator, despite sitting next to each other. In this case, the information provided by AmbiTeam caused this co-located participant to change their expectations to take their collaborator's conflicting priorities into account. It's important to note that neither participant on Team C reported experiencing any negative effects from AmbiTeam's use. This finding indicates that AmbiTeam can be an effective tool even in co-located projects. + +### 6.3 Privacy vs. Accurate Activity Tracking + +During the post-study interviews, participants mentioned several activities that are part of their workflow that were not tracked by AmbiTeam during the study.
However, tracking several of these activities would involve significant privacy violations, namely tracking in-person conversations, emails, and internet browsing history. This leads to the question of how to balance accurate activity tracking with maintaining users' privacy. It is possible that tracking additional, less-sensitive information (e.g., file length, degree to which a file has been changed) paired with customized tracking on a per-project and per-user basis may provide enough information that monitoring more-sensitive information like communications between collaborators is unnecessary. Further research is necessary to determine whether this is the case. + +### 6.4 Future Work + +One of the many dangers of remote work is loss of motivation. In co-located work, the presence of others has a large and important impact on teammates' motivation [33]. We believe AmbiTeam was able to capture some of the motivational presence of others in remote work using an ambient display. In future work, we will explore other ways in which ambient displays can increase motivation. + +Although tracking file activity allows us to gain some measure of effort, it does not encompass many important steps of work (thinking, discussing, etc.). Future work can explore the use of different metrics for providing team awareness, such as the amount of progress on given tasks. In addition, future work can also explore the long-term effects of systems like AmbiTeam to determine whether the immediate increase in productivity due to being watched decreases over long periods of time and to see if tensions arise due to the limited display of team members' contributions. + +We evaluated AmbiTeam with collaborations of academic researchers who, while pursuing different research questions, had similar workflows.
It is likely that all knowledge workers (workers who apply knowledge acquired through formal training to develop services and products [14]) can benefit from a system like AmbiTeam given that they generally have high amounts of screen time. However, it is less clear whether ambient displays work for all types of workers, including those whose jobs are very different from that of a knowledge worker (e.g., service work). In organizations with a clear hierarchy, does the role of the user affect the usefulness of AmbiTeam? Are there types of ambient data from a CEO that would motivate workers? For this reason, future work includes exploring the use of AmbiTeam in a variety of contexts of work. + +It is also unclear how well ambient displays work for providing activity information in large teams. Our assessment of AmbiTeam was with small teams of 2-4 people. How well will a system like AmbiTeam work for an entire organization? Given that organizations are frequently divided into smaller teams, is there even a need for systems like AmbiTeam to work with large collaborations? + +Many collaborations are highly temporally dispersed, sometimes operating across extreme time zone differences. In these situations, such as with a 12 hour time zone difference, people aren't working at the same time. Can we still effectively summarize progress from their work? Is the provision of activity information about a coworker who is not working at the same time still motivating? + +## 7 CONCLUSION + +In this paper, we described and evaluated a system meant to assist researchers experiencing the problem of perceived prioritization. We found that, despite shortcomings with regard to activity tracking, AmbiTeam had some effect on users' perceptions of their collaborators' effort as well as their motivation to work on their collaborative project.
This work has implications for creating effective awareness-based technology for supporting collaborative work, particularly the recommendation that future awareness systems consider (a) using file activity to measure effort and (b) implementing ambient displays that do not interrupt the user's workflow. + +## REFERENCES + +[1] H. S. Alavi and P. Dillenbourg. Flag: An ambient awareness tool to support informal collaborative learning. In Proceedings of the 16th ACM International Conference on Supporting Group Work, GROUP '10, p. 315-316. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1880071.1880127 + +[2] L. Ardissono and G. Bosio. Context-dependent awareness support in open collaboration environments. User Modeling and User-Adapted Interaction, 22(3):223-254, 2012. + +[3] C. Auerbach and L. B. Silverstein. Qualitative data: An introduction to coding and analysis, vol. 21. NYU Press, 2003. + +[4] L. Barrington, M. J. Lyons, D. Diegmann, and S. Abe. Ambient display using musical effects. In Proceedings of the 11th International Conference on Intelligent User Interfaces, IUI '06, p. 372-374. Association for Computing Machinery, New York, NY, USA, 2006. doi: 10.1145/1111449.1111541 + +[5] H. Beyer and K. Holtzblatt. Contextual Design: Defining Customer-centered Systems. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998. + +[6] S. A. Bly, S. R. Harrison, and S. Irwin. Media spaces: bringing people together in a video, audio, and computing environment. Communications of the ACM, 36(1):28-46, 1993. + +[7] D. Bodemer and J. Dehler. Group awareness in CSCL environments. Computers in Human Behavior, 27(3):1043-1045, 2011. + +[8] J. Brewer, A. Williams, and P. Dourish. A handle on what's going on: Combining tangible interfaces and ambient displays for collaborative groups. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, TEI '07, p. 3-10.
Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1226969.1226971 + +[9] D. T. Campbell. Ethnocentrism of disciplines and the fish-scale model of omniscience. Interdisciplinary relationships in the social sciences, 328:348, 1969. + +[10] E. Chung, N. Kwon, and J. Lee. Understanding scientific collaboration in the research life cycle: Bio- and nanoscientists' motivations, information-sharing and communication practices, and barriers to collaboration. Journal of the Association for Information Science and Technology, 67(8):1836-1848, 2016. + +[11] A. Dahley, C. Wisneski, and H. Ishii. Water lamp and pinwheels: Ambient projection of digital information into architectural space. In CHI 98 Conference Summary on Human Factors in Computing Systems, CHI '98, p. 269-270. Association for Computing Machinery, New York, NY, USA, 1998. doi: 10.1145/286498.286750 + +[12] M. De Marchi, J. Eriksson, and A. G. Forbes. TransitTrace: Route planning using ambient displays. In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, SIGSPATIAL '15. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2820783.2820857 + +[13] P. Dourish and V. Bellotti. Awareness and coordination in shared workspaces. In Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work, CSCW '92, p. 107-114. Association for Computing Machinery, New York, NY, USA, 1992. doi: 10.1145/143457.143468 + +[14] P. Drucker. Landmarks of Tomorrow: A Report on the New "post-modern" World. Transaction Publishers, New Brunswick, USA, 1996. + +[15] T. Erickson, D. N. Smith, W. A. Kellogg, M. Laff, J. T. Richards, and E. Bradner. Socially translucent systems: Social proxies, persistent conversation, and the design of "babble". In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '99, p. 72-79. Association for Computing Machinery, New York, NY, USA, 1999.
doi: 10.1145/302979.302997 + +[16] Esri. ArcGIS Online, 2020. + +[17] W. W. Gaver, A. Sellen, C. Heath, and P. Luff. One is not enough: Multiple views in a media space. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, CHI '93, p. 335-341. Association for Computing Machinery, New York, NY, USA, 1993. doi: 10.1145/169059.169268 + +[18] E. Glikson, A. W. Woolley, P. Gupta, and Y. J. Kim. Visualized automatic feedback in virtual teams. Frontiers in Psychology, 10:814, 2019. + +[19] J. M. Heiner, S. E. Hudson, and K. Tanaka. The information percolator: Ambient information display in a decorative object. In Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology, UIST '99, p. 141-148. Association for Computing Machinery, New York, NY, USA, 1999. doi: 10.1145/320719.322595 + +[20] F. Heller and J. Borchers. PowerSocket: Towards on-outlet power consumption visualization. In CHI '11 Extended Abstracts on Human Factors in Computing Systems, CHI EA '11, p. 1981-1986. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/1979742.1979901 + +[21] Z. V. C. Inc. Video conferencing, web conferencing, webinars, screen sharing, 2020. + +[22] H. Ishii, C. Wisneski, S. Brave, A. Dahley, M. Gorbet, B. Ullmer, and P. Yarin. ambientRoom: Integrating ambient media with architectural space. In CHI 98 Conference Summary on Human Factors in Computing Systems, CHI '98, p. 173-174. Association for Computing Machinery, New York, NY, USA, 1998. doi: 10.1145/286498.286652 + +[23] E. Jun, B. A. Jo, N. Oliveira, and K. Reinecke. Digestif: Promoting science communication in online experiments. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1-26, 2018. + +[24] K. Kappel and T. Grechenig. "show-me": Water consumption at a glance to promote water conservation in the shower. In Proceedings of the 4th International Conference on Persuasive Technology, Persuasive '09.
Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1541948.1541984 + +[25] J. Lazar, J. H. Feng, and H. Hochheiser. Research Methods in Human-Computer Interaction. Wiley Publishing, Chichester, United Kingdom, 2010. + +[26] G. López and L. A. Guerrero. Notifications for collaborative documents editing. In R. Hervás, S. Lee, C. Nugent, and J. Bravo, eds., Ubiquitous Computing and Ambient Intelligence. Personalisation and User Adapted Services, pp. 80-87. Springer International Publishing, Cham, 2014. + +[27] G. López and L. A. Guerrero. Awareness supporting technologies used in collaborative systems: A systematic literature review. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW '17, pp. 808-820. ACM, New York, NY, USA, 2017. doi: 10.1145/2998181.2998281 + +[28] M. M. Mantei, R. M. Baecker, A. J. Sellen, W. A. S. Buxton, T. Milligan, and B. Wellman. Experiences in the use of a media space. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '91, p. 203-208. Association for Computing Machinery, New York, NY, USA, 1991. doi: 10.1145/108844.108888 + +[29] M. Minakuchi and S. Nakamura. Collaborative ambient systems by blow displays. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, TEI '07, p. 105-108. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1226969.1226992 + +[30] S. Morrison-Smith, C. Boucher, A. Bunt, and J. Ruiz. Elucidating the role and use of bioinformatics software in life science research. In Proceedings of the 2015 British HCI Conference, British HCI '15, p. 230-238. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2783446.2783581 + +[31] V. Occhialini, H. van Essen, and B. Eggen. Design and evaluation of an ambient display to support time management during meetings. In P. Campos, N. Graham, J. Jorge, N. Nunes, P. Palanque, and M.
Winckler, eds., Human-Computer Interaction - INTERACT 2011, pp. 263-280. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011. + +[32] K. Olesen and M. D. Myers. Trying to improve communication and collaboration with information technology: an action research project which failed. Information Technology & People, 12(4):317-332, 1999. + +[33] J. S. Olson and G. M. Olson. Bridging Distance: Empirical studies of distributed teams. Human-Computer Interaction in Management Information Systems, 2:27-30, 2006. + +[34] B. Otjacques, R. McCall, and F. Feltz. An ambient workplace for raising awareness of internet-based cooperation. In Y. Luo, ed., Cooperative Design, Visualization, and Engineering, pp. 275-286. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006. + +[35] M. D. Rodríguez, R. R. Roa, J. E. Ibarra, and C. M. Curlango. In-car ambient displays for safety driving gamification. In Proceedings of the 5th Mexican Conference on Human-Computer Interaction, MexIHC '14, p. 26-29. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2676690.2676701 + +[36] R. Scupin. The KJ method: A technique for analyzing data derived from Japanese ethnology. Human Organization, pp. 233-237, 1997. + +[37] X. Shen and P. Eades. Using MoneyColor to represent financial data. In Proceedings of the 2005 Asia-Pacific Symposium on Information Visualisation - Volume 45, APVis '05, p. 125-129. Australian Computer Society, Inc., AUS, 2005. + +[38] S. Streng, K. Stegmann, H. Hußmann, and F. Fischer. Metaphor or diagram? Comparing different representations for group mirrors. In Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group: Design: Open 24/7, OZCHI '09, p. 249-256. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1738826.1738866 + +[39] H. Subramonyam, S. M. Drucker, and E. Adar.
Affinity lens: data-assisted affinity diagramming with augmented reality. In Proceedings of the 2019 CHI conference on human factors in computing systems, pp. 1-13, 2019. + +[40] G. B. Team. Tensorflow, 2021. + +[41] R. Wettach, C. Behrens, A. Danielsson, and T. Ness. A thermal information display for mobile applications. In Proceedings of the 9th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI '07, p. 182-185. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/ 1377999.1378004 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/Xh_BzLS_3p/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/Xh_BzLS_3p/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..388a7295dd32b3a37b042028829cbbb1b39727a4 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/Xh_BzLS_3p/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,241 @@ +§ AMBITEAM: PROVIDING TEAM AWARENESS THROUGH AMBIENT DISPLAYS + +Category: Research + +§ ABSTRACT + +Due to the COVID-19 pandemic, research is increasingly conducted remotely without the benefit of informal interactions that help maintain awareness of each collaborator's work progress. We developed AmbiTeam, an ambient display that shows activity related to the files of a team project, to help collaborations preserve a sense of the team's involvement while working remotely. We found that using AmbiTeam did have a quantifiable effect on researchers' perceptions of their collaborators' project prioritization. 
We also found that the use of the system motivated researchers to work on their collaborative projects. This effect is known as "the motivational presence of others," the absence of which is one of the key challenges that make distance work difficult. We discuss how ambient displays can support remote collaborative work by recreating the motivational presence of others. + +Keywords: Collaboration; remote work; awareness; ambient display + +Index Terms: Human-centered computing-Human-Computer Interaction-Empirical studies in HCI + +§ 1 INTRODUCTION + +With the advent of the COVID-19 pandemic, research is increasingly conducted remotely without the affordances of informal interactions that enhance fluidity and interactivity in teams. Remote collaboration has always faced numerous challenges, such as decreased awareness of colleagues and their context [33] and a limited motivational sense of the presence of others [33]. Awareness of one's collaborators is necessary for ensuring that each teammate's contributions are compatible with the collaboration's collective activity [13]. It also plays an essential role in determining whether an individual's actions mesh with the group's goals and progress [13]. The motivational sense of the presence of others complements awareness by producing "social facilitation" effects, like driving people to work more when they are not alone [33]. + +Similarly, a researcher's perception of their collaborator's effort in a project can profoundly impact collaboration [10]. In particular, researchers tend to feel anxious about the success of their collaboration when they are concerned that competing priorities result in less commitment to the project [10]. The shift to remote work likely exacerbates this challenge since remote researchers lack awareness of their collaborators' activities. + +Together, these issues pose a significant challenge to collaboration.
It is essential that we address these challenges, given that the efficacy of science significantly improves when researchers from diverse backgrounds collaborate on a project [9]. We hypothesize that since a heightened awareness of a collaborator's research activities might reveal project prioritization, improved awareness could lessen the anxiety caused by uncertainty regarding a collaborator's investment. While various existing systems improve awareness in remote teams [6, 7, 17, 18, 26, 28, 34], no solution exists that solves the challenge of perceived prioritization. + +To this end, we developed a system, AmbiTeam (shown in Figure 1), to improve a researcher's awareness of their collaborator's project-related activity. The system tracks and visualizes file changes in user-specified project directories to indicate how much effort or work a collaborator has put in on the project. We performed a user evaluation of the system with ten researchers in co-located and remote collaborations to investigate the effect of ambiently providing project-related activity information on a researcher's work behavior and perception of effort. We found that AmbiTeam had some impact on a researcher's motivation to work on the project as well as perceptions of their collaborators' effort. The key contributions of this paper are: + +Figure 1: Example visualization of a team's work-related activity which was featured on a tablet with an ambient display in each of our user's workplaces. The visualization shows activity from five fictional teammates using randomly generated data. Each team member has an area graph where each point represents their activity for that day.
+ + * Increased understanding of how to facilitate team awareness + + * A deeper understanding of the motivating effect of awareness on work behavior + + * New insights into the impact of increased awareness on perceptions of remote collaborators' effort + +§ 2 PRIOR WORK + +We examine studies on awareness-based systems for supporting collaboration as well as existing solutions for unobtrusively providing information via ambient displays. + +§ 2.1 AWARENESS-BASED SYSTEMS + +Several technologies have been developed to help remote workers become aware of their collaborators' research activities. For example, tools that inform members of remote teams about the timing of each other's activities and contributions have been shown to affect team coordination and learning [7]. Furthermore, systems that provide real-time, often visual, feedback about team behavior can mitigate "process-loss" (e.g., effort) in teams [18]. Some early technology (e.g., [6, 17, 28]) featured permanently open audiovisual connections between locations, with the idea that providing unrestricted face-to-face communication would enable collaborative work as if the researchers were in the same room. + +Recently, Glikson et al. [18] created a tool that visualizes effort, which is determined by measuring the number of keystrokes that members of a collaboration make in a task collaboration space. They found that this tool improved both team effort and performance [18]. Many modern systems focus on notifications to provide awareness [27], which are generally considered disruptive [2]. Given the importance of reducing "dramatic changes in work habits" [32], it is likely that an effective system needs to be as unobtrusive as possible.
+ +§ 2.2 AMBIENT DISPLAYS + +In contrast to the methods employed by existing awareness systems, ambient displays are information sources designed to communicate contextual or background information in the periphery of the user's awareness and only require the user's attention when it is appropriate or desired [19]. Methods for conveying information via ambient displays include the use of light levels [11, 22], wind [29], temperature [41], music [4], and art [19]. For example, one of the earliest ambient systems, "ambientRoom", used visual displays of water ripples to convey information about the activities of a laboratory hamster and light patches to indicate the amount of human movement in an atrium [22]. Ambient displays are not limited to immersive environments and can also take the form of standalone media displays that allow multiple people to simultaneously receive information [11]. Applications of ambient displays include educating users about resource (e.g., water [24, 26] and power [20]) consumption, improving driving [12, 35], monitoring finances [37], and assisting time management during meetings [31]. + +Some ambient systems have been developed to support collaboration by tackling the issue of determining availability [1, 8]. One system, "Nimio," used a series of physical toys to indicate the presence and availability of collaborators in separate offices [8]. Toys in one office would cause associated toys in other offices to light up with colored lights when they detected sound and movement, indicating that a collaborator was in their office and communicating whether the collaborator appeared to be busy.
Alavi and Dillenbourg [1] placed colored light boxes on tables in a student space that allowed students to indicate their presence, availability, and the coursework they were currently working on so that any given student could be aware of other students with whom they could collaborate. + +Streng et al. [38] used ambient displays to convey information about the quality of collaboration between students working on a group task. In that study, collaboration performance was measured by evaluating student adherence to a collaboration script that specified different phases and tasks to be carried out by individual team members. Performance information was communicated to the student participants either via a diagram featuring charts and numbers or an ambient art display showing a nature scene featuring trees, the sun or moon, and sometimes clouds and rain. + +§ 2.3 RESEARCH QUESTIONS AND STUDY GOALS + +We hypothesize that promoting awareness by providing up-to-date information about a collaborator's project activities will affect a researcher's perception of their collaborator's effort. To avoid dramatically changing work habits, we pursue an ambient-based approach where information is conveyed without requiring the user's attention. To pursue these goals, we sought to answer the following questions: + +RQ1. Can tracking file activity give teammates a sense of their teammates' efforts? + +RQ2. Will ambient information on team project activities affect perceptions of collaborators' effort? + +RQ3. What effects will providing team project activity information have on work behavior? + +§ 3 SYSTEM DESIGN + +§ 3.1 PRIVACY AND SCOPE + +Project effort is difficult to characterize, as it includes activities that are impossible to track (e.g., thinking about a project) or are potentially sensitive (e.g., emails, phone calls).
To respect the privacy of users, we avoid monitoring activities such as phone calls and emails and instead focus on the activity of files in user-specified project directories. This allows AmbiTeam to observe project activities related to the various stages of the research life-cycle identified by prior work [30]. For example, during experimentation, the system can detect changes in the electronic lab notebooks and cheat sheets used by researchers [30], as well as in data. AmbiTeam also observes data analysis by tracking changes in analysis code or scripts (also discussed in [30]) as well as in generated output. Furthermore, the system can monitor publication preparation by detecting changes in writing-related materials.

### 3.2 Activity tracking

Activity is detected using a desktop application that monitors specified directories for file creation, deletion, and change events. AmbiTeam first prompts the user to select a directory to be watched and, on the back end, monitors the metadata of the directory's files without viewing the files' contents. Once a file or directory in the watched directory is created, deleted, or changed, the user's ID and the time of the file event are encrypted and sent to a server.

### 3.3 Displaying activity

The number of activities occurring each day for each user is visualized as a point on an area graph. An area graph for each collaborator is displayed on a tablet, showing each day's cumulative activity in real time. The height of the graph on each day indicates the total amount of activity at that time, and the area of the graph shows the total amount of activity over the course of a two-week window. Activity is normalized across the team to facilitate comparisons between team members. Figure 1 shows an example.
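The per-day counting and team-wide normalization just described can be sketched in a few lines. The function names, data shapes, and 14-day default below are our own illustration based on this description, not AmbiTeam's actual implementation.

```python
from collections import defaultdict
from datetime import date, timedelta

def daily_counts(events, window_end, window_days=14):
    """Count file events per user per day over a trailing window.

    `events` is a list of (user_id, day) pairs, as the server might hold
    after decrypting incoming event records (illustrative shape only).
    """
    start = window_end - timedelta(days=window_days - 1)
    counts = defaultdict(lambda: defaultdict(int))
    for user, day in events:
        if start <= day <= window_end:
            counts[user][day] += 1
    return counts

def normalized_series(counts, window_end, window_days=14):
    """Normalize each user's daily activity by the team-wide maximum,
    so the area-graph heights are comparable across team members."""
    days = [window_end - timedelta(days=i) for i in range(window_days - 1, -1, -1)]
    team_max = max((c for per_day in counts.values() for c in per_day.values()),
                   default=1)
    return {user: [per_day.get(d, 0) / team_max for d in days]
            for user, per_day in counts.items()}
```

Normalizing by the team maximum (rather than per user) means a fully active teammate always reaches the top of the graph, which is one simple way to realize the cross-team comparability the text describes.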
## 4 METHOD

### 4.1 Participants

To determine whether AmbiTeam facilitates team awareness, we recruited 10 scientists, aged 21 to 33 ($\mu = 27.3$, $\sigma = 3.5$; three female), who are part of four existing collaborations across four institutions in the United States. The collaborations are labeled A-D. The research area, title, and group of each participant are presented in Table 1. Participants were recruited via inter-departmental email, and our methodology was approved by our institutional review board. The configuration of the teams participating in this study ranged from fully remote (team A) to fully co-located (teams C and D). Team B had a mixed composition: participants B2 and B3 were co-located, while B1 and B4 were each at different locations. All co-located teams worked in the same offices as their collaborators and reported working closely together.

Table 1: Participant backgrounds.

| ID | Research Area | Title |
|----|---------------|-------|
| A1 | Biological Anthropology | Post-Doc |
| A2 | Vertebrate Paleontology | Ph.D. Student |
| B1 | Computer Vision and Machine Learning | Master's Student |
| B2 | Computational Linguistics | Post-Doc |
| B3 | Computer Vision and Human-Computer Interaction | Master's Student |
| B4 | Human-Computer Interaction | Ph.D. Student |
| C1 | Cybersecurity | Ph.D. Student |
| C2 | Cybersecurity | Ph.D. Student |
| D1 | Cybersecurity | Ph.D. Student |
| D2 | Cybersecurity | Ph.D. Student |

Our participants sought to answer a variety of scientific questions, which can be broadly summarized as:

* Understanding Faunal Change: identifying what happens to animals during the major climate event called the Paleocene-Eocene Thermal Maximum (Team A).

* Enabling Communicative Mechanisms Between Humans and Computers: bringing together humans' natural language capabilities and computers' data processing capabilities to allow peer-to-peer collaboration between humans and computers (Team B).
* Personalized Computer Security: using personal information to accomplish security tasks like authentication. This includes extracting nuanced personal information (e.g., vocal characteristics) from easily obtained information, such as pictures of people's faces (Teams C & D).

### 4.2 Procedure

Participants were each given a tablet with AmbiTeam's display, had the activity monitor installed on their work computers, and were instructed on how both the activity monitor and the visualization worked. Participants then completed a pre-test in which they estimated the amount of effort that each participating researcher, including themselves, was putting into the project, on a scale from 1 to 9, with 1 being "very low" and 9 being "very high." Participants were also asked to explain the reasoning behind their rankings. Over the course of four weeks, on two randomly chosen days a week, participants were asked to repeat this assessment via email. During this time, AmbiTeam's visualization was turned off in order to prevent participants from consulting the visualization, since the goal was to determine whether the system's use affected their perception, not whether they could read the chart. To minimize visualization downtime, participants were given up to 24 hours to respond with their assessment.

At the end of the study, we conducted semi-structured interviews with the participants. The semi-structured interview technique allowed us to cover additional topics as they were encountered, reducing the likelihood that important issues were overlooked [25]. When possible, interviews took place at each participant's primary workspace (office or lab). Participants at remote locations were interviewed over Zoom [21]. Interviews were approximately 30 minutes in duration and were recorded in audio format, then transcribed.
Participants were first asked to educate us about the collaborative research they participated in during the study, including their roles on the project(s) and the goal(s) of the research. We then asked participants to discuss their experiences using AmbiTeam, any changes they would propose, and their likelihood of using the system in the future.

### 4.3 Qualitative data analysis

We performed a bottom-up analysis of participants' responses by constructing an affinity diagram (a.k.a. the KJ method) [5, 39] to expose prevailing themes. This approach is similar to qualitative coding and follows the same steps for qualitative analysis via coding outlined by Auerbach and Silverstein [3]. It is an appropriate method for semi-structured interviews, as qualitative coding allows the same code to be applied to different sections of the interview [23]. Moreover, affinity diagramming has seen widespread use for qualitative data analysis over the last 50 years [36].

## 5 RESULTS

Participants' responses to interview questions and bi-weekly assessments provided insight into their experiences with AmbiTeam.

### 5.1 Interactions with the system

Most participants reported briefly looking at the visualization multiple times a day, often because the visualization was placed within their general field of view (although care was taken to ensure that the visualization did not obstruct the view of the participant's workstation). However, participants did not intentionally check the visualization for updates, indicating that the information generally stayed in the background.

Figure 2: AmbiTeam's components shown in A1's workspace. The visualization was placed in a different location in the periphery of A1's attention during the study.

"It wasn't like I checked it intentionally several times a day.
It was more of that I leaned back in the chair to think about something and, while looking at other things on my desk, I would see it." C1

The information gleaned from the visualization was typically combined with information gathered during communications with collaborators. This information included knowledge about circumstances (e.g., job interviews, other papers and projects), project deadlines and updates, and each researcher's role in the project. In some instances, the fact that collaborators were communicating at all was enough of an indication that those researchers were prioritizing the project. Participant B3, however, based their ratings solely on their communications with their collaborators because they did not trust AmbiTeam.

"I couldn't place enough trust in the system yet to factor in positively or negatively into my perception of prioritization." B3

Most participants explicitly stated that using the system did not interrupt their workflow. This was partly due to the placement of the visualization within the user's workspace. Furthermore, the file tracking software was passive in nature, such that once the user had selected their directories, no further action was needed. Participant C1 also remarked that the passive nature of the data collection yielded more information than their usual workflow, because their usual workflow (Git) relies on the user to push information.

### 5.2 Determining engagement

To determine whether tracking file activity can give teammates a sense of their teammates' efforts (RQ1), we asked open-ended questions during each bi-weekly assessment and conducted a follow-up interview at the end of the study. We found that participants felt AmbiTeam's monitoring method gave a measure of user engagement.

"Tracking over time as you change it, it's simple so it does give you a measure of whether or not the person is engaged. Or not engaged.
So I think it's a good measurement of that." C1

However, participants reported several activities integral to their work that were not tracked by the system. In general, these activities were related to collaboration, idea development, and management. Some of the suggested activities are likely fairly easy to take into account, such as tracking the number of files in a directory (e.g., a library of literature for a project), the size of files (e.g., as figures get made and manuscripts and code get written), written meeting minutes, and the number of times a program is run. Others could be tracked by the existing software if users changed their behavior, such as making handwritten notes in a digital notebook as opposed to on physical pieces of paper.

However, many of the suggested activities (e.g., tracking emails, phone calls, internet searches, or time spent on the top window of a computer) are difficult to take into account without invading privacy. Several participants stated that they would not want personal data to be tracked unless it were somehow necessary for the team. Even then, participants requested caution when setting up AmbiTeam in order to prevent project-sensitive data from being tracked. For example, during the setup of group D, participants deliberately chose directories that contained metadata and statistics about the participants in their studies but did not contain identifiable data.

Finally, participants believed that for optimal use, the files and activities chosen for monitoring depend on the context of the user's work. They suggested that some metrics would be more suited to some roles than others. For example, since B4 was running user studies, the length of their files represents the amount of data collected and is more indicative of work than the number of files, which merely reflects the number of participants.
Certain file types, such as those automatically created by ArcGIS [16] (a geographic information system mapping technology used by A1) and TensorFlow [40] models (a tool for building machine learning models used by B1), are generated in bulk and do not necessarily indicate large amounts of effort.

### 5.3 Perceptions of effort

We wanted to know whether AmbiTeam affected researchers' perceptions of their collaborators' effort on a project (RQ2). To do this, we tested whether there is a correlation between the average activity levels of a collaboration (as measured by our system) and the researchers' perceptions of how much effort their collaborators were putting in. We performed a Pearson's product-moment correlation test on participants' average displayed activity (activity) and the change in personal ratings (personal ratings). We found no correlation ($r = 0.09$, $p > 0.05$) between personal ratings and activity. We also performed a Pearson's product-moment correlation test on activity and the change in the ratings assigned to them by their collaborators (collaborator ratings). We found a weak positive correlation ($r = 0.22$, $p = 0.011$) between collaborator ratings and activity: as each participant's apparent activity increased, their collaborators' ratings of them increased. In summary, using AmbiTeam did not affect users' reported perceptions of their own effort; however, it did affect users' perceptions of their collaborators' effort.

### 5.4 User behaviors

To determine what effects the provision of team project activity information had on work behavior (RQ3), we asked open-ended questions during each bi-weekly assessment and conducted a follow-up interview at the end of the study. We found that, on the whole, participants did not believe that using the system changed their collaborators' behaviors. However, many reported changing their own behaviors.
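As background for the analysis in Sect. 5.3, a Pearson product-moment coefficient can be computed directly from paired observations. The data values below are purely illustrative, not the study's actual measurements.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical pairs: a participant's average displayed activity vs. the
# change in the ratings collaborators assigned to them (made-up values).
activity = [3.0, 5.5, 1.2, 4.8, 2.1, 6.0]
rating_change = [0.5, 1.0, -0.5, 1.5, 0.0, 1.0]
r = pearson_r(activity, rating_change)
```

A value of $r$ near 0 indicates no linear relationship, as found for the personal ratings, while a small positive $r$ with a significant $p$-value corresponds to the weak collaborator-rating effect reported above.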
In some cases, participants changed the way their work was conducted to boost visibility and ensure that their collaborators knew that they were involved. For example, participant A2 described a time when they were creating a wiki for their project online. Since AmbiTeam was unable to track the changes made to their online wiki, A2 wrote much of the text for the wiki in a text editor that saved changes to a file tracked by the system before uploading the text to the wiki. This ensured that their efforts to update the wiki appeared on the visualization. In addition, several participants mentioned saving their files more frequently so that their changes would register as activity and appear on the visualization.

Many participants reported that AmbiTeam made them feel more motivated to work on their projects. Sometimes this was due to participants noticing a lull in their own activity, which reminded them to work on the project. Motivation was also often attributed to seeing their collaborators' activity.

"Having a view of other people are working hard and then you don't want to be the last one. It's like a challenge." D2

Participant A2 noted that the system had a positive impact due to its effect on motivation and a desire to work effectively.

"Positive, because it helped motivate me to make the project a priority even though it's not the most fun thing to work on." A2

### 5.5 Future directions and applications

All participants stated that they would be willing to use AmbiTeam, or a refined version of AmbiTeam, in the future for either professional or casual use. Several participants mentioned a desire to use the system in research collaborations to keep abreast of what their collaborators were up to. For example, participant C1 mentioned using the prior day's activity: "I could glance at [it] as sort of like a morning statistics for yesterday."
Another use of the system would be for a project manager to balance the workload across researchers on a project, as described by participant B3: "I probably would want to use it just to see how much work each of my teammates is doing so that the load is balanced out evenly."

Other participants reported that they would use AmbiTeam in a classroom setting, both as students working with groupmates they didn't know well or didn't pick and as professors managing class groups.

"I've had problems in the past ... they didn't do anything until the last week and even then in the last week, you know, I may have built the vast majority of it. They still get the same amount of credit." C1

Several participants also stated that they would use AmbiTeam for personal use. Participant A1 described not being interested in worrying about their collaborators' productivity, but was interested in using the system to take a "long term perspective" and revisit their own project-related activity. The goal would be to have a better understanding of the work that they had done in the past. In a similar vein, participant B2, a self-proclaimed "data junky," expressed an interest in using AmbiTeam to gain deeper insight into their workflow. A1 also expressed a belief that AmbiTeam could be useful for recent Ph.D. graduates who have transitioned from working solely on their dissertation to managing multiple projects and need a better grasp of their priorities. Finally, A2 expressed an interest in using the system with a friend to stay motivated to work.

"In the same way that it's better to go to the gym with a friend because it motivates you because even on that one day when you really don't feel like going they'll go and then they'll help you get over that hump." A2

Participants also expressed a desire to extend AmbiTeam to support additional tasks.
For example, participants conveyed an interest in integrating AmbiTeam with task management systems, allowing users to connect the activity shown on the visualization with specific tasks and goals. Participant C2 also suggested incorporating a messaging system that would allow a user to contact a collaborator when they notice a lull in activity.

"[If] I made some changes that we needed to discuss that I could just look at my collaborator and just tap ... saying hey, there's something that needs to be discussed." C2

## 6 DISCUSSION

### 6.1 Motivational presence of others

Many of the participants reported feeling more motivated and productive while using AmbiTeam. These feelings can likely be attributed to the motivational presence of others [33]. Our participants' responses indicated they were aware of being watched by their teammates, which changed their behavior, as described by B1:

"Because I know we are being tracked, I want to make use of time to work efficiently." B1

Researchers often use the presence of specific teammates in a shared space to guide their work [15]. Similarly, our participants also reported feeling motivated by seeing their collaborators work on the project, as stated by C2:

"Every single time that happened I was like, oh he's working, I should probably work on it too." C2

Unfortunately, these effects often dissipate once the participant no longer has a sense of the presence of their collaborators. Depending on the scientific questions that they seek to answer, researchers may spend time performing fieldwork away from the desks where AmbiTeam is set up. More investigation is necessary to determine whether the increased motivation facilitated by the system is sustained when researchers are unable to access AmbiTeam.

### 6.2 Remote vs. co-located projects

Given the difficulties that researchers have maintaining awareness of the work progress of collaborators at remote locations without the ability to casually "look over their shoulder" [33], we expected that AmbiTeam would have a smaller effect on co-located participants' perceptions of their collaborators. Indeed, participants from the co-located teams reported having an easier time determining their co-located collaborators' effort and reported that the system had a smaller effect on their perception of their collaborators' priorities.

However, we found that AmbiTeam sometimes provided similar benefits to co-located participants as it did to remote participants. One co-located participant (C1) indicated that using AmbiTeam provided more information about their collaborator's effort than they got from their frequent communications with that collaborator, despite sitting next to each other. In this case, the information provided by AmbiTeam caused this co-located participant to change their expectations to take their collaborator's conflicting priorities into account. It is important to note that neither participant on Team C reported experiencing any negative effects from AmbiTeam's use. This finding indicates that AmbiTeam can be an effective tool even in co-located projects.

### 6.3 Privacy vs. accurate activity tracking

During the post-study interviews, participants mentioned several activities that are part of their workflow that were not tracked by AmbiTeam during the study. However, tracking several of these activities would involve significant privacy violations, namely tracking in-person conversations, emails, and internet browsing history. This leads to the question of how to balance accurate activity tracking with maintaining users' privacy.
It is possible that tracking additional, less-sensitive information (e.g., file length, degree to which a file has been changed), paired with customized tracking on a per-project and per-user basis, may provide enough information that monitoring more-sensitive information like communications between collaborators becomes unnecessary. Further research is necessary to determine whether this is the case.

### 6.4 Future work

One of the many dangers of remote work is loss of motivation. In co-located work, the presence of others has a large and important impact on teammates' motivation [33]. We believe AmbiTeam was able to capture some of the motivational presence of others in remote work using an ambient display. In future work, we will explore other ways in which ambient displays can increase motivation.

Although tracking file activity allows us to gain some measure of effort, it does not encompass many important parts of work (thinking, discussing, etc.). Future work can explore the use of different metrics for providing team awareness, such as the amount of progress on given tasks. In addition, future work can explore the long-term effects of systems like AmbiTeam to determine whether the immediate increase in productivity due to being watched decreases over long periods of time, and whether tensions arise due to the limited display of team members' contributions.

We evaluated AmbiTeam with collaborations of academic researchers who, while pursuing different research questions, had similar workflows. It is likely that all knowledge workers (workers who apply knowledge acquired through formal training to develop services and products [14]) can benefit from a system like AmbiTeam, given that they generally have high amounts of screen time. However, it is less clear whether ambient displays work for all types of workers, including those whose jobs are very different from that of a knowledge worker (e.g., service work).
In organizations with a clear hierarchy, does the role of the user affect the usefulness of AmbiTeam? Are there types of ambient data from a CEO that would motivate workers? For these reasons, future work includes exploring the use of AmbiTeam in a variety of work contexts.

It is also unclear how well ambient displays work for providing activity information in large teams. Our assessment of AmbiTeam was with small teams of 2-4 people. How well would a system like AmbiTeam work for an entire organization? Given that organizations are frequently divided into smaller teams, is there even a need for systems like AmbiTeam to work with large collaborations?

Many collaborations are highly temporally dispersed, sometimes operating across extreme time zone differences. In these situations, such as with a 12-hour time zone difference, people are not working at the same time. Can we still effectively summarize progress from their work? Is the provision of activity information about a coworker who is not working at the same time still motivating?

## 7 CONCLUSION

In this paper, we described and evaluated a system meant to assist researchers experiencing the problem of perceived prioritization. We found that, despite shortcomings with regard to activity tracking, AmbiTeam had some effect on users' perceptions of their collaborators' effort as well as on their motivation to work on their collaborative project. This work has implications for creating effective awareness-based technology for supporting collaborative work, particularly the recommendation that future awareness systems consider (a) using file activity to measure effort and (b) implementing ambient displays that do not interrupt the user's workflow.
# Audiovisual AR concepts for laparoscopic subsurface structure navigation

Authors anonymized for peer review.

## Abstract

Background: The identification of subsurface structures during resection wound repair is a challenge during minimally invasive partial nephrectomy. Specifically, major blood vessels and branches of the urinary collecting system need to be localized under time pressure as target or risk structures during suture placement. Methods: This work presents concepts for AR visualization and auditory guidance based on tool position that support this task. We evaluated the concepts in a laboratory user study with a simplified, simulated task: the localization of subsurface target points in a healthy kidney phantom. We evaluated the task time, localization accuracy, and perceived workload for our concepts and a control condition without navigation support. Results: The AR visualization improved the accuracy and perceived workload over the control condition. We observed similar, non-significant trends for the auditory display. Conclusions: Further clinically realistic evaluation is pending.
Our initial results indicate the potential benefits of our concepts in supporting laparoscopic resection wound repair. + +Index Terms: Human-centered computing-Visualization-; Human-centered computing-Human computer interaction (HCI)- Interaction paradigms-Mixed / augmented reality; Human-centered computing-Human computer interaction (HCI)-Interaction devices-Sound-based input / output; Applied computing-Life and medical sciences—— + +## 1 INTRODUCTION + +### 1.1 Motivation + +The field of augmented reality (AR) for laparoscopic surgery has inspired broad research over the past decade [2]. This research aims to alleviate the challenges that are associated with the indirect access in such operations. One challenging operation that has attracted much attention from the research community is laparoscopic or robot-assisted partial nephrectomy (LPN/RPN) [17, 19]. LPN/RPN is the standard treatment for early-stage renal cancer. The operation's objective is to remove the intact (i.e., entire) tumor from the kidney while preserving as much healthy kidney tissue as possible. Three challenging phases in this operation can particularly benefit from image guidance or AR navigation support [19]: i) the management of renal blood vessels before the tumor resection, ii) the intraoperative resection planning and the resection, iii) the repair of the resection wound after the tumor removal. Although numerous solutions have been proposed to support urologists during the first two phases [19], no dedicated AR solutions exist for the third. Specifically, urologists need to identify major blood vessels or branches of the urinary collecting system that have been severed or that lie closely under the resection wound's surface and could be damaged during suturing. One additional challenging factor is that this surgical phase is performed under time pressure due to the risk of ischemic damage or increased blood loss (depending on the vascular clamping strategy). 
Providing correct AR registration and meaningful navigation support during this phase poses technical challenges. One challenge that affects the visualization of AR information is the removal of renal tissue volume, which leaves an undefined tissue surface inside the original organ borders. In this work, we present an AR visualization and an auditory display concept that rely on the position of a tracked surgical tool to support the urologist in identifying and locating subsurface structures. We also report a preliminary proof-of-concept evaluation through a user study with an abstracted task. AR registration and the clinical evaluation of our concepts lie outside the scope of this work.

### 1.2 Related work

Multiple reviews provide a comprehensive overview of navigation support approaches for LPN/RPN [3, 17, 19]. Although no dedicated solutions exist to support urologists during the resection wound repair phase, one application has been reported in which a general AR model of intrarenal structures was used during renorrhaphy [24]. This approach, however, does not address the unknown resection wound surface geometry and potential occlusion issues. Moreover, multiple solutions have been proposed to visualize intrarenal vascular structures. These include solutions in which a preoperative model of the vascular structure is rendered in an AR overlay [23, 30]. This may be less informative after an unknown tissue volume has been resected. Other methods rely on real-time detection of subsurface vessels [1, 18, 29]. However, these are unlikely to perform well when the vessels are clamped (suppressing blood flow and pulsation) or when the organ surface is occluded by blood. Outside of LPN/RPN, such as in angiography exploration, visualization methods have been developed to communicate the spatial arrangement of vessels.
These include the chromadepth [28] and pseudo-chromadepth methods [20, 26], which map vessel depth information to color hue gradients. Kersten-Oertel et al. [21] showed that color hue mapping, along with contrast grading, performs well in conveying depth information for vascular structures. The visualization of structures based on tool position has inspired work both inside and outside of the field of LPN/RPN: Singla et al. [27] proposed visualizing the tool position in relation to the tumor prior to resection in LPN/RPN. Multiple visualizations have been proposed for the spatial relationship between surgical needles and the surrounding vasculature [15]. However, these visualizations target minimally invasive needle interventions, where the instrument moves in between the structures of interest.

In addition to visual approaches to supporting LPN/RPN and other navigated applications, recent work has shown that sound can augment or replace visual cues to aid task completion. With a so-called auditory display, changes in a set of navigation parameters are mapped to changes in the parameters of a real-time sound synthesizer. An everyday example is the common automobile parking assistance system: the distance of the automobile to a surrounding object is mapped to the inter-onset interval (i.e., the time between tones) of a simple synthesizer. The use of auditory display has been motivated by the desire to increase clinician awareness, to replace the lost sense of touch when using teleoperated devices, or to help clinicians correctly interpret and follow navigation paths. There have been, however, relatively few applications of auditory display in medical navigation. Evaluations have been performed for radiofrequency ablation [5], temporal bone drilling [8], skull base surgery [9], soft tissue resection [13], and telerobotic surgery [6, 22].
These have shown auditory display to improve recognition of structure distance and accuracy and to diminish cognitive workload and rates of clinical complication. Disadvantages have included increased non-target tissue removal and longer task completion times. For a thorough overview of auditory display in medical interventions, see [4].

## 2 NAVIGATION METHODS

We pursued two routes to provide navigation content to the urologist: The first approach is the AR visualization of preoperative anatomical information in a video see-through setting. The second approach is an auditory display.

### 2.1 AR visualization

Our AR concept aims to provide the urologist with information about intrarenal risk structures. We, therefore, based our visualization on preoperative three-dimensional (3D) image data of the intrarenal vasculature and collecting system. These were segmented and exported as surface models. We assumed that the resection volume and resulting wound geometry are unknown. Simply overlaying the preoperative models onto the laparoscopic video stream would also show all risk structures that were removed with the resection volume. We, therefore, propose a tool-based visualization in which only information about risk structures in front of a pointing tool is rendered and overlaid onto the video stream. To this end, the urologist can place a spatially tracked pointing tool on the newly created organ surface (i.e., the resection ground) and see the risk structures beneath. We placed a virtual circular plane with a diameter of 20 mm around the tooltip, perpendicular to the tool axis. The structures in front of this plane (following the tool direction) are projected orthogonally onto the plane and rendered accordingly. The two structure types are visualized with two different color scales (Figure 1a). The scales visualize the distance between a given structure and the plane.
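As an illustration, the geometric core of this tool-based probing (testing whether a structure point lies in front of the 20 mm disc and normalizing its depth for a color lookup) can be sketched as follows. All names, the linear depth ramp, and the default probing depths are our own assumptions, not the prototype's implementation:

```python
import numpy as np

def probe_depths(points, tip, tool_dir, radius=0.010, d_min=0.0, d_max=0.04):
    """For each structure point, return a normalized probing depth in [0, 1]
    if the point lies in front of the circular tool plane (20 mm diameter),
    or NaN if it is behind the plane or outside the disc."""
    n = np.asarray(tool_dir, float)
    n = n / np.linalg.norm(n)                    # plane normal = tool axis
    rel = np.asarray(points, float) - np.asarray(tip, float)
    depth = rel @ n                              # signed distance along the axis
    lateral = np.linalg.norm(rel - np.outer(depth, n), axis=1)
    inside = (depth > 0.0) & (lateral <= radius)
    t = np.clip((depth - d_min) / (d_max - d_min), 0.0, 1.0)
    return np.where(inside, t, np.nan)           # NaN = not rendered
```

A renderer would then use the returned values to index the respective color scale, with NaN marking points that are not rendered.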
The scale ends correspond to a minimum and maximum probing depth that can be set for different applications. The scale hues were selected based on two criteria: Firstly, we investigated which hues provide good contrast visibility in front of laparoscopic videos. Secondly, the choice of yellow for urinary tracts and blue-magenta for blood vessels is consistent with conventions in anatomical illustrations and should be intuitive for medical professionals. For the urinary tract, color brightness and transparency are changed across the spectrum. For the blood vessels, color hue, brightness, and transparency are used. These color spectra aim to combine the color gradient and fog concepts that were identified as promising approaches by Kersten-Oertel et al. [21]. An example of the resulting visualization (using a printed kidney phantom) is provided in Figure 1b. The blue line marks the measured tool axis.

### 2.2 Audio navigation

After iterative preliminary designs were evaluated informally with 12 participants, an auditory display consisting of two contrasting sounds was developed to represent the structures. The sound of running water was selected to represent the collecting system, and a synthesized tone was created to represent the vessels. The size and number of the vessels in the scanning area are encoded in a three-level density score. Density is then mapped to the water pressure for the collecting system and to the tone's pitch for the vessels, with higher pressure and pitch indicating a denser structure. Finally, the rhythm of each sound encodes the distance between the instrument tip and the closest point on the targeted structure, with a faster rhythm representing a smaller distance. To express the density of the collecting system, the water pressure is manipulated to produce three conditions (low, medium, and high pressure), representing low, medium, and high density.
The water sound is triggered every 250 ms, 500 ms, or 2000 ms, depending on whether the distance is categorized as inside, close, or far, respectively.

(a) Color spectrum for blood vessels (top) and urinary tract (bottom). The color values are in RGBA format.

![01963ea5-83b9-7b94-8aaf-8a6965e184cc_1_923_534_723_356_0.jpg](images/01963ea5-83b9-7b94-8aaf-8a6965e184cc_1_923_534_723_356_0.jpg)

(b) Laparoscopic view of a printed kidney phantom with the visual AR overlay.

Figure 1: AR visualization.

A distant structure resembles an uninterrupted flow of water, and a nearby structure is heard as rhythmic splashes. Inside the structure, a rapid splashing rhythm is accompanied by an alert sound.

### 2.3 Prototype implementation

We implemented our overall software prototype and its visualization in Unity 2018 (Unity Software, USA). The auditory display was implemented using Pure Data [25].

#### 2.3.1 Augmented reality infrastructure

The laparoscopic video stream was generated with an EinsteinVision® 3.0 laparoscope (B. Braun Melsungen AG, Germany) with a 30° optic in monoscopic mode. We used standard laparoscopic graspers as a pointing tool. The camera head and the tool were tracked with an NDI Polaris Spectra passive infrared tracking camera (Northern Digital Inc., Canada). We calibrated the laparoscopic camera based on a pinhole model [31] as implemented in the OpenCV library¹ [7]. We used a pattern of ChArUco markers [12] for the camera calibration. The external camera parameters (i.e., the spatial transformation between the laparoscope's tracking markers and the camera position) were determined with a spatially tracked calibration body. The spatial transformation between the tool's tracking markers and its tip was determined with pivot calibration using the NDI Toolbox software (Northern Digital Inc.).
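Pivot calibration of this kind is commonly posed as a linear least-squares problem: each tracked marker pose (Rᵢ, tᵢ) of the tool must map the fixed but unknown tip offset to the same fixed but unknown pivot point, Rᵢ p_tip + tᵢ = p_pivot. A minimal sketch of this standard formulation follows (our own illustration, not the NDI Toolbox internals):

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Solve R_i @ p_tip + t_i = p_pivot over all poses in a least-squares
    sense. Returns (p_tip in the marker frame, p_pivot in the tracker frame)."""
    A_rows, b_rows = [], []
    for R, t in zip(rotations, translations):
        # R_i p_tip - p_pivot = -t_i  ->  [R_i | -I] [p_tip; p_pivot] = -t_i
        A_rows.append(np.hstack([np.asarray(R, float), -np.eye(3)]))
        b_rows.append(-np.asarray(t, float))
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```

At least two sufficiently different orientations are required for the stacked system to have full rank; in practice the tool is swiveled widely around the pivot point.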
The rotational transformation between the tracking markers and the tool axis was measured with our calibration body. The resulting laparoscopic video stream, with or without the AR overlay, was displayed on a 24-inch screen. AR registration for this surgical phase was outside the scope of this study. The kidney registration was based on the predefined spatial transformation between our kidney phantom and its tracking geometry (see Study setup).

---

¹ We used the commercially available OpenCV for Unity package (Enox Software, Japan).

---

#### 2.3.2 AR visualization implementation

The circular plane was placed at the tooltip and perpendicular to the tool's axis as provided by the real-time tracking data. The registration between the visualization and the camera was provided by the abovementioned tool and camera calibrations and the real-time tracking data. The plane was then overlaid with a mesh with a rectangular vertex arrangement. The vertices had a density of 64 pts/mm² and served as virtual pixels. We conducted a ray-casting request for each vertex. For each ray that hit the surface mesh of a structure in our virtual model, the respective vertex was colored according to the type and ray collision distance of that structure. The visualization was permanently activated in our study prototype.

#### 2.3.3 Auditory display implementation

The synthesized tone contrasts with the water sound to ensure that the two sounds remain distinguishable. The synthesized sound is created from the base frequencies of 65.4 Hz, 130.8 Hz, and 261.6 Hz (C2, C3, and C4 notes) and harmonized by each frequency's first to eighth harmonics, creating a complex tone. The density of the vessels is measured with ray-casting requests equivalent to those of the visual implementation. The number of virtual pixels that would depict a given structure type determines the density for that type.
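As a hypothetical sketch of this density scoring, the fraction of hit virtual pixels can be binned into the three levels and mapped to the base frequencies named above; the binning thresholds and all names are our assumptions, only the three frequencies are taken from the text:

```python
# C2, C3, and C4 base frequencies, as stated for the synthesized tone.
BASE_FREQ_HZ = {"low": 65.4, "medium": 130.8, "high": 261.6}

def density_level(hit_count, total_vertices, low_frac=0.05, high_frac=0.25):
    """Bin the fraction of ray-hit virtual pixels for one structure type
    into the three-level density score (thresholds are assumed)."""
    frac = hit_count / total_vertices
    if frac < low_frac:
        return "low"
    return "medium" if frac < high_frac else "high"

def vessel_pitch_hz(hit_count, total_vertices):
    """Map the vessel density level to the tone's base frequency."""
    return BASE_FREQ_HZ[density_level(hit_count, total_vertices)]
```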
This density is then encoded in the pitch of the tone, meaning that 65.4 Hz, 130.8 Hz, and 261.6 Hz (C2, C3, and C4 notes) represent low, medium, and high density, respectively. The repetition time of the tones expresses the distance between the instrument tip and the closest point on the targeted vessel. Similar to the water sound, a continuous tone represents a far-away vessel, while a close vessel is heard as the tone being repeated every 500 ms with a duration of 400 ms. Being inside the vessel triggers an alert sound played every 125 ms, accompanied by the tone every 250 ms.

## 3 EVALUATION METHODS

We conducted a simulated-use proof-of-concept evaluation study with N = 11 participants to investigate whether our concepts effectively support urologists in locating subsurface structures in laparoscopic surgery.

### 3.1 Study task

The specific challenges of identifying relevant subsurface structures for suture placement in resection wound repair are difficult to replicate in a laboratory setting. We devised a study task that aimed to imitate the identification of specific structures beneath an organ surface: Participants were presented with a printed kidney phantom in a simulated laparoscopic environment. We also displayed a 3D model of the same kidney on a 24-inch screen. This virtual model included surface meshes of the vessel tree and collecting system inside that kidney (Figure 2). Participants could manipulate the view of that model by panning, rotating, and zooming. For each study trial, we marked a point on the internal structures (a blood or urine vessel) in the virtual model with a red dot (Figure 2). The target points were arranged into four clusters to prevent familiarization with the target structures throughout the experiment.
The participants were then asked to point the surgical tool at the location of that subsurface point in the physical phantom as accurately and as quickly as possible by placing the tool on the surface and orienting it such that the tool's direction pointed towards the internal target point.

### 3.2 Study design

Our study investigated the impact of the visual and auditory support on the performance and perceived workload of the navigation task. We examined two independent variables with two levels each (2 × 2 design): the presence or absence of the visual support and the presence or absence of the auditory support. The condition in which neither support modality was present was the control condition. Three dependent variables were measured and analyzed: Firstly, we measured the task completion time. Time started counting when the target point was displayed. It stopped when participants gave a verbal cue that they were confident they were pointing at the target as accurately as possible. Secondly, we measured how accurately they pointed the tool. Accuracy was measured as the closest distance between the tool's axis and the target point (point-to-ray distance). Finally, we used the NASA Task Load Index (NASA-TLX) [14] questionnaire as an indicator of the perceived workload. The NASA-TLX questionnaire is based on six contributing dimensions of subjectively perceived workload. The weighted ratings for each dimension are combined into an overall workload score.

Figure 2: Virtual kidney model with the target point clusters. The model is shown from a medial-anterior perspective, corresponding to the participant's position.

### 3.3 Study sample

Eleven participants took part in our study (six females, five males). All participants were medical students between their third and fifth year of training. Participants were aged between 24 and 33 years (median = 25 years). All participants were right-handed.
Four participants reported between one and five hours of experience with laparoscopic interaction (median = 3 h), and seven participants reported between one and 15 hours of AR experience (median = 2 h). Finally, eight participants reported being trained in playing a musical instrument. No participants reported any untreated vision or hearing impairments.

### 3.4 Study setup

The virtual kidney model and its physical phantom were created from a public database of abdominal computed tomography imaging data [16]. We segmented a healthy left kidney using 3D Slicer [11] and exported the parenchymal surface, the vessel tree, and the urinary collecting system as separate surface models. The parenchymal surface model was printed with the fused deposition modeling method and equipped with an adapter for passive tracking markers (Figure 3a). The phantom was placed in a cardboard box to simulate a laparoscopic working environment (Figure 3b). The screen with the laparoscopic video stream was placed opposite the participant, and the screen with the virtual model viewer was placed to the participant's right. A mouse was provided to interact with the model viewer, and a standard commercial multimedia speaker was included for the auditory display. The overall study setup is shown in Figure 4.

(a) Kidney phantom with tracking marker adapter. (b) Cardboard box with tool holes.

Figure 3: Components of the simulated laparoscopic environment.

### 3.5 Study procedure

Participants' written consent and demographic data were collected upon arrival. The participants then received an introduction to the visualization and the auditory display. Participants conducted one trial block per navigation method. In each trial block, they were asked to locate the three points of one cluster, with one trial per point. After each trial block, one NASA-TLX questionnaire was completed for the respective navigation method.
The order of the navigation methods and the assignment between the point clusters and the navigation methods were counterbalanced. The order in which the points had to be located within each trial block was permuted.

### 3.6 Data analysis

During initial data exploration, we noticed a trend that participants took more time to complete the task in the first trial they attempted with each method than in the second and third trials. Therefore, the first trial for each method and participant was regarded as a training trial and excluded from the analysis. The data (time and accuracy) from the remaining two trials of each block were averaged, and a repeated-measures two-way analysis of variance (ANOVA) was conducted for each dependent variable.

## 4 RESULTS

The descriptive results for the three dependent variables are listed in Table 1. We found significant main effects of the presence of the visual display on the accuracy (p < 0.001) and the NASA-TLX rating (p = 0.03). The ANOVA results are listed in Table 2. The significant effects are plotted in Figure 5.

![01963ea5-83b9-7b94-8aaf-8a6965e184cc_3_926_147_720_550_0.jpg](images/01963ea5-83b9-7b94-8aaf-8a6965e184cc_3_926_147_720_550_0.jpg)

Figure 4: Overall study setup.

## 5 DISCUSSION

### 5.1 Discussion of results

The most evident result from our evaluation is that the visual display increases the accuracy and reduces the perceived workload of identifying subsurface vascular and urinary structures in our simplified task. At the same time, the visual display did not reduce the task completion time. Generally, there were non-significant trends that all tested conditions with visual or auditory display performed more accurately and tended to cause a lower perceived workload. However, the navigation support conditions tended to perform less quickly than the control condition.
This may be because the required mental spatial transformations are reduced, but a greater amount of information needs to be processed by the participants.

This explanation is also supported by the result that the combined auditory and visual display performed worse than the visual-only condition within our sample. While this trend is not statistically significant, it poses a question: Were the auditory display designs somewhat misleading or distracting, or is the combination of multimodal channels for the same information in itself potentially hindering in this task? The trend of auditory support performing slightly better than the control condition within our sample (without significance) may indicate that the latter explanation is more likely. Another aspect may be users' lower familiarity with auditory navigation than with visual cues. Further training and greater participant experience may also reduce the difference in performance between the visual and auditory navigation aids.

The AR visualization was well suited for the abstracted task in our proof-of-concept evaluation. In the clinical context, a semitransparent display of our visualization may be better suited to prevent occlusion of the relevant surgical area. This occlusion can further be reduced by providing a means to interactively activate or deactivate the visualization.

Finally, the absolute values we measured for our dependent variables are less meaningful than the comparative effects we found for our navigation conditions. Multiple design factors limit the clinical validity of our study, including the exclusion of a registration pipeline. This means that the absolute task time or pointing error may well deviate from the reported descriptive results.

### 5.2 General discussion

Our tests yielded preliminary and successful proof-of-concept results for the audiovisual AR support for resection wound repair.
The results indicate that audio guidance may be helpful, but its benefit could not be shown to be statistically significant within our sample. However, there are limitations to the clinical validity of our prototypes and our study setup.

Table 1: Descriptive results for all dependent variables. All entries are in the format mean (standard deviation).
| Navigation condition | Task completion time [s] | Accuracy [mm] | NASA-TLX |
| --- | --- | --- | --- |
| No support | 29.92 (18.59) | 12.54 (4.21) | 14.14 (2.18) |
| Auditory support | 41.44 (22.9) | 9.69 (5.14) | 12.93 (3.46) |
| Visual support | 36.02 (19.21) | 4.39 (3.49) | 10.87 (3.86) |
| Auditory and visual support | 38.12 (22.48) | 6.45 (5.08) | 11.93 (3.3) |

Table 2: ANOVA results for all variables. AD: Auditory display, VD: Visual display. All cells are in the format <F value (degrees of freedom); p value>.

| Dependent variable | Main effect AD | Main effect VD | Interaction AD:VD |
| --- | --- | --- | --- |
| Task completion time | 1.41 (1, 10); 0.263 | 0.17 (1, 10); 0.688 | 1.47 (1, 10); 0.253 |
| Accuracy | 0.11 (1, 10); 0.748 | 28.01 (1, 10); <0.001* | 2.67 (1, 10); 0.133 |
| NASA-TLX | 0.01 (1, 10); 0.911 | 6.35 (1, 10); 0.03* | 1.47 (1, 10); 0.253 |
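Because every effect in Table 2 has a single numerator degree of freedom, each F value can equivalently be computed as the squared paired t statistic of per-participant contrast scores. A minimal sketch with hypothetical contrast data (not our study data; all names are our own):

```python
import numpy as np

def rm_effect_f(contrasts):
    """F-test for a single-df effect in a repeated-measures design, given
    one contrast score per participant (e.g., for the visual-display main
    effect: a participant's mean with VD minus their mean without VD)."""
    c = np.asarray(contrasts, float)
    n = c.size
    t = c.mean() / (c.std(ddof=1) / np.sqrt(n))  # paired t statistic
    return t ** 2, (1, n - 1)                    # F value and (df1, df2)
```

For a main effect, the contrast is each participant's mean across the two cells with the factor present minus the mean across the two cells without it; for the interaction, it is the difference of those differences.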
(a) Visual display main effect on the pointing accuracy. (b) Visual display main effect on the NASA-TLX rating.

Figure 5: Significant ANOVA main effects. The error bars represent standard errors.

First and foremost, the study task is an abstraction of the actual surgical task: The surgical task requires not only the identification of major subsurface structures but also the judgment and selection of a suture path. This task limitation went along with an abstract laparoscopic environment and surgical site. Our kidney phantom imitated an in-vivo kidney only in its geometric properties. The color, biomechanical behavior, and surgical surroundings did not resemble their real clinical equivalents. Moreover, the phantom was simplified in that it was based on an intact kidney rather than containing a resection wound. While this simplification is an additional limitation to our study's clinical validity, we believe that introducing a phantom with a resection bed will only be meaningful in combination with a more complex simulated task. This is because, in a realistic setting, the urologist will be familiar with the wound and aware of potential landmarks (like intentionally severed vessels) to help navigate. This would not have been the case in our simplified task and for our participants. One further step in improving the phantom for increased realism may be the simulation of the tissue deformation that occurs during the procedure. This may be achieved by producing one preoperative phantom and one intraoperative phantom based on simulated intraoperative deformation (e.g., using the Simulation Open Framework Architecture [10]).

Another aspect that could improve the clinical validity of our evaluation is a more realistic task. The most valid performance parameter, however, would be the frequency of suture setting errors.
Because these errors are not very frequent, such a study would require a large sample consisting of experienced urologists, which is logistically challenging. We, therefore, regard our preliminary evaluation as a good first indication of the aptitude of our navigation support methods.

Future evaluation with a more realistic phantom and task should include the overlay of AR structures on the simulated resection wound as an (additional) reference condition. Moreover, AR registration was excluded from our study's scope to focus the investigation on the tested information presentation methods. A dedicated registration method for post-resection AR has been previously proposed [reference anonymized for peer review]. This could be combined with the dedicated AR concepts reported in this article for future, high-fidelity evaluations.

The participants were medical students with limited laparoscopic experience: They were less trained in the spatial cognitive processes involved in laparoscopic navigation than the experienced urologists who would be the intended users of a support system like ours. Thus, the navigation methods presented in this article will need to be further evaluated in clinically realistic settings. This may include testing on an in-vivo or ex-vivo human or porcine kidney. This, however, requires an effective AR registration, which is hampered by time pressure (in the in-vivo case) or postmortem deformation (in the ex-vivo case).

Beyond more clinically valid evaluation, some other research questions arise from our work: Firstly, further design iterations and comparisons should be implemented to evaluate whether the limited success of our auditory display was due to the specific designs or due to a limited aptitude of the auditory modality for such information. Secondly, further visualizations should be developed and compared with our first proposal to identify an ideal information visualization.
Finally, it should be investigated whether other procedures with soft tissue resection (e.g., liver or brain surgery) may benefit from similar navigation support systems for the resection wound repair.

## 6 CONCLUSION

This work introduces and tests an audiovisual AR concept to support urologists during the resection wound repair phase in LPN/RPN. To our knowledge, these are the first dedicated solutions that have been proposed for this particular challenge. These concepts have been preliminarily evaluated in a laboratory-based study with an abstracted task. Although the results only represent a proof-of-concept evaluation, we believe that they indicate the potential of our concepts. The next steps for this work include the integration of a targeted AR registration solution and the integrated prototype's evaluation in a clinically realistic setting. Pending this work, we believe that the concepts presented in this article sketch a promising path to a clinically meaningful AR navigation system for minimally invasive, oncological resection wound repair.

## ACKNOWLEDGMENTS

Funding information anonymized for peer review.

## REFERENCES

[1] A. Amir-Khalili, G. Hamarneh, J.-M. Peyrat, J. Abinahed, O. Al-Alao, A. Al-Ansari, and R. Abugharbieh. Automatic segmentation of occluded vasculature via pulsatile motion analysis in endoscopic robot-assisted partial nephrectomy video. Medical Image Analysis, 25(1):103-110, 2015. doi: 10.1016/j.media.2015.04.010

[2] S. Bernhardt, S. Nicolau, L. Soler, and C. Doignon. The status of augmented reality in laparoscopic surgery as of 2016. Medical Image Analysis, 37:66-90, 2017.

[3] R. Bertolo, A. Hung, F. Porpiglia, P. Bove, M. Schleicher, and P. Dasgupta. Systematic review of augmented reality in urological interventions: the evidences of an impact on surgical outcomes are yet to come. World Journal of Urology, 38(9):2167-2176, 2019. doi: 10.1007/s00345-019-02711-z

[4] D. Black, C. Hansen, A. Nabavi, R.
Kikinis, and H. Hahn. A survey of auditory display in image-guided interventions. International Journal of Computer Assisted Radiology and Surgery, 12(10):1665-1676, 2017. doi: 10.1007/s11548-017-1547-z

[5] D. Black, J. Hettig, M. Luz, C. Hansen, R. Kikinis, and H. Hahn. Auditory feedback to support image-guided medical needle placement. International Journal of Computer Assisted Radiology and Surgery, 12(9):1655-1663, 2017. doi: 10.1007/s11548-017-1537-1

[6] D. Black, S. Lilge, C. Fellmann, A. V. Reinschluessel, L. Kreuer, A. Nabavi, H. K. Hahn, R. Kikinis, and J. Burgner-Kahrs. Auditory display for telerobotic transnasal surgery using a continuum robot. Journal of Medical Robotics Research, 04(02):1950004, 2019. doi: 10.1142/S2424905X19500041

[7] G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, (25):120-125, 2000.

[8] B. Cho, M. Oka, N. Matsumoto, R. Ouchida, J. Hong, and M. Hashizume. Warning navigation system using real-time safe region monitoring for otologic surgery. International Journal of Computer Assisted Radiology and Surgery, 8(3):395-405, 2013. doi: 10.1007/s11548-012-0797-z

[9] B. J. Dixon, M. J. Daly, H. Chan, A. Vescan, I. J. Witterick, and J. C. Irish. Augmented real-time navigation with critical structure proximity alerts for endoscopic skull base surgery. The Laryngoscope, 124(4):853-859, 2014. doi: 10.1002/lary.24385

[10] F. Faure, C. Duriez, H. Delingette, J. Allard, B. Gilles, S. Marchesseau, H. Talbot, H. Courtecuisse, G. Bousquet, I. Peterlik, and S. Cotin. SOFA: A multi-model framework for interactive physical simulation. In Y. Payan, ed., Soft Tissue Biomechanical Modeling for Computer Assisted Surgery, vol. 11 of Studies in Mechanobiology, Tissue Engineering and Biomaterials, pp. 283-321. Springer, 2012. doi: 10.1007/8415_2012_125

[11] A. Fedorov, R. Beichel, J. Kalpathy-Cramer, J. Finet, J.-C. Fillion-Robin, S. Pujol, C. Bauer, D. Jennings, F. Fennessy, M. Sonka, J. Buatti, S. Aylward, J.
V. Miller, S. Pieper, and R. Kikinis. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magnetic Resonance Imaging, 30(9):1323-1341, 2012. doi: 10.1016/j.mri.2012.05.001

[12] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6):2280-2292, 2014. doi: 10.1016/j.patcog.2014.01.005

[13] C. Hansen, D. Black, C. Lange, F. Rieber, W. Lamadé, M. Donati, K. J. Oldhafer, and H. K. Hahn. Auditory support for resection guidance in navigated liver surgery. The International Journal of Medical Robotics + Computer Assisted Surgery: MRCAS, 9(1):36-43, 2013. doi: 10.1002/rcs.1466

[14] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock and N. Meshkati, eds., Human Mental Workload, vol. 52 of Advances in Psychology, pp. 139-183. North-Holland, Amsterdam and New York, 1988. doi: 10.1016/S0166-4115(08)62386-9

[15] F. Heinrich, G. Schmidt, F. Jungmann, and C. Hansen. Augmented reality visualisation concepts to support intraoperative distance estimation. In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology (VRST '19). ACM, New York, NY, USA, 2019. doi: 10.1145/3359996.3364818

[16] N. Heller, N. Sathianathen, A. Kalapara, E. Walczak, K. Moore, H. Kaluzniak, J. Rosenberg, P. Blake, Z. Rengel, M. Oestreich, J. Dean, M. Tradewell, A. Shah, R. Tejpaul, Z. Edgerton, M. Peterson, S. Raza, S. Regmi, N. Papanikolopoulos, and C. Weight. The KiTS19 Challenge Data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes.

[17] A. Hughes-Hallett, P. Pratt, E. Mayer, S. Martin, A. Darzi, and J. Vale.
Image guidance for all - TilePro display of 3-dimensionally reconstructed images in robotic partial nephrectomy. Urology, 84(1):237-242, 2014. doi: 10.1016/j.urology.2014.02.051

[18] E. S. Hyams, M. Perlmutter, and M. D. Stifelman. A prospective evaluation of the utility of laparoscopic Doppler technology during minimally invasive partial nephrectomy. Urology, 77(3):617-620, 2011. doi: 10.1016/j.urology.2010.05.011

[19] F. Joeres, D. Schindele, M. Luz, S. Blaschke, N. Russwinkel, M. Schostak, and C. Hansen. How well do software assistants for minimally invasive partial nephrectomy meet surgeon information needs? A cognitive task analysis and literature review study. PLoS ONE, 14(7):e0219920, 2019. doi: 10.1371/journal.pone.0219920

[20] A. Joshi, X. Qian, D. P. Dione, K. R. Bulsara, C. K. Breuer, A. J. Sinusas, and X. Papademetris. Effective visualization of complex vascular structures using a non-parametric vessel detection method. IEEE Transactions on Visualization and Computer Graphics, 14(6):1603-1610, 2008. doi: 10.1109/TVCG.2008.123

[21] M. Kersten-Oertel, S. J.-S. Chen, and D. L. Collins. An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery. IEEE Transactions on Visualization and Computer Graphics, 20(3):391-403, 2014. doi: 10.1109/TVCG.2013.240

[22] M. Kitagawa, D. Dokko, A. M. Okamura, and D. D. Yuh. Effect of sensory substitution on suture-manipulation forces for robotic surgical systems. The Journal of Thoracic and Cardiovascular Surgery, 129(1):151-158, 2005. doi: 10.1016/j.jtcvs.2004.05.029

[23] K. Nakamura, Y. Naya, S. Zenbutsu, K. Araki, S. Cho, S. Ohta, N. Nihei, H. Suzuki, T. Ichikawa, and T. Igarashi. Surgical navigation using three-dimensional computed tomography images fused intraoperatively with live video. Journal of Endourology, 24(4):521-524, 2010. doi: 10.1089/end.2009.0365

[24] F. Porpiglia, E. Checcucci, D. Amparore, F. Piramide, G. Volpi, S. Granato, P. Verri, M.
Manfredi, A. Bellin, P. Piazzolla, R. Autorino, I. Morra, C. Fiori, and A. Mottrie. Three-dimensional augmented reality robot-assisted partial nephrectomy in case of complex tumours (PADUA ≥ 10): A new intraoperative tool overcoming the ultrasound guidance. European Urology, 78(2):229-238, 2019. doi: 10.1016/j.eururo.2019.11.024

[25] M. Puckette. Pure Data: Another integrated computer music environment. Proceedings of the Second Intercollege Computer Music Concerts, (1):37-41, 1996.

[26] T. Ropinski, F. Steinicke, and K. Hinrichs. Visually supporting depth perception in angiography imaging. In A. Butz, ed., Smart Graphics, vol. 4073 of Lecture Notes in Computer Science, pp. 93-104. Springer, Berlin, 2006. doi: 10.1007/11795018_9

[27] R. Singla, P. Edgcumbe, P. Pratt, C. Nguan, and R. Rohling. Intra-operative ultrasound-based augmented reality guidance for laparoscopic surgery. Healthcare Technology Letters, 4(5):204-209, 2017. doi: 10.1049/htl.2017.0063

[28] R. A. Steenblik. The chromostereoscopic process: A novel single image stereoscopic process. SPIE Proceedings, p. 27. SPIE, 1987. doi: 10.1117/12.940117

[29] S. Tobis, J. Knopf, C. Silvers, J. Yao, H. Rashid, G. Wu, and D. Golijanin. Near infrared fluorescence imaging with robotic assisted laparoscopic partial nephrectomy: initial clinical experience for renal cortical tumors. The Journal of Urology, 186(1):47-52, 2011. doi: 10.1016/j.juro.2011.02.2701

[30] D. Wang, B. Zhang, X. Yuan, X. Zhang, and C. Liu. Preoperative planning and real-time assisted navigation by three-dimensional individual digital model in partial nephrectomy with three-dimensional laparoscopic system. International Journal of Computer Assisted Radiology and Surgery, 10(9):1461-1468, 2015. doi: 10.1007/s11548-015-1148-7

[31] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22:1330-1334, 2000.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/e2xKwBi2RIq/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/e2xKwBi2RIq/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..7cf185232c9fe0ce256a55b67b4a9e648a596746 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/e2xKwBi2RIq/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,182 @@ +§ AUDIOVISUAL AR CONCEPTS FOR LAPAROSCOPIC SUBSURFACE STRUCTURE NAVIGATION + +Authors anonymized for peer review. + +§ ABSTRACT + +Background: The identification of subsurface structures during resection wound repair is a challenge during minimally invasive partial nephrectomy. Specifically, major blood vessels and branches of the urinary collecting system need to be localized under time pressure as target or risk structures during suture placement. Methods: This work presents concepts for AR visualization and auditory guidance based on tool position that support this task. We evaluated the concepts in a laboratory user study with a simplified, simulated task: The localization of subsurface target points in a healthy kidney phantom. We evaluated the task time, localization accuracy, and perceived workload for our concepts and a control condition without navigation support. Results: The AR visualization improved the accuracy and perceived workload over the control condition. We observed similar, non-significant trends for the auditory display. Conclusions: Further, clinically realistic evaluation is pending. 
Our initial results indicate the potential benefits of our concepts in supporting laparoscopic resection wound repair. + +Index Terms: Human-centered computing-Visualization; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Mixed / augmented reality; Human-centered computing-Human computer interaction (HCI)-Interaction devices-Sound-based input / output; Applied computing-Life and medical sciences + +§ 1 INTRODUCTION + +§ 1.1 MOTIVATION + +The field of augmented reality (AR) for laparoscopic surgery has inspired broad research over the past decade [2]. This research aims to alleviate the challenges associated with indirect access in such operations. One challenging operation that has attracted much attention from the research community is laparoscopic or robot-assisted partial nephrectomy (LPN/RPN) [17, 19]. LPN/RPN is the standard treatment for early-stage renal cancer. The operation's objective is to remove the intact (i.e., entire) tumor from the kidney while preserving as much healthy kidney tissue as possible. Three challenging phases in this operation can particularly benefit from image guidance or AR navigation support [19]: i) the management of renal blood vessels before the tumor resection, ii) the intraoperative resection planning and the resection, iii) the repair of the resection wound after the tumor removal. Although numerous solutions have been proposed to support urologists during the first two phases [19], no dedicated AR solutions exist for the third. Specifically, urologists need to identify major blood vessels or branches of the urinary collecting system that have been severed or that lie closely under the resection wound's surface and could be damaged during suturing. One additional challenging factor is that this surgical phase is performed under time pressure due to the risk of ischemic damage or increased blood loss (depending on the vascular clamping strategy).
There are some technical challenges in providing correct AR registration and meaningful navigation support during this phase. One challenge that affects the visualization of AR information is the removal of renal tissue volume, which leaves an undefined tissue surface inside the original organ borders. In this work, we present an AR visualization and an auditory display concept that rely on the position of a tracked surgical tool to support the urologist in identifying and locating subsurface structures. We also report a preliminary proof-of-concept evaluation through a user study with an abstracted task. AR registration and the clinical evaluation of our concepts lie outside the scope of this work. + +§ 1.2 RELATED WORK + +Multiple reviews provide a comprehensive overview of navigation support approaches for LPN/RPN [3, 17, 19]. Although no dedicated solutions exist to support urologists during the resection wound repair phase, one application has been reported in which the general AR model of intrarenal structures was used during renorrhaphy [24]. This approach, however, does not address the unknown resection wound surface geometry and potential occlusion issues. Moreover, multiple solutions have been proposed to visualize intrarenal vascular structures. These include solutions in which a preoperative model of the vascular structure is rendered in an AR overlay [23, 30]. This may be less informative after an unknown tissue volume has been resected. Other methods rely on real-time detection of subsurface vessels [1, 18, 29]. However, these are unlikely to perform well when the vessels are clamped (suppressing blood flow and pulsation) or when the organ surface is occluded by blood. Outside of LPN/RPN, such as in angiography exploration, visualization methods have been developed to communicate the spatial arrangement of vessels.
These include the chromadepth [28] and pseudo-chromadepth methods [20, 26], which map vessel depth information to color hue gradients. Kersten-Oertel et al. [21] showed that color hue mapping, along with contrast grading, performs well in conveying depth information for vascular structures. The visualization of structures based on tool position has inspired work both inside and outside of the field of LPN/RPN: Singla et al. [27] proposed visualizing the tool position in relation to the tumor prior to resection in LPN/RPN. Multiple visualizations have been proposed for the spatial relationship between surgical needles and the surrounding vasculature [15]. However, these visualizations target minimally invasive needle interventions, in which the instrument moves between the structures of interest. + +In addition to visual approaches to supporting LPN/RPN as well as other navigated applications, recent works have shown that using sound to augment or replace visual cues can aid task completion. In a so-called auditory display, changes in a set of navigation parameters are mapped to changes in the parameters of a real-time sound synthesizer. A familiar example is the common automobile parking assistance system: the distance of the automobile to a surrounding object is mapped to the inter-onset interval (i.e., the time between tones) of a simple synthesizer. The use of auditory display has been motivated by the desire to increase clinician awareness, to replace the lost sense of touch when using teleoperated devices, or to help clinicians correctly interpret and follow navigation paths. There have been, however, relatively few applications of auditory display in medical navigation. Evaluations have been performed for radiofrequency ablation [5], temporal bone drilling [8], skull base surgery [9], soft tissue resection [13], and telerobotic surgery [6, 22].
These have shown auditory display to improve recognition of structure distance, improve accuracy, and reduce cognitive workload and rates of clinical complications. Disadvantages have included increased non-target tissue removal and longer task completion times. For a thorough overview of auditory display in medical interventions, see [4]. + +§ 2 NAVIGATION METHODS + +We pursued two routes to provide navigation content to the urologist: The first approach is the AR visualization of preoperative anatomical information in a video see-through setting. The second approach is an auditory display. + +§ 2.1 AR VISUALIZATION + +Our AR concept aims to provide information about intrarenal risk structures to the urologist. We, therefore, based our visualization on preoperative three-dimensional (3D) image data of the intrarenal vasculature and collecting system. These were segmented and exported as surface models. We assumed that the resection volume and resulting wound geometry are unknown. Simply overlaying the preoperative models onto the laparoscopic video stream would include all risk structures that were resected with the resection volume. We, therefore, propose a tool-based visualization, in which only information about risk structures in front of a pointing tool is rendered and overlaid onto the video stream. To this end, the urologist can place a spatially tracked pointing tool on the newly created organ surface (i.e., the resection ground) and see the risk structures beneath. We placed a virtual circular plane perpendicular to the tool axis with a diameter of 20 mm around the tooltip. The structures in front of this plane (following the tool direction) are projected orthogonally onto the plane and rendered accordingly. The two different structure types are visualized with two different color scales (Figure 1a). The scales visualize the distance between a given structure and the plane.
The scale ends are equivalent to a minimum and maximum probing depth that can be set for different applications. The scale hues were selected based on two criteria: Firstly, we investigated which hues provide good contrast visibility in front of laparoscopic videos. Secondly, the choice of yellow for urinary tracts and blue-magenta for blood vessels is consistent with conventions in anatomical illustrations and should be intuitive for medical professionals. For the urinary tract, color brightness and transparency are changed across the spectrum. For the blood vessels, color hue, brightness, and transparency are used. These color spectra aim to combine the color gradient and fog concepts that were identified as promising approaches by Kersten-Oertel et al. [21]. An example of the resulting visualization (using a printed kidney phantom) is provided in Figure 1b. The blue line marks the measured tool axis. + +§ 2.2 AUDIO NAVIGATION + +After iterative preliminary designs were evaluated informally with 12 participants, an auditory display consisting of two contrasting sounds was developed to represent the structures. The sound of running water was selected to represent the collecting system, and a synthesized tone was created to represent the vessels. The size and number of the vessels in the scanning area are encoded in a three-level density score. Density is then mapped to the water pressure for the collecting system, and to the tone's pitch for vessels, with higher pressure and pitch indicating a denser structure. Finally, the rhythm of each tone is a translation of the distance between the instrument tip and the closest point on the targeted structure, with a faster rhythm representing a smaller distance. To express the density of the collecting system, the water pressure is manipulated to produce three conditions (low, medium, and high pressure), representing low, medium, and high density.
The water tone is triggered every 250, 500, and 2000 ms, depending on the distance: inside, close, and far. + + < g r a p h i c s > + +(a) Color spectrum for blood vessels (top) and urinary tract (bottom). The color values are in RGBA format. + + < g r a p h i c s > + +(b) Laparoscopic view of a printed kidney phantom with the visual AR overlay. + +Figure 1: AR visualization. + +A distant structure resembles an uninterrupted flow of water, and a nearby structure is heard as rhythmic splashes. Inside the structure, a rapid splashing rhythm is accompanied by an alert sound. + +§ 2.3 PROTOTYPE IMPLEMENTATION + +We implemented our overall software prototype and its visualization in Unity 2018 (Unity Software, USA). The auditory display was implemented using Pure Data [25]. + +§ 2.3.1 AUGMENTED REALITY INFRASTRUCTURE + +The laparoscopic video stream was generated with an EinsteinVision® 3.0 laparoscope (B. Braun Melsungen AG, Germany) with a 30° optic in monoscopic mode. We used standard laparoscopic graspers as a pointing tool. The camera head and the tool were tracked with an NDI Polaris Spectra passive infrared tracking camera (Northern Digital Inc., Canada). We calibrated the laparoscopic camera based on a pinhole model [31] as implemented in the OpenCV library¹ [7]. We used a pattern of ChArUco markers [12] for the camera calibration. The external camera parameters (i.e., the spatial transformation between the laparoscope's tracking markers and the camera position) were determined with a spatially tracked calibration body. The spatial transformation between the tool's tracking markers and its tip was determined with pivot calibration using the NDI Toolbox software (Northern Digital Inc.). The rotational transformation between the tracking markers and the tool axis was measured with our calibration body. The resulting laparoscopic video stream with or without AR overlay was displayed on a 24-inch screen.
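As a brief illustration of the pinhole camera model underlying this calibration, an ideal (distortion-free) projection can be sketched as follows; this is our own sketch rather than the study code, and the intrinsics fx, fy, cx, cy in the example are made-up values:

```python
def project_pinhole(point_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates (x, y, z) to pixel
    coordinates (u, v) with an ideal, distortion-free pinhole model."""
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (fx * x / z + cx, fy * y / z + cy)

# A point on the optical axis maps to the principal point (cx, cy).
print(project_pinhole((0.0, 0.0, 1.0), fx=800, fy=800, cx=320, cy=240))
# -> (320.0, 240.0)
```

In a full calibration (as in the OpenCV pipeline referenced above), fx, fy, cx, cy and the lens distortion coefficients are estimated from marker-board observations rather than assumed.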
AR registration for this surgical phase was outside of scope for this study. The kidney registration was based on the predefined spatial transformation between our kidney phantom and its tracking geometry (see Study setup). + +¹ We used the commercially available OpenCV for Unity package (Enox Software, Japan). + +§ 2.3.2 AR VISUALIZATION IMPLEMENTATION + +The circular plane was placed at the tooltip and perpendicular to the tool's axis as provided by the real-time tracking data. The registration between the visualization and the camera was provided by the abovementioned tool and camera calibration and the real-time tracking data. The plane was then overlaid with a mesh with a rectangular vertex arrangement. The vertices had a density of 64 pts/mm² and served as virtual pixels. We conducted a ray-casting request for each vertex. For each ray that hit the surface mesh of the structures in our virtual model, the respective vertex was colored according to the type and ray collision distance of that structure. The visualization was permanently activated in our study prototype. + +§ 2.3.3 AUDITORY DISPLAY IMPLEMENTATION + +The synthesized tone contrasts with the water sound to ensure distinction between the sounds. The synthesized sound is created from the base frequencies of 65.4 Hz, 130.8 Hz, and 261.6 Hz (C2, C3, and C4 notes) and harmonized by each frequency's first to eighth harmonics, creating a complex tone. The density of the vessels is measured on ray-casting requests that are equivalent to the visual implementation. The number of virtual pixels that would depict a given structure type determines the density for that type. This density is then encoded in the pitch of the tone, meaning that 65.4 Hz, 130.8 Hz, and 261.6 Hz (C2, C3, and C4 notes) represent low, medium, and high density, respectively.
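The density-to-pitch and water-interval mappings described so far can be summarized as a small lookup. This is an illustrative Python sketch rather than the actual Pure Data patch; the function name, level names, and dictionary layout are ours:

```python
# Vessel density level -> pitch of the synthesized tone (C2, C3, C4), in Hz.
DENSITY_TO_PITCH_HZ = {"low": 65.4, "medium": 130.8, "high": 261.6}

# Distance class -> trigger interval of the water sound, in milliseconds.
WATER_INTERVAL_MS = {"inside": 250, "close": 500, "far": 2000}

def sound_parameters(structure, density, distance):
    """Map one probing state to synthesis parameters of the auditory display."""
    if structure == "collecting_system":
        # Density is conveyed by water pressure, distance by trigger interval.
        return {"sound": "water", "pressure": density,
                "interval_ms": WATER_INTERVAL_MS[distance]}
    if structure == "vessel":
        # Density is conveyed by the pitch of the complex tone.
        return {"sound": "tone", "pitch_hz": DENSITY_TO_PITCH_HZ[density],
                "distance": distance}
    raise ValueError(f"unknown structure type: {structure}")
```

For example, a high-density vessel maps to the C4 tone at 261.6 Hz, and a close collecting system maps to the water sound triggered every 500 ms.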
The repetition time of the tones expresses the distance between the instrument tip and the closest point on the targeted vessel. Similar to the water sound, a continuous tone represents a far-away vessel, while a close vessel is heard as the tone being repeated every 500 ms with a duration of 400 ms. Being inside the vessel triggers an alert sound played every 125 ms, accompanied by the tone every 250 ms. + +§ 3 EVALUATION METHODS + +We conducted a simulated-use proof-of-concept evaluation study with N = 11 participants to investigate whether our concepts effectively support urologists in locating subsurface structures in laparoscopic surgery. + +§ 3.1 STUDY TASK + +The specific challenges of identifying relevant subsurface structures for suture placement in resection wound repair are difficult to replicate in a laboratory setting. We devised a study task that aimed to imitate the identification of specific structures beneath an organ surface: Participants were presented with a printed kidney phantom in a simulated laparoscopic environment. We also displayed a 3D model of the same kidney on a 24-inch screen. This virtual model included surface meshes of the vessel tree and collecting system inside that kidney (Figure 2). Participants could manipulate the view of that model by panning, rotating, and zooming. For each study trial, we marked a point on the internal structures (a blood or urine vessel) in the virtual model with a red dot (Figure 2). The target points were arranged into four clusters to prevent familiarization with the target structures throughout the experiment. The participants were then asked to point the surgical tool at the location of that subsurface point in the physical phantom as accurately and as quickly as possible by placing the tool on the surface and orienting it such that the tool's direction pointed towards the internal target point.
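A natural error measure for this pointing task is the closest distance between the target point and the ray defined by the tool tip and axis. A minimal stdlib sketch (the function name is ours, not from the study software):

```python
import math

def point_to_ray_distance(tip, direction, target):
    """Closest distance between `target` and the ray that starts at `tip`
    and runs along `direction` (the direction need not be normalized)."""
    norm = math.sqrt(sum(c * c for c in direction))
    u = [c / norm for c in direction]                 # unit tool axis
    d = [target[i] - tip[i] for i in range(3)]        # vector tip -> target
    t = max(0.0, sum(d[i] * u[i] for i in range(3)))  # clamp behind-tip cases
    closest = [tip[i] + t * u[i] for i in range(3)]
    return math.dist(closest, target)

# Tool at the origin pointing along +z; target 3 mm right, 4 mm up, 10 mm deep.
print(point_to_ray_distance((0, 0, 0), (0, 0, 1), (3, 4, 10)))  # -> 5.0
```

The clamp to t >= 0 makes this a point-to-ray (rather than point-to-line) distance, so a target behind the tooltip is not credited as a hit.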
+ +§ 3.2 STUDY DESIGN + +Our study investigated the impact of the visual and auditory support on the performance and perceived workload of the navigation task. We examined two independent variables with two levels each (2 × 2 design): the presence or absence of the visual support and the presence or absence of the auditory support. The condition in which neither support modality was present was the control condition. Three dependent variables were measured and analyzed: Firstly, we measured the task completion time. Time started counting when the target point was displayed. It stopped when participants gave a verbal cue that they were confident they were pointing at the target as accurately as possible. Secondly, we measured how accurately they pointed the tool. Accuracy was measured as the closest distance between the tool's axis and the target point (point-to-ray distance). Finally, we used the NASA Task Load Index (NASA-TLX) [14] questionnaire as an indicator for the perceived workload. The NASA-TLX questionnaire is based on six contributing dimensions of subjectively perceived workload. The weighted ratings for each dimension are combined into an overall workload score. + + < g r a p h i c s > + +Figure 2: Virtual kidney model with the target point clusters. The model is shown from a medial-anterior perspective, corresponding to the participant's position. + +§ 3.3 STUDY SAMPLE + +Eleven (11) participants took part in our study (six females, five males). All participants were medical students between their third and fifth year of training. Participants were aged between 24 and 33 years (median = 25 years). All participants were right-handed. Four participants reported between one and five hours of experience with laparoscopic interaction (median = 3 h) and seven participants reported between one and 15 hours of AR experience (median = 2 h).
Finally, eight participants reported being trained in playing a musical instrument. No participants reported any untreated vision or hearing impairments. + +§ 3.4 STUDY SETUP + +The virtual kidney model and its physical phantom were created from a public database of abdominal computed tomography imaging data [16]. We segmented a healthy left kidney using 3D Slicer [11] and exported the parenchymal surface, the vessel tree, and the urinary collecting system as separate surface models. The parenchymal surface model was printed with the fused deposition modeling method and equipped with an adapter for passive tracking markers (Figure 3a). The phantom was placed in a cardboard box to simulate a laparoscopic working environment (Figure 3b). The screen with the laparoscopic video stream was placed opposite the participant and the screen with the virtual model viewer was placed to the participant's right. A mouse was provided to interact with the model viewer and a standard commercial multimedia speaker was included for the auditory display. The overall study setup is shown in Figure 4. + + < g r a p h i c s > + +Figure 3: Components of the simulated laparoscopic environment. + +§ 3.5 STUDY PROCEDURE + +Participants' written consent and demographic data were collected upon arrival. The participants then received an introduction to the visualization and auditory display of the data. Participants conducted one trial block per navigation method. In each trial block, they were asked to locate the three points of one cluster, with one trial per point. After each trial block, one NASA-TLX questionnaire was completed for the respective navigation method. The order of the navigation methods and the assignment between the point clusters and the navigation methods were counterbalanced. The order in which the points had to be located within each trial block was permuted.
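Counterbalancing condition orders of this kind is commonly done with a balanced Latin square. The sketch below is our own illustration of one such scheme (the condition labels are assumed; the source does not specify the exact counterbalancing construction used):

```python
def balanced_latin_square(items):
    """Orders in which every item occupies each serial position exactly once;
    for an even number of items, immediate carry-over is balanced as well."""
    n = len(items)
    # Classic offset pattern 0, 1, n-1, 2, n-2, ...
    offsets, lo, hi = [0, 1], 2, n - 1
    while len(offsets) < n:
        offsets.append(hi)
        hi -= 1
        if len(offsets) < n:
            offsets.append(lo)
            lo += 1
    return [[items[(row + off) % n] for off in offsets] for row in range(n)]

# Assumed labels for the four cells of the 2 x 2 design.
ORDERS = balanced_latin_square(["control", "auditory", "visual",
                                "auditory+visual"])
```

Each participant would then be assigned one row of `ORDERS` (cycling through rows as participants are recruited), so that every condition appears equally often in every serial position.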
+ +§ 3.6 DATA ANALYSIS + +During initial data exploration, we noticed a trend that participants took more time to complete the task in the first trial they attempted with each method than in the second and third trials. Therefore, the first trial for each method and participant was regarded as a training trial and excluded from the analysis. The data (time and accuracy) from the remaining two trials from each block were averaged, and a repeated-measures two-way analysis of variance (ANOVA) was conducted for each dependent variable. + +§ 4 RESULTS + +The descriptive results for the three dependent variables are listed in Table 1. We found significant main effects of the presence of the visual display on the accuracy (p < 0.001) and the NASA-TLX rating (p = 0.03). The ANOVA results are listed in Table 2. The significant effects are plotted in Figure 5. + + < g r a p h i c s > + +Figure 4: Overall study setup. + +§ 5 DISCUSSION + +§ 5.1 DISCUSSION OF RESULTS + +The most evident result from our evaluation is that the visual display increases the accuracy and reduces the perceived workload of identifying subsurface vascular and urinary structures in our simplified task. At the same time, the visual display method did not reduce the task completion time. Generally, there were non-significant trends that all tested conditions with visual or auditory display performed more accurately and tended to cause a lower perceived workload. However, the navigation support conditions tended to perform less quickly than the control condition. This may be because the required mental spatial transformations are reduced, but a greater amount of information needs to be processed by the participants. + +This explanation is also supported by the result that the combined auditory and visual display performed worse than the visual-only condition within our sample.
While this trend is not statistically significant, it poses a question: Were the auditory display designs somewhat misleading or distracting, or is the combination of multimodal channels for the same information in itself potentially hindering in this task? The trend of auditory support performing slightly better than the control condition within our sample (without significance) may indicate that the latter explanation is more likely. Another aspect may be users' lower familiarity with auditory navigation than with visual cues. Further training and greater participant experience may also reduce the difference in performance between the visual and auditory navigation aids. + +The AR visualization was well suited for the abstracted task in our proof-of-concept evaluation. In the clinical context, a semitransparent display of our visualization may be better suited to prevent occlusion of the relevant surgical area. This occlusion can further be reduced by providing a means to interactively activate or deactivate the visualization. + +Finally, the absolute values we measured for our dependent variables are less meaningful than the comparative effects we found for our navigation conditions. Multiple design factors limit the clinical validity of our study, including the exclusion of a registration pipeline. This means that the absolute task time or pointing error may well deviate from the reported descriptive results. + +§ 5.2 GENERAL DISCUSSION + +Our tests yielded preliminary and successful proof-of-concept results for the audiovisual AR support for resection wound repair. The results indicate that audio guidance may be helpful, but its benefit did not reach statistical significance within our sample. However, there are limitations to the clinical validity of our prototypes and our study setup. + +Table 1: Descriptive results for all dependent variables. All entries are in the format mean (standard deviation).
+ +Navigation condition | Task completion time [s] | Accuracy [mm] | NASA-TLX + +No support | 29.92 (18.59) | 12.54 (4.21) | 14.14 (2.18) + +Auditory support | 41.44 (22.9) | 9.69 (5.14) | 12.93 (3.46) + +Visual support | 36.02 (19.21) | 4.39 (3.49) | 10.87 (3.86) + +Auditory and visual support | 38.12 (22.48) | 6.45 (5.08) | 11.93 (3.3) + +Table 2: ANOVA results for all variables. AD: Auditory display, VD: Visual display. All cells are in the format F value (degrees of freedom); p value. + +Dependent variable | Main effect AD | Main effect VD | Interaction AD:VD + +Task completion time | 1.41 (1,10); 0.263 | 0.17 (1,10); 0.688 | 1.47 (1,10); 0.253 + +Accuracy | 0.11 (1,10); 0.748 | 28.01 (1,10); <0.001* | 2.67 (1,10); 0.133 + +NASA-TLX | 0.01 (1,10); 0.911 | 6.35 (1,10); 0.03* | 1.47 (1,10); 0.253 + + < g r a p h i c s > + +Figure 5: Significant ANOVA main effects. The error bars represent standard errors. + +First and foremost, the study task is an abstraction of the actual surgical task: The surgical task requires not only the identification of major subsurface structures but also the judgment and selection of a suture path. This task limitation went along with an abstract laparoscopic environment and surgical site. Our kidney phantom imitated an in-vivo kidney only in its geometric properties. The color, biomechanical behavior, and surgical surroundings did not resemble their real clinical equivalents. Moreover, the phantom was simplified in that it was based on an intact kidney rather than containing a resection wound. While this simplification is an additional limitation to our study's clinical validity, we believe that introducing a phantom with a resection bed will only be meaningful in combination with a more complex simulated task.
This is because, in a realistic setting, the urologist will be familiar with the wound and aware of potential landmarks (like intentionally severed vessels) to help navigate. This would not have been the case in our simplified task and for our participants. One further step in improving the phantom for increased realism may be the simulation of the tissue deformation that occurs intraoperatively. This may be achieved by producing one preoperative phantom and one intraoperative phantom based on simulated intraoperative deformation (e.g., using the Simulation Open Framework Architecture [10]). + +Another aspect to improve the clinical validity of our evaluation could be a more realistic task. The most valid performance parameter, however, would be the frequency of suture setting errors. Because these are not very frequent, the study would require a large sample consisting of experienced urologists. This is logistically challenging. We, therefore, regard our preliminary evaluation as a good first indication for the aptitude of our navigation support methods. + +Future evaluation with a more realistic phantom and task should include the overlay of AR structures on the simulated resection wound as an (additional) reference condition. Moreover, AR registration was excluded from our study's scope to focus the investigation on the tested information presentation methods. A dedicated registration method for post-resection AR has been previously proposed [reference anonymized for peer review]. This could be combined with the dedicated AR concepts reported in this article for future, high-fidelity evaluations. + +The participants were medical students with limited laparoscopic experience: They were less trained in the spatial cognitive processes that are involved in laparoscopic navigation than the experienced urologists who would be the intended users for a support system like ours.
Thus, the navigation methods presented in this article will need to be further evaluated in clinically realistic settings. This may include testing on an in-vivo or ex-vivo human or porcine kidney. This, however, requires an effective AR registration, which is complicated by time pressure (in the in-vivo setting) or by postmortem deformation (in the ex-vivo setting). + +Beyond more clinically valid evaluation, some other research questions arise from our work: Firstly, some design iteration and comparison should be implemented to evaluate whether the limited success of our auditory display was due to the specific designs or due to a limited aptitude of the auditory modality for such information. Secondly, further visualizations should be developed and compared with our first proposal to identify an ideal information visualization. Finally, it should be investigated whether other procedures with soft tissue resection (e.g., liver or brain surgery) may benefit from similar navigation support systems for the resection wound repair. + +§ 6 CONCLUSION + +This work introduces and tests an audiovisual AR concept to support urologists during the resection wound repair phase in LPN/RPN. To our knowledge, these are the first dedicated solutions that have been proposed for this particular challenge. These concepts have been preliminarily evaluated in a laboratory-based study with an abstracted task. Although the results only represent a proof-of-concept evaluation, we believe that they indicate the potential of our concepts. The next steps for this work include the integration of a targeted AR registration solution and the integrated prototype's evaluation in a clinically realistic setting. Pending this work, we believe that the concepts presented in this article sketch a promising path to a clinically meaningful AR navigation system for minimally invasive, oncological resection wound repair. + +§ ACKNOWLEDGMENTS + +Funding information anonymized for peer review.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/iEoccQSFFsM/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/iEoccQSFFsM/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..b6778082349c61754e56d36d0c67d3238d184b3a --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/iEoccQSFFsM/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,423 @@ +# Exploring Sketch-based Character Design Guided by Automatic Colorization + +Category: Research + +![01963e98-d669-739d-87db-14b5f9244e3a_0_220_380_1351_422_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_0_220_380_1351_422_0.jpg) + +Figure 1: Our character exploration tool facilitates the character design process by allowing artists to explore characters using colored thumbnails synthesized from sketches. These colored thumbnails, which are traditionally rough grey-scale sketches, better visualize the character for creating the turnaround sheet. + +## Abstract + +Character design is a lengthy process, requiring artists to iteratively alter their characters' features and colorization schemes according to feedback from creative directors or peers. Artists experiment with multiple colorization schemes before deciding on the right color palette. This process may necessitate several tedious manual re-colorizations of the character. Any substantial changes to the character's appearance may also require manual re-colorization. Such complications motivate a computational approach for visualizing characters and drafting solutions. 
We propose a character exploration tool that automatically colors a sketch based on a selected style. The tool employs a Generative Adversarial Network trained to automatically color sketches, and additionally allows a selection of faces to be used as a baseline for the character's design. We validated our tool by comparing it against Photoshop for character exploration in our pilot study. Finally, we conducted a study to evaluate our tool's efficacy within the design pipeline.

Index Terms: Human-centered computing—Human computer interaction (HCI)—Interaction paradigms—Graphical user interfaces

## 1 INTRODUCTION

Fig. 1 illustrates a typical character design process. At the very beginning of the process, the designer is furnished with a character description that outlines a combination of personality (e.g., courageous, melancholic) and physical traits (e.g., long hair, small frame) [7, 31]. Their first task is then to sketch the character's distinguishing expressions and physical features into a thumbnail, which is often a rough, low-resolution gray-scale sketch. From the thumbnail the designer then develops a character turnaround sheet, a reference for later drawing the character in context. The turnaround sheet is presented to the creative director for feedback, and the entire process iterates. Because the ideation and creation of a turnaround sheet are manual processes, the artist often has to restart from scratch.

We devised a tool driven by a sketching interaction that automatically colors the character thumbnail, enabling artists and their creative directors to do more early exploration with less investment of effort. In practice, these thumbnails may also be used as references for producing the turnaround sheet in higher resolution using Photoshop. Fig. 1 shows an example of a colored character thumbnail synthesized using our tool.
Our tool can generate these colored character thumbnails based on artists' sketches together with their color and character-face selections. Specifically, we achieve this by training a Generative Adversarial Network (GAN) on an anime dataset. We use the GAN to generate the colored thumbnails as the artists sketch, while also allowing them to place characters' faces and select their colorization style. In our user study, this selection-based automatic colorization framework significantly sped up the character exploration process compared to using Photoshop, without sacrificing quality. The major contributions of our work include:

- Proposing a novel generative character exploration tool by training a GAN to automatically color sketches.

- Allowing artists using our tool to select character faces and colorization style, as well as to edit the character by directly sketching on the canvas.

- Validating the effectiveness of our tool in facilitating the character exploration process compared to Photoshop via a number of design tasks.

## 2 RELATED WORK

### 2.1 Sketch-based Interactions

Similar to our approach, several works have explored using sketching as an interaction technique in different contexts.

We draw inspiration from several works that utilized sketching as an animation interaction. Kazi et al. [25, 26] created an interface that allows users to animate their 2D sketches, while Guay et al. [11] presented a novel technique to animate 3D characters' motion using a single stroke. Storeoboard [15] allows filmmakers to sketch stereoscopic storyboards to visualize the depth of their scenes.

Our approach aims to incorporate sketching into a 2D design process, while several works examine sketch interactions in 3D design. Saul et al. [37] created a design system for chair fabrication. Xu et al.
[49] introduced a model-guided 3D sketching tool that allows designers to redesign existing 3D models. Huang et al. [16] created a sketch-based user interface design system. ILoveSketch [3], a curve sketching system, allows designers to iterate directly on their 3D designs. Sketch-based interaction techniques in Augmented Reality were explored by the HCI community as well [2, 29, 44].

Several works explored using sketching to design cartoon characters specifically. Sketch2Manga [33] creates characters from sketches. Unlike our approach, which uses a generative method to output a character from a sketch, it uses image retrieval to match the query with a character from the database. Han et al. [12] introduced a deep learning method to create 3D caricatures from an input 2D sketch. Because the generated caricatures take the form of a texture-less 3D model, we opted to use a network architecture that enables the generation of 2D images and control of their colorization style.

With our tool, we aim to improve the traditional design process for artists. Similarly, Jacobs et al. [20] introduced a tool that allows artists to create dynamic procedural brushes by varying the rotation, reflection and style of their strokes. Moreover, Vignette [27] is an interactive tool that allows artists to create custom textures, and automatically fills selected regions of their illustrations with these textures.

### 2.2 Image Generation

Recently, generative modeling approaches have emerged as a powerful, data-driven approach for directly mapping sketches into images. Isola et al. [18] show that conditional GANs are an effective general-purpose tool for image-to-image translation problems and can be applied to mapping sketches to images. The sketch-to-image problem is also inherently ambiguous, as different colors and "styles" can be used for multiple plausible completions.
Follow-up works [17, 50] introduce extensions that enable multiple predictions. We find that, for our task, BicycleGAN [50] is able to effectively generate colored character illustrations from edge maps due to its multimodality. One challenge is the difficulty of obtaining real sketches. Methods such as [5, 9, 40] use generative models to generate sketches themselves. We find that "synthesized" sketches based on edge maps, with some carefully selected preprocessing choices, are adequate for our application.

Using the style selector in our tool, artists can choose the colorization scheme of their characters. Similarly, Color Sails [39] is a tool that allows coloring designs from a discrete-continuous color palette defined by the user. Tan et al. [42] developed a tool for real-time image palette editing. Zou et al. [51] introduced a language-based scene colorization tool. Xiang et al. [47] explored the style space of anime characters by training a style encoder that effectively encodes images into the style space, such that the distance of their codes in the space corresponds to the similarity of their artists' styles.

In later work, Xiang et al. [48] developed a Generative Adversarial Disentanglement Network that can incorporate independent style and content codes. This allows separate control over the style and content of the image, enabling faithful image generation with proper style-specific facial features (e.g., eyes, mouth, chin, hair, blushes, highlights, contours) as well as overall color saturation and contrast. The neural transfer methods used by Xiang et al. [47, 48] do not transfer facial features consistently (e.g., they may transfer the mouth from some images but not all). Therefore we allow artists to control facial features only via the sketch canvas and/or face selector instead of using the neural transfer methods of Xiang et al.
Nonetheless, due to its effectiveness, we still used neural transfer to control the sketch's colorization.

### 2.3 Character Design

EmoG [38] is a character design tool introduced to facilitate storyboarding. EmoG generates facial expressions according to the user's emotion selection and sketch. Akin to our approach, users can drag and drop a facial expression onto the canvas in addition to drawing directly on the canvas. Unlike our approach, EmoG renders no colorization suggestions to the user and focuses on facilitating the drafting of characters' emotional expressions rather than their overall appearance.

![01963e98-d669-739d-87db-14b5f9244e3a_1_922_148_723_343_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_1_922_148_723_343_0.jpg)

Figure 2: Overview of our approach. To begin, an artist may place a face from the face selector onto the design canvas. The artist then directly sketches on the design canvas. If a style is selected, the sketch will be automatically colored by the GAN with the selected style. Otherwise, the tool suggests a random colorization scheme.

MakeGirlsMoe [21] is a tool that helps artists brainstorm by allowing them to select facial features to automatically generate a character illustration. However, it offers an unnatural, discrete selection-based interaction compared to interfaces that allow the user to illustrate by sketching. MakeGirlsMoe was later updated to create the crypto-currency generator Crypko [6]. Neither framework was available to us during the user evaluation, so they were not compared to our tool. PaintsChainer [35] automatically colors sketches based on the artist's color hints, in the form of brush strokes on top of the sketch. It colors a completed line art that a user uploads, but it does not allow the user to modify the character by placing or editing expressions and features onto the canvas, nor does it allow the user to start from a blank canvas and iteratively sketch a character.
Consequently, no existing tool addresses the need for a sketch-based iterative tool that combines both a feature selection-based interaction and automatic colorization. Hence, we developed an interactive character design tool equipped with a face selector, a colorization style selector and a sketching canvas to fulfill that need.

Auto-colorization features have been introduced in commercial software such as Adobe Illustrator and Clip Studio Paint. However, Adobe Illustrator is limited to coloring black-and-white photographs. Clip Studio Paint, on the other hand, can color cartoons, but like PaintsChainer it can only color completed line art.

## 3 OVERVIEW

Fig. 2 shows an overview of our approach. We trained a GAN on an edges-to-character dataset obtained by extracting the edgemaps of colored anime characters. The GAN learned to produce a colored anime character illustration given a sketch. Using the GAN, we built a framework that allows character exploration by enabling users to select and place facial features as well as sketch onto a canvas. As users edit the canvas, the GAN automatically colors their illustrations according to the styles they select. Finally, we demonstrate the effectiveness of our tool through a user study comparing it with Adobe Photoshop.

## 4 DATA PROCESSING

We obtained our training and validation image pairs from the anime-face character dataset [34]. We used an automated process, described below, to extract edge maps from the face images, creating our edges-to-character dataset.

![01963e98-d669-739d-87db-14b5f9244e3a_2_159_175_706_230_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_2_159_175_706_230_0.jpg)

Figure 3: To synthesize artist sketches we used an edge operator on our dataset images. (a) A character image sampled from our training set. The edgemaps were created by applying the DoG filter with (b) $\sigma = 0.3$ and (c) $\sigma = 0.5$.

Animeface Dataset.
The animeface character dataset [34] contains a total of 12,213 face images. We randomly extracted 10,992 of them for the training set. The remaining 10% of samples (1,221 images) were used as a validation set to monitor the progress of training the GAN.

Edgemaps. Because character datasets that pair sketches with their corresponding colored counterparts are costly to obtain, we ran an edge detector on the dataset images to simulate sketches. The standard Difference of Gaussians (DoG) filter has been used successfully in several works to synthesize line drawings [10, 22, 30, 46], and unlike the eXtended difference-of-Gaussians (xDoG) filter [45] it does not tend to fill dark regions. We created the edgemaps of our training and validation images by converting them to grayscale and then applying the DoG filter with $\gamma = 10^9$ and $k = 4.5$ (see Fig. 3). The value of $\sigma$ was randomly selected from $\{0.3, 0.4, 0.5\}$ for each image to allow for variations in the amount of noise in the edgemaps.

Image Processing. The images in the animeface dataset have a maximum size of 160 px in either dimension and various aspect ratios, while our GAN training process expects images sized exactly 256 × 256 px. To match these requirements, we uniformly scaled the animeface images to fit using bilinear interpolation. For non-square aspect ratios, we filled the rest of the square canvas by repeating edge pixels, as shown in Fig. 4. We selected this repetition fill rather than a solid background color to avoid the network learning to reproduce such a solid border.
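The two preprocessing steps above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact pipeline: the precise DoG binarization rule and parameter handling are assumptions, and the bilinear rescale to 256 × 256 is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edgemap(gray, sigma=0.3, k=4.5):
    """Difference-of-Gaussians line extraction: subtract a wide blur (k * sigma)
    from a narrow blur (sigma) and binarize. `gray` is a float image in [0, 1]."""
    narrow = gaussian_filter(gray, sigma)
    wide = gaussian_filter(gray, k * sigma)
    response = narrow - wide
    # dark strokes (0.0) where the DoG response is negative, white (1.0) elsewhere
    return np.where(response < 0.0, 0.0, 1.0)

def pad_to_square(img):
    """Fill a non-square image to a square canvas by replicating border pixels
    along the smaller dimension (cf. Fig. 4)."""
    h, w = img.shape[:2]
    side = max(h, w)
    ph, pw = side - h, side - w
    pads = [(ph // 2, ph - ph // 2), (pw // 2, pw - pw // 2)]
    pads += [(0, 0)] * (img.ndim - 2)      # leave any channel axis untouched
    return np.pad(img, pads, mode="edge")  # "edge" repeats the border row/column
```

In the full pipeline, each image would first be uniformly rescaled with bilinear interpolation and then padded to the square 256 × 256 canvas.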
![01963e98-d669-739d-87db-14b5f9244e3a_2_503_1112_370_339_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_2_503_1112_370_339_0.jpg)

Figure 4: The arrow depicts the direction of replicating the border pixels when the (a) height or (b) width is the smallest dimension.

## 5 CHARACTER COLORIZATION MODEL TRAINING

To generate each colored character image from its paired edgemap image in our dataset, we used BicycleGAN from Zhu et al. [50].

Architecture. We train the network on the 256 × 256 paired images from our edges-to-character training dataset. For our encoder, we found that using the ResNet [14] encoder explored by Zhu et al. helped decrease the number of artifact-laden images generated by the GAN. We use a U-Net [36] generator and PatchGAN [19] discriminators.

In preliminary experiments, we found that changing the dimension of the latent code $|\mathbf{z}|$ produces different results: too high a dimension leads to variation in the background style instead of the character colorization style, while too low a dimension leads to inadequate variation in the character colorization style. Ultimately, we found a latent code of size 8 to work well empirically. GANs are known to "collapse" when training lasts too long [4]. Indeed, we noticed that colorization resolution eventually improves at the expense of style variation after 71 epochs, as the GAN starts over-fitting on the training data. Because our tool is created to explore character designs and produce low-resolution sample thumbnails, we opted to halt training at 71 epochs to maintain variation in style and to avoid overfitting.

Training. We inherit many of the default parameters and practices of BicycleGAN: $\lambda_{\text{image}} = 10$, $\lambda_{\text{latent}} = 0.5$ and $\lambda_{\mathrm{KL}} = 0.01$. We trained for 71 epochs using Adam [28] with batch size 1 and learning rate 0.0002.
We updated the generator once for each discriminator update, while the encoder and generator were updated simultaneously. We used the TensorFlow library [1]. Training took approximately 48 hours on an Nvidia GeForce GTX 1070 GPU.

We found these parameters empirically. Note that our goal is not to produce the state-of-the-art generative model for this task per se, but rather to explore how a reasonable implementation of a powerful generative model can be leveraged for downstream character exploration by an artist.

## 6 CHARACTER EXPLORATION TOOL

Due to its multimodality, our trained neural network is able to color each edgemap in various styles. We describe the several colorization methods incorporated in our character design tool, shown in Fig. 6. The colorization results presented in this section were generated using the same apparatus used for training.

Suggested Colorization. We can color the edgemaps by randomly sampling the latent code $\mathbf{z}$ from a Gaussian distribution and injecting it into the network using the add_to_input method explored by Zhu et al. [50], which spatially replicates $\mathbf{z}$ and concatenates it into only the first layer of the generator.

Fig. 5 shows colorization results for images in our validation set obtained by randomly sampling the latent code. By varying the latent code, the network is able to vary the character's hair color. Because the majority of anime characters in the dataset have matching hair and eye colors, the network jointly varies the hair and eye color. Darker hair colors can be generated by increasing the amount of shading, as can be seen in the second row of Fig. 5.

Style-based Colorization. We can also inject the latent code $\mathbf{z}$ of other images (i.e., style images) into the network, which enables us to color the input edgemap according to those style images. We first encode the style image to its latent code $\mathbf{z}$.
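Both colorization modes rely on the same add_to_input injection. A minimal NumPy sketch of that step (the channels-last layout and function name here are our assumptions, not the BicycleGAN code):

```python
import numpy as np

def add_to_input(edgemap, z):
    """Spatially replicate the latent code z and concatenate it to the
    generator's first-layer input (channels-last layout assumed)."""
    h, w, _ = edgemap.shape                            # e.g. (256, 256, 1)
    z_tiled = np.broadcast_to(z, (h, w, z.shape[-1]))  # same z at every pixel
    return np.concatenate([edgemap, z_tiled], axis=-1) # (h, w, c + |z|)
```

For suggested colorization, $\mathbf{z}$ is drawn from a standard Gaussian (e.g., `np.random.randn(8)`); for style-based colorization, $\mathbf{z}$ is the encoder's output for the chosen style image.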
We then generate the character image from the edgemap by injecting the style image's latent code $\mathbf{z}$ using the add_to_input method.

Fig. 7 shows the results of coloring input edge images from our validation sets using a set of style images. Due to the inclusion of multi-faced images within the training set, the network is able to color multiple faces in one image (as illustrated by the final row of Fig. 7), giving artists the ability to sketch multiple faces on the same canvas. These faces are generated with the same colorization scheme. Because we did not remove the backgrounds from the training images, the GAN generates the backgrounds as part of the image's style.

Implementation Details. We designed the character exploration tool (shown in Fig. 6) to allow artists to sketch on the design canvas using the brush and eraser provided. The brush is circular and its diameter can be adjusted using the brush slider from 1 to 10 pixels. The eraser is likewise circular and its diameter can be adjusted in the range of 1 to 20 pixels.

Artists are also able to place facial expressions at any location on the design canvas from our face selector by clicking the facial expression and then the canvas. The facial expressions were created by extracting the faces detected by an anime face detector [41]. We selected 60 of the faces detected in the validation set to be used in the face selector.

The style selector provides a set of style images from our validation set. These style images were selected by embedding the 8-dimensional latent codes of images in our validation set into two dimensions using t-SNE [32]. The embeddings are visualized as a 10 × 10 grid by snapping the two-dimensional embeddings to the grid: each position in the grid contains the style image whose latent code has the smallest Euclidean distance to the grid position.
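One simple reading of this snapping step is a nearest-neighbor assignment from grid-cell centers to embeddings; the sketch below is our reconstruction (the authors' exact assignment procedure, e.g. how duplicates are resolved, is not specified):

```python
import numpy as np
from scipy.spatial.distance import cdist

def snap_to_grid(emb_2d, grid_size=10):
    """Map each cell of a grid_size x grid_size layout to the style image whose
    2-D t-SNE embedding is nearest (Euclidean) to the cell's position."""
    lo, hi = emb_2d.min(axis=0), emb_2d.max(axis=0)
    pts = (emb_2d - lo) / np.maximum(hi - lo, 1e-12)   # normalize to [0, 1]^2
    ticks = (np.arange(grid_size) + 0.5) / grid_size   # cell centers
    gx, gy = np.meshgrid(ticks, ticks)
    centers = np.stack([gx.ravel(), gy.ravel()], axis=1)  # (grid_size^2, 2)
    return cdist(centers, pts).argmin(axis=1)          # style-image index per cell
```

The 8-dimensional latent codes would first be reduced to `emb_2d` with t-SNE (e.g., `sklearn.manifold.TSNE(n_components=2)`).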
Twelve of the 100 style images visualized using the t-SNE embedding were discarded because they produced artifacts when used as style images for colorization. We therefore used 88 images in total in our style selector. For consistency, and to prevent the varying background, resolution and artistry of the style images from biasing artists' selections in the user study, we display preview images colored with the style images in the t-SNE grid using the style-based colorization method. Fig. 6 shows some of these preview images in the style selector. Please refer to the supplementary material for the t-SNE grid visualization.

![01963e98-d669-739d-87db-14b5f9244e3a_3_166_144_1445_736_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_3_166_144_1445_736_0.jpg)

Figure 5: Sample suggested colorizations from our model. The first column shows the input edgemap. The second column shows the original image. The last four columns show the colorization results of our network with a latent code $\mathbf{z}$ randomly sampled from a Gaussian distribution for each generated sample.

The colored sketch is shown on the display canvas. If the artist has not selected a style image in the style selector, the image is colored using the suggested colorization method. Otherwise, the sketch is colored according to the style-based colorization method, using the artist's selection in the style selector as the style image. The display canvas is automatically updated every 20 seconds. The update can also be triggered by the artist by pressing the run button. If a style image was not selected, pressing the run button triggers a random colorization with a newly sampled latent vector, giving the artist an additional way, other than the style selector, to explore the colorization space. The sketching canvas can be cleared by pressing the clear button.

## 7 Pilot User Study

Participants.
We recruited 27 artists, aged 19 to 30, to participate in our IRB-approved study. Fig. 8 shows participants' average years of experience with sketching (M = 5.52, SD = 4.65), character design (M = 2.26, SD = 2.96) and using Adobe Photoshop (M = 3.26, SD = 3.04). Participants' experience is listed in more detail in the supplemental material.

Setup. Participants sketched on a Wacom Cintiq Pro 13 tablet with a 13-inch display. Our tool was loaded on the tablet, and participants sketched directly on the screen. We used the same apparatus employed in training the GAN to generate the images of the display canvas.

Tasks. Following the completion of a training task, participants were given 6 tasks. Each design task refers to a combination of design request, time condition, and tool condition. Participants completed each of the 6 design requests shown in Table 1, which were created in consultation with a professional character designer. The time conditions Limited and Unlimited determined whether participants completed the request within a 15-minute limit or under no time constraint, respectively. The Limited time condition was used to compare the quality of designs under tight time constraints. Our tool conditions are: our character design tool, our character design tool supplemented with pencil/paper, and Adobe Photoshop. To allow for within-subject comparisons between tool conditions under each time condition, participants completed all 3 tool conditions under each time condition. The ordering and combinations of the design requests, tool conditions and time conditions were randomized for each participant to avoid any carryover effects.
For example, one participant may be given the first design request from Table 1 to be completed under the Adobe Photoshop and Unlimited conditions as their first task, while another participant first completes the third design request under the character design tool and Limited conditions. On average, participants completed the study, including the training task, in approximately 90 minutes.

## Training Task

We allowed participants to freely explore the tool before receiving any design requests. To facilitate the learning process, we provided participants with a tutorial explaining the various functionalities of our tool along with the study's structure.

## Character Design Tool

Participants completed two design requests using our tool and without using a pencil or paper. One request was completed within 15 minutes while the other was completed without a time constraint.

![01963e98-d669-739d-87db-14b5f9244e3a_4_141_198_1507_530_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_4_141_198_1507_530_0.jpg)

Figure 6: The UI of the character exploration tool in our user study. The canvases show a thumbnail designed using our tool.

![01963e98-d669-739d-87db-14b5f9244e3a_4_138_818_1525_1172_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_4_138_818_1525_1172_0.jpg)

Figure 7: Style-based colorization using our model. The left column shows the input edgemap while the second column shows the original image. The six rightmost columns show the results of style-based colorization on the edgemap with various styles.

![01963e98-d669-739d-87db-14b5f9244e3a_5_145_116_726_532_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_5_145_116_726_532_0.jpg)

Figure 8: The participants' average years of experience with sketching, character design, and using Adobe Photoshop.

|    | Design Request |
|----|----------------|
| D1 | Cheerful female character with long hair. She has a cool and flowing appearance. |
| D2 | She's a cold, lone wolf with a sense of humor. |
| D3 | A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge. |
| D4 | She's a determined and courageous healer, with a dark and eerie appearance. |
| D5 | She's a dedicated and knowledgeable scholar with a bright and sunny aesthetic. |
| D6 | She's a charming and fun-loving socialite with a vintage and classic look. |

Table 1: The 6 character design requests given to participants in our user study.

## Character Design Tool and Pencil/Paper

Participants were allowed to use a pencil and paper for two design requests completed using our tool. They completed one request within 15 minutes, and one without a time constraint. Some artists typically plan their designs on paper prior to using editing software; we added this tool condition to inspect whether allowing artists to use pencil/paper affects their workflow.

## Adobe Photoshop

Similar to the previous tool conditions, participants completed two requests using Adobe Photoshop, once without a time constraint and once with a 15-minute limit. To mimic participants' typical design process as closely as possible, we provided them with a pencil and paper in this tool condition as well.

User Survey. After completing the 2 tasks under each tool condition (i.e., once with a 15-minute time limit and once without), participants were asked to complete a survey to evaluate the performance of the tool used. We opted to use a 5-point Likert scale to evaluate the tools, akin to [25]. Participants were asked to evaluate the following statements with a rating of 1 (strongly disagree) to 5 (strongly agree):

- The tool was easy to use and learn.

- I find the tool overall to be useful.

Finally, we surveyed participants once more after they had completed the user study in its entirety. Participants were asked to rate each of their colored designs from 1 (Poor) to 5 (Excellent).
Because participants were aware of the study's time constraints, they were more likely to judge their artworks' quality fairly. We therefore opted to rely on the participants' evaluation of their own work instead of using external evaluators. We were also interested in learning which designs participants favored overall, so for each time condition the participants were asked to vote for the design created under either the Character Design Tool, Character Design Tool and Pencil/Paper, or Photoshop condition.

![01963e98-d669-739d-87db-14b5f9244e3a_5_923_143_726_537_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_5_923_143_726_537_0.jpg)

Figure 9: Average time of completing the design requests.

### 7.1 Time of Completion

Fig. 9 shows the average time taken by participants to complete the character design requests under each tool condition. Mauchly's test did not show a violation of sphericity against tool condition $(W(2) = 0.84, p = 0.11)$. With a one-way repeated-measures ANOVA, we found a significant effect of the tool used on the time of design completion $(F(2, 52) = 14.53$, partial $\eta^2 = 0.36, p < 0.001)$. We performed Bonferroni-corrected paired t-tests for our post-hoc pairwise comparisons.

Participants completed the designs faster using our tool than using Photoshop. A post-hoc test showed that the average time participants took to complete the designs using Photoshop $(1205 \pm 135.16$ seconds$)$ was longer than using our tool with pencil/paper $(801.7 \pm 91.28$ seconds$)$ $(p = 0.01)$. A post-hoc test also showed that participants completed the design requests in a shorter amount of time using our tool without pencil/paper $(605.93 \pm 60.73)$ $(p < 0.01)$ compared to Photoshop.
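The Bonferroni-corrected post-hoc step above can be sketched as follows; an illustrative reconstruction with `scipy`, where the function name and the per-condition data layout are our assumptions:

```python
import itertools
import numpy as np
from scipy import stats

def bonferroni_paired_ttests(times, alpha=0.05):
    """Pairwise paired t-tests over within-subject conditions with Bonferroni
    correction. `times` maps condition name -> per-participant values."""
    pairs = list(itertools.combinations(times, 2))
    m = len(pairs)                        # number of comparisons
    out = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(times[a], times[b])
        p_corr = min(p * m, 1.0)          # Bonferroni-adjusted p-value
        out[(a, b)] = (t, p_corr, bool(p_corr < alpha))
    return out
```

Here `times` would hold each participant's completion time under each of the three tool conditions.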
The post-hoc test showed no significant difference in completion time between using our character design tool with or without pencil/paper $(p = 0.12)$. This suggests that while using our tool sped up the design process compared to Photoshop, including the pencil/paper did not yield any observable significant improvement in our setting.

Participants remarked that our tool expedites the design process (P1, P3, P10, P20). P3 specifically noted that our tool "makes producing a character design much faster and easier than doing it on paper."

### 7.2 Evaluation of Experience Survey

Fig. 10 shows participants' responses to "The tool was easy to use and learn." for each tool condition. A Friedman test showed a significant difference in participants' responses to the statement $(\chi^2(2) = 13.73, p = 0.01)$. We also conducted post-hoc analysis using Wilcoxon signed-rank tests with Bonferroni correction. As with Adobe Photoshop, the median participant found our tool easy to use (Md = 4, agree). However, the post-hoc tests showed a significant difference between the ease of use of our tool and Adobe Photoshop: we found a significant difference when comparing participants' responses after using our tool without pencil/paper and Adobe Photoshop $(W = 199, Z = -3.04, r = 0.41, p = 0.007)$. Likewise, we found a significant difference when comparing using our tool with pencil/paper and Adobe Photoshop $(W = 144, Z = -3.25, r = 0.44, p = 0.003)$. These results may be observed in Fig. 10 in the broader variation of responses given to our tool conditions compared to the Photoshop condition.
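The non-parametric analysis used for the Likert responses follows the same omnibus-then-post-hoc pattern; a sketch with `scipy` (function name and data layout are our assumptions):

```python
import numpy as np
from scipy import stats

def likert_analysis(ratings, alpha=0.05):
    """Friedman omnibus test across >= 3 within-subject conditions, followed by
    Bonferroni-corrected Wilcoxon signed-rank post-hoc tests on each pair.
    `ratings` maps condition name -> per-participant Likert scores."""
    chi2, p_omni = stats.friedmanchisquare(*ratings.values())
    names = list(ratings)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    posthoc = {}
    for a, b in pairs:
        w, p = stats.wilcoxon(ratings[a], ratings[b])
        p_corr = min(p * len(pairs), 1.0)  # Bonferroni correction
        posthoc[(a, b)] = (w, p_corr, bool(p_corr < alpha))
    return (chi2, p_omni), posthoc
```

Running the Wilcoxon post-hoc tests only when the Friedman omnibus test is significant is the convention followed in this section.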
![01963e98-d669-739d-87db-14b5f9244e3a_6_145_145_723_534_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_6_145_145_723_534_0.jpg)

Figure 10: Participants answered the questions in the experience survey with a rating of 1 (strongly disagree) to 5 (strongly agree).

Our participants' familiarity with photo editing software may have contributed to the consensus on Adobe Photoshop's ease of use compared to our tool. Although some participants, like P26, appreciated the simplicity of our application, stating that "it's modestly easy to use for character designers of any experience level. It's perfect as it is.", the absence of the exhaustive set of common features found in modern editing software might have contributed to our tool's wider range of ease-of-use ratings. The post-hoc test showed no significant difference between the ease of using our tool with or without pencil/paper $(W = 33, Z = 0.92, p = 0.35)$.

Our Friedman test also found a significant difference in participants' responses to the "I find the tool overall to be useful." statement $(\chi^2(2) = 9.86, p = 0.007)$. The post-hoc test $(W = 63.5, Z = -1.33, p = 0.56)$ showed no significant difference between the usefulness rating of Adobe Photoshop (Md = 4, agree) and our tool without pencil/paper (Md = 5, strongly agree), despite our tool having a higher median rating than Adobe Photoshop. Conversely, the post-hoc test $(W = 135, Z = -2.86, r = 0.39, p = 0.013)$ showed a significant difference between the rating of Adobe Photoshop and our tool with pencil/paper, despite the two having the same median rating (Md = 4, agree).
Furthermore, we found no significant difference between responses under the Character Design Tool (Md=5, strongly agree) and Character Design Tool and Pencil/Paper (Md=4, agree) conditions $\left( {W = 4, Z = - {2.49}, r = {0.34}, p = {0.038}}\right)$ . The inclusion of pencil and paper as an additional step in the participants' pipeline might have made the design process more cumbersome, leading participants to view the usefulness of our tool under the Character Design Tool and Pencil/Paper condition as lower than under the other two conditions, as shown in Fig. 10.

### 7.3 Evaluation of Designs

Fig. 11 shows how participants rated the designs produced under the various tool conditions we studied. The designs produced under the Limited time constraint were rated similarly $\left( {\mathrm{{Md}} = 3}\right)$ under all the tool conditions. A Friedman test also indicated no significant difference in the ratings of designs produced under that time constraint $\left( {{\chi }^{2}\left( 2\right) = {1.98}, p = {0.37}}\right)$ .

![01963e98-d669-739d-87db-14b5f9244e3a_6_920_147_728_532_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_6_920_147_728_532_0.jpg)

Figure 11: Participants were asked to rate their designs with a rating of 1 (poor) to 5 (excellent). Limited and Unlimited refer to whether the design was created with a 15-minute time limit or with unlimited time.

Although the median rating was higher for designs created under the Character Design Tool and Photoshop conditions (Md=4) than under the Character Design Tool and Pencil/Paper condition (Md=3), a Friedman test found no significant difference in the ratings of designs produced without any time constraint $\left( {{\chi }^{2}\left( 2\right) = {4.13}, p = {0.13}}\right)$ .

Some participants (P4, P12) noted that the artwork they produced during the user study does not reflect their abilities.
This may suggest that participants rated the designs against their previous body of work, giving all the designs an overall neutral rating and consequently producing no significant difference in design ratings across the tool conditions. Nevertheless, the designs created using our tool received the majority of participants' votes, as can be seen in Fig. 14.

Fig. 12 shows selected participants' thumbnails created using our tool, while Fig. 13 shows their designs created using Photoshop. The examples created under the Character Design Tool condition appear to be of better quality than their Photoshop counterparts. The participant also created the design faster using our tool (386 seconds) than using Photoshop (620 seconds) under the Unlimited time condition. Although the designs created using Photoshop are comparable to the ones created under the Character Design Tool and Pencil/Paper condition, completing the design took the participant much less time with our tool (388 seconds) than with Photoshop (652 seconds) under the Unlimited time condition. The remaining thumbnails are included with the supplemental material.

## 8 EVALUATION OF THE TOOL'S USAGE IN THE WILD

To evaluate the effectiveness of our tool in the design workflow, we conducted a user study that simulates directors' and artists' workflow in the character design process. Due to the pandemic, we were unable to recruit enough participants to conduct a large-scale user study, and the study was conducted remotely.

Participants. We recruited 5 of the artists, aged 19 to 30, from our initial user study to participate in our second IRB-approved study. We also recruited 5 participants aged 19-25 to act as art directors.

Setup. To use our tool, the artists were asked to connect via TeamViewer to the same device used in our initial user study.
For comparison, the artists were asked to use their preferred drawing tools. We placed no constraints on the software the artists used; instead, we encouraged artists to employ whichever tool would best facilitate the brainstorming process for them. Some artists used tools with auto-colorization capabilities, such as Adobe Illustrator and Clip Studio Paint, while others selected tools without auto-colorization, such as FireAlpaca and PaintTool SAI. The artists, directors, and researcher used Zoom to communicate.

![01963e98-d669-739d-87db-14b5f9244e3a_7_225_142_1332_709_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_7_225_142_1332_709_0.jpg)

Figure 12: Participants' sketches and their corresponding colored thumbnails created using our tool under the (a) Limited time condition and the (b) Unlimited time condition. The images are labeled with the participant number and design request completed (D3: "A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D4: "She's a determined and courageous healer, with a dark and eerie appearance."; D6: "She's a charming and fun-loving socialite with a vintage and classic look."). P10 used the face selector to design the character according to D3 and D4, while P7 used the face selector to design the character according to D4.

![01963e98-d669-739d-87db-14b5f9244e3a_7_246_1108_1236_356_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_7_246_1108_1236_356_0.jpg)

Figure 13: Characters created by the participants from Fig. 12 using Photoshop. The two leftmost illustrations were created under the (a) Limited time condition, while the two rightmost were created under the (b) Unlimited time condition. (D1: "Cheerful female character with long hair. She has a cool and flowing appearance."; D3: "A determined and patient girl with a simple and practical look.
Her greatest desire is ultimate knowledge."; D5: "She's a dedicated and knowledgeable scholar with a bright and sunny aesthetic.").

Tasks. Before the study, each director submitted two character designs. Two different artists were randomly assigned to each director to complete his/her designs. The directors introduced their designs to each artist in a brainstorming session, and the artists shared their screens in these sessions to show their sketches to the directors. The artists used our tool in one brainstorming session and their selected drawing tool in the other. A session was terminated when the director was satisfied with the rough design the artist produced. The time to complete these sessions was recorded to compare our tool with the other drawing tools.

After the brainstorming session ended, the artists submitted the turnaround sheets to their respective directors via e-mail. The artists iterated on the designs based on the directors' feedback. The study concluded when the directors approved the turnaround sheet that each of the two assigned artists submitted.

After the directors approved an artist's turnaround sheet, they were asked to complete a survey. The directors were asked to evaluate the following statements with a rating of 1 (strongly disagree) to 5 (strongly agree):

- I am satisfied with the quality of design the artist produced.

- The design matches my description.

![01963e98-d669-739d-87db-14b5f9244e3a_8_146_149_724_530_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_8_146_149_724_530_0.jpg)

Figure 14: The number of participants (out of 27) who selected the designs created under each tool condition.

- Communication with the artist in the brainstorming session was easy.
The artists were asked to report the number of hours they spent working on the design, as well as to evaluate the following statement with a rating of 1 (strongly disagree) to 5 (strongly agree):

- Communication with the director in the brainstorming session was easy.

Results. The amount of time spent in the brainstorming sessions is shown in Table 2 as Session time. On average, the artists spent less time in the brainstorming session when using our tool ( ${855} \pm {95.02}$ seconds) than when using other drawing tools ( ${1584.8} \pm {165.29}$ seconds). Despite spending less time in the brainstorming sessions, the artists spent a similar amount of time working on the designs after brainstorming with our tool ( ${3.1} \pm {0.9}$ hours) as after brainstorming with other drawing tools ( ${3.2} \pm {0.86}$ hours).

Moreover, directors overall were satisfied with the quality of the turnaround sheets the artists produced after using our tool (Md=5, strongly agree), akin to after using other drawing tools (Md=5, strongly agree). Directors overall also felt that the turnaround sheets produced after using our tool matched their descriptions (Md=4, agree). The turnaround sheets created in this user study are included in the supplementary material.

Only one director was unsatisfied with the turnaround sheet the artist produced after using our tool in the brainstorming session. The director worked with Artist 1, giving a score of 2 (disagree) to both the quality of the design and its match to the director's description. In the brainstorming session, the artist produced the thumbnail shown in Fig. 15, which the director approved. In the e-mail correspondence after the brainstorming session, however, the director was indecisive about the character's specifications.
These miscommunications resulted in both the artist and director rating the communication in the brainstorming session lower than in any other session, with the artist giving the director a score of 1 (strongly disagree) and the director giving the artist a score of 3 (neutral), as can be seen in Table 2.

Overall, both artists and directors believed that communication with their counterparts went smoothly. The directors rated communication with the artists during brainstorming sessions using our tool (Md=5, strongly agree) akin to sessions using other drawing tools (Md=5, strongly agree). Artists reported slightly better communication with the directors after using our tool (Md=5, strongly agree) compared to other drawing tools (Md=4, agree).

![01963e98-d669-739d-87db-14b5f9244e3a_8_944_144_706_617_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_8_944_144_706_617_0.jpg)

Figure 15: The turnaround sheet created by Artist 1 after using our tool in the brainstorming session. The lower right corner shows the colored thumbnail produced by our tool and its input sketch.

## 9 Discussion

Limitations. Although participants believed that our tool allows them to draft a character much faster than Photoshop, they encountered some limitations. For example, although our tool can color multiple faces sketched by an artist on the same canvas, our GAN tends to style all the faces with the same color scheme, limiting designs to one character per canvas. Moreover, P3 suggested that the face selector should "simply use the facial expressions, as opposed to the expressions plus some of the hair." Due to the face detection method we used, the selections we provided in the face selector included some portions of the characters' hair. With a more sophisticated feature segmentation and classification model, the different facial features (e.g., hair, eyes) could be segmented and displayed separately in the selector.
Some participants expressed the need for interface improvements such as a larger sketch canvas (P7), an undo button (P3, P4, P7, P11, P13, P14, P19, P27), and stroke sensitivity/customization (P3, P5, P9, P12, P14).

Our style selector is not fully customizable. For example, P3 wanted the ability "to have a way to set up a custom hair and skin color." Using an architecture similar to the one proposed by Karras et al. [24] to transfer the style could allow more finely-grained customization of the colorization scheme.

While some participants were content with the variety of faces (P9, P11, P26) and styles (P14, P18, P27) our tool provides, due to limitations in our dataset our tool does not offer artists large variations in skin tone, nor does it provide a substantial number of non-female characters in the face selector. Our tool is able to color male characters, as illustrated in Fig. 7, and we also provide them in the face selector. Moreover, participants like P7 created non-female characters using our tool despite the use of female pronouns in the design descriptions (as shown in Fig. 12). However, participants identified the need for further inclusivity, especially in the face selector (P7, P21). Nevertheless, 20 out of the 27 participants used the face selector in at least one of their final designs.

Participants overall praised the face selector for expediting the design process by providing a baseline for the character (P1, P2, P3, P6, P10, P26, P27). P27 found the face selector "made the app more useful in comparison to photoshop because you could start out with a template".
| | Artist 1 (Ours) | Artist 1 (Other) | Artist 2 (Ours) | Artist 2 (Other) | Artist 3 (Ours) | Artist 3 (Other) | Artist 4 (Ours) | Artist 4 (Other) | Artist 5 (Ours) | Artist 5 (Other) | Average (Ours) | Average (Other) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Session time (seconds) | 1,200 | 1,800 | 900 | 1,860 | 666 | 1,113 | 797 | 1,893 | 712 | 1,258 | 855 | 1,584.8 |
| Creation time (hours) | 3 | 3 | 1 | 1 | 1.5 | 2 | 6 | 6 | 4 | 4 | 3.1 | 3.2 |
| Quality | 2 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Description matching | 2 | 5 | 5 | 5 | 4 | 5 | 5 | 3 | 4 | 5 | 4 | 5 |
| Communication (director) | 3 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Communication (artist) | 1 | 5 | 5 | 2 | 5 | 4 | 5 | 3 | 5 | 5 | 5 | 4 |
Table 2: Results of comparing our tool to other drawing tools in our second user study. Our tool's results are shown in the Ours columns, while the other tools' results are shown in the Other columns. Session time indicates the duration of the brainstorming session in seconds. Creation time indicates the number of hours the artists spent working on the design after the brainstorming session. Quality indicates how the directors rated their satisfaction with the quality of the turnaround sheet. Description matching indicates the director's rating of how well the turnaround sheet matched their description. Communication (director) indicates the rating of communication during the brainstorming session that the director reported, while Communication (artist) indicates the rating the artist reported.

We received positive feedback from user study participants regarding our tool's applicability within the design pipeline (P1, P3, P6, P9, P10). Incorporating aspects of the NASA TLX (Task Load Index) [13] could also aid in further investigating our tool's usability.

Future Work. The current focus of our tool is to expedite the exploration of character faces. By expanding our dataset to include full-body character images, we may be able to train a network with an architecture similar to Esser et al.'s [8] to generate characters in various poses, body types, and clothing. This could expand our tool to aid artists in creating the entire turnaround sheet in addition to exploring the character's head-shot. PaintsChainer [35] allows artists to provide color hints to the tool. We may be able to achieve a similar interaction by including hint channels in our GAN's input layer. The GAN can be trained by randomly sampling colored strokes from the character images in the edges-to-character dataset and using them as the inputs to the hint channels.
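The stroke-sampling step just described could be sketched as follows. This is an illustrative NumPy sketch of the proposed training-time augmentation, not code from our implementation; the function name, channels-last layout, and values in [0, 1] are all assumptions:

```python
import numpy as np

def make_hint_channels(color_img, n_strokes=8, stroke_size=5, rng=None):
    """Build 4 hint channels (RGB hints plus a binary mask) by sampling
    random square color "strokes" from a ground-truth character image.

    `color_img` is assumed to be an (H, W, 3) float array in [0, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = color_img.shape
    hints = np.zeros((h, w, 3), dtype=np.float32)
    mask = np.zeros((h, w, 1), dtype=np.float32)
    for _ in range(n_strokes):
        # Pick a random top-left corner for each stroke.
        y = int(rng.integers(0, h - stroke_size))
        x = int(rng.integers(0, w - stroke_size))
        patch = color_img[y:y + stroke_size, x:x + stroke_size]
        # Use the patch's mean color as the sampled "stroke" hint and mark
        # the hinted region in the mask channel.
        hints[y:y + stroke_size, x:x + stroke_size] = patch.mean(axis=(0, 1))
        mask[y:y + stroke_size, x:x + stroke_size] = 1.0
    return np.concatenate([hints, mask], axis=-1)

# The sketch channel plus these 4 hint channels would then form a
# 5-channel generator input, e.g.:
# gan_input = np.concatenate([sketch[..., None],
#                             make_hint_channels(img)], axis=-1)
```

At test time, the randomly sampled strokes would simply be replaced by strokes the artist draws, so the generator sees the same input format in training and deployment.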
Expanding our dataset to more character styles may also allow us to cater to designers whose style is dissimilar to anime, as requested by P10 and P21. Fig. 16 shows a participant's design using Photoshop compared to our tool. Although our GAN could detect and generate portions of the character's hair and skin, the result is less than optimal when compared to the participant's Photoshop design. In this case the GAN fell short of differentiating some portions of the skin (e.g., the character's neck) from shading, and of determining the borders of the character's hair precisely.

Participants believed that our tool was effective in creating images for the character exploration phase of the design process (i.e., thumbnails) but not as finished pieces (P1, P12). Expanding its capabilities to generate high-resolution images (as suggested by P12), textures, lighting, and shading may broaden its applicability from a simple brainstorming tool to a standalone design tool. We may be able to achieve higher-resolution output by modifying our GAN's architecture and training it with high-resolution images, or by using the method proposed by Karras et al. [23] to train our GAN with low-resolution images. Finally, removing the generated background may produce more polished finished pieces. We may be able to remove generated backgrounds in post-processing by training a semantic segmentation network [43] to label backgrounds, which can then be subtracted. Alternatively, we may be able to prevent the GAN from generating backgrounds by subtracting the backgrounds from our dataset and training the GAN with the background-removed images.

## 10 CONCLUSION

In this paper, we trained a Generative Adversarial Network (GAN) to automatically color anime character sketches. Using the GAN, we created a tool that aids artists in the early stages of the character design process.
We evaluated the efficacy of our tool in comparison to Photoshop in a user study, which showed our tool's potential for speeding up the character exploration process while maintaining quality. Finally, we conducted a user study that simulates the director and artist interaction in the design pipeline. We concluded that our tool facilitated character design brainstorming without sacrificing the quality of the designs.

![01963e98-d669-739d-87db-14b5f9244e3a_9_932_693_662_407_0.jpg](images/01963e98-d669-739d-87db-14b5f9244e3a_9_932_693_662_407_0.jpg)

Figure 16: A character design which strayed from our dataset's anime style. (a) The participant created the design under the Character Design Tool and Limited time conditions without using the face selector. (b) The participant created the design under the Photoshop and Unlimited time conditions. (D3: "A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D2: "She's a cold, lone wolf with a sense of humor.")

## REFERENCES

[1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283, 2016.

[2] R. Arora, R. Habib Kazi, T. Grossman, G. Fitzmaurice, and K. Singh. Symbiosissketch: Combining 2D & 3D sketching for designing detailed 3D objects in situ. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 185:1-185:15. ACM, New York, NY, USA, 2018.

[3] S.-H. Bae, R. Balakrishnan, and K. Singh. ILoveSketch: As-natural-as-possible sketching system for creating 3D curve models.
In Proceedings of the 21st annual ACM symposium on User interface software and technology, pp. 151-160. ACM, 2008.

[4] A. Brock, J. Donahue, and K. Simonyan. Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.

[5] W. Chen and J. Hays. Sketchygan: Towards diverse and realistic sketch to image synthesis. CoRR, abs/1801.02753, 2018.

[6] Crypko. Crypko, 2018. Accessed March 16, 2020.

[7] H. Ekström. How can a character's personality be conveyed visually, through shape, 2013.

[8] P. Esser, E. Sutter, and B. Ommer. A variational u-net for conditional appearance and shape generation. CoRR, abs/1804.04694, 2018.

[9] A. Ghosh, R. Zhang, P. K. Dokania, O. Wang, A. A. Efros, P. H. S. Torr, and E. Shechtman. Interactive sketch & fill: Multiclass sketch-to-image translation. 2019.

[10] B. Gooch, E. Reinhard, and A. Gooch. Human facial illustrations: Creation and psychophysical evaluation. ACM Transactions on Graphics (TOG), 23(1):27-44, 2004.

[11] M. Guay, R. Ronfard, M. Gleicher, and M.-P. Cani. Space-time sketching of character animation. ACM Trans. Graph., 34(4):118:1-118:10, July 2015.

[12] X. Han, C. Gao, and Y. Yu. Deepsketch2face. ACM Transactions on Graphics, 36(4):1-12, Jul 2017.

[13] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in psychology, vol. 52, pp. 139-183. Elsevier, 1988.

[14] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2016.

[15] R. Henrikson, B. De Araujo, F. Chevalier, K. Singh, and R. Balakrishnan. Storeoboard: Sketching stereoscopic storyboards. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 4587-4598. ACM, 2016.

[16] F. Huang, J. F. Canny, and J. Nichols. Swire: Sketch-based user interface retrieval.
In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 104:1-104:10. ACM, New York, NY, USA, 2019. + +[17] X. Huang, M. Liu, S. J. Belongie, and J. Kautz. Multimodal unsupervised image-to-image translation. CoRR, abs/1804.04732, 2018. + +[18] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134, 2017. + +[19] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. + +[20] J. Jacobs, J. Brandt, R. Mech, and M. Resnick. Extending manual drawing practices with artist-centric programming tools. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 590. ACM, 2018. + +[21] Y. Jin, J. Zhang, M. Li, Y. Tian, H. Zhu, and Z. Fang. Towards the automatic anime characters creation with generative adversarial networks, 2017. + +[22] H. Kang, S. Lee, and C. K. Chui. Coherent line drawing. In Proceedings of the 5th international symposium on Non-photorealistic animation and rendering, pp. 43-50. ACM, 2007. + +[23] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017. + +[24] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. CoRR, abs/1812.04948, 2018. + +[25] R. H. Kazi, F. Chevalier, T. Grossman, and G. Fitzmaurice. Kitty: sketching dynamic and interactive illustrations. In Proceedings of the 27th annual ACM symposium on User interface software and technology, pp. 395-405. ACM, 2014. + +[26] R. H. Kazi, F. Chevalier, T. Grossman, S. Zhao, and G. Fitzmaurice. Draco: bringing life to illustrations with kinetic textures. 
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 351-360. ACM, 2014.

[27] R. H. Kazi, T. Igarashi, S. Zhao, and R. Davis. Vignette: Interactive texture design and manipulation with freeform gestures for pen-and-ink illustration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 1727-1736. ACM, New York, NY, USA, 2012.

[28] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization, 2014.

[29] K. C. Kwan and H. Fu. Mobi3dsketch: 3D sketching in mobile AR. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 176:1-176:11. ACM, New York, NY, USA, 2019.

[30] J. E. Kyprianidis and J. Döllner. Image abstraction by structure adaptive filtering. 2008.

[31] C. Lundwall. Creating guidelines for game character designs. 2017.

[32] L. v. d. Maaten and G. Hinton. Visualizing data using t-SNE. Journal of machine learning research, 9(Nov):2579-2605, 2008.

[33] Y. Matsui, K. Aizawa, and Y. Jing. Sketch2manga: Sketch-based manga retrieval. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 3097-3101, Oct 2014.

[34] Nagadomi. Animeface-character-dataset, 2019. Accessed July 5, 2019.

[35] PaintsChainer. http://paintschainer.preferred.tech, 2017. Accessed March 16, 2020.

[36] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234-241. Springer, 2015.

[37] G. Saul, M. Lau, J. Mitani, and T. Igarashi. Sketchchair: An all-in-one chair design system for end users. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '11, pp. 73-80. ACM, New York, NY, USA, 2011.

[38] Y. Shi, N. Cao, X. Ma, S. Chen, and P. Liu. Emog: Supporting the sketching of emotional expressions for storyboarding.
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, pp. 1-12. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376520

[39] M. Shugrina, A. Kar, K. Singh, and S. Fidler. Color sails: Discrete-continuous palettes for deep color exploration. CoRR, abs/1806.02918, 2018.

[40] J. Song, K. Pang, Y. Song, T. Xiang, and T. M. Hospedales. Learning to sketch with shortcut cycle consistency. CoRR, abs/1805.00247, 2018.

[41] S. Takahashi. python-animeface, 2013. Accessed July 7, 2019.

[42] J. Tan, J. Echevarria, and Y. Gingold. Efficient palette-based decomposition and recoloring of images via RGBXY-space geometry. ACM Transactions on Graphics (TOG), 37(6):262:1-262:10, Dec 2018.

[43] Y.-H. Tsai, W.-C. Hung, S. Schulter, K. Sohn, M.-H. Yang, and M. Chandraker. Learning to adapt structured output space for semantic segmentation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2018. doi: 10.1109/cvpr.2018.00780

[44] P. Wacker, O. Nowak, S. Voelker, and J. Borchers. Arpen: Mid-air object manipulation techniques for a bimanual AR system with pen & smartphone. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 619:1-619:12. ACM, New York, NY, USA, 2019.

[45] H. Winnemöller. XDoG: Advanced image stylization with extended difference-of-gaussians. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering, pp. 147-156. ACM, 2011.

[46] H. Winnemöller, S. C. Olsen, and B. Gooch. Real-time video abstraction. In ACM Transactions On Graphics (TOG), vol. 25, pp. 1221-1226. ACM, 2006.

[47] S. Xiang and H. Li. Anime style space exploration using metric learning and generative adversarial networks, 2018.

[48] S. Xiang and H. Li. Disentangling style and content in anime illustrations, 2019.

[49] P. Xu, H. Fu, Y. Zheng, K. Singh, H. Huang, and C. Tai. Model-guided 3D sketching.
IEEE Transactions on Visualization and Computer Graphics, pp. 1-1, 2018.

[50] J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems, 2017.

[51] C. Zou, H. Mo, R. Du, X. Wu, C. Gao, and H. Fu. LUCSS: Language-based user-customized colourization of scene sketches. Aug. 2018.

§ EXPLORING SKETCH-BASED CHARACTER DESIGN GUIDED BY AUTOMATIC COLORIZATION

Category: Research

Figure 1: Our character exploration tool facilitates the character design process by allowing artists to explore characters using colored thumbnails synthesized from sketches. These colored thumbnails, which are traditionally rough grey-scale sketches, better visualize the character for creating the turnaround sheet.

§ ABSTRACT

Character design is a lengthy process, requiring artists to iteratively alter their characters' features and colorization schemes according to feedback from creative directors or peers. Artists experiment with multiple colorization schemes before deciding on the right color palette.
This process may necessitate several tedious manual re-colorizations of the character, and any substantial change to the character's appearance may likewise require manual re-colorization. Such complications motivate a computational approach for visualizing characters and drafting solutions.

We propose a character exploration tool that automatically colors a sketch based on a selected style. The tool employs a Generative Adversarial Network trained to automatically color sketches, and it allows a selection of faces to be used as a baseline for the character's design. We validated our tool by comparing it against Photoshop for character exploration in our pilot study. Finally, we conducted a study to evaluate our tool's efficacy within the design pipeline.

Index Terms: Human-centered computing—Human computer interaction (HCI)—Interaction paradigms—Graphical user interfaces

§ 1 INTRODUCTION

Fig. 1 illustrates a typical character design process. At the very beginning of the process, the designer is furnished with a character description that outlines a combination of personality (e.g., courageous, melancholic) and physical traits (e.g., long hair, small frame) $\left\lbrack {7,{31}}\right\rbrack$ . Their first task is to sketch out the character's distinguishing expressions and physical features into a thumbnail, which is often a rough, low-resolution gray-scale sketch. From the thumbnail the designer then develops a character turnaround sheet, a reference for later drawing the character in context. The turnaround sheet is then presented to the creative director for feedback, and the entire process iterates. Because the ideation and creation of a turnaround sheet are manual processes, the artist often has to restart from scratch.

We devised a tool driven by a sketching interaction that automatically colors the character thumbnail, enabling artists and their creative directors to do more early exploration with less investment of effort.
In practice, these thumbnails may also be used as references for producing the turnaround sheet in higher resolution using Photoshop. Fig. 1 shows an example of a colored character thumbnail synthesized using our tool.

Our tool generates these colored character thumbnails based on artists' sketches and their color and character face selections. Specifically, we achieve this by training a Generative Adversarial Network (GAN) on an anime dataset. We use the GAN to generate colored thumbnails as the artists sketch, while also allowing them to place characters' faces and select their colorization style. For participants in our user study, this selection-based automatic colorization framework significantly sped up the character exploration process compared to Photoshop, without sacrificing quality. The major contributions of our work include:

 * Proposing a novel generative character exploration tool by training a GAN to automatically color sketches.

 * Allowing artists using our tool to select character faces and colorization styles, as well as to edit the character by sketching directly on the canvas.

 * Validating the effectiveness of our tool in facilitating the character exploration process compared to Photoshop via a number of design tasks.

§ 2 RELATED WORK

§ 2.1 SKETCH-BASED INTERACTIONS

Similar to our approach, several works have explored using sketching as an interaction technique in different contexts.

We draw inspiration from several works that utilized sketching as an animation interaction. Kazi et al. $\left\lbrack {{25},{26}}\right\rbrack$ created an interface to allow users to animate their 2D sketches, while Guay et al. [11] presented a novel technique to animate 3D characters' motion using a single stroke. Storeoboard [15] allows filmmakers to sketch stereoscopic storyboards to visualize the depth of their scenes.
Our approach aims to incorporate sketching into a 2D design process, while several works examine sketch interactions in 3D design. Saul et al. [37] created a design system for chair fabrication. Xu et al. [49] introduced a model-guided 3D sketching tool that allows designers to redesign existing 3D models. Huang et al. [16] created a sketch-based user interface design system. ILoveSketch [3], a curve sketching system, allows designers to iterate directly on their 3D designs. Sketch-based interaction techniques in Augmented Reality have been explored by the HCI community as well [2, 29, 44].

Several works have explored using sketching to design cartoon characters specifically. Sketch2Manga [33] creates characters from sketches; unlike our approach, which uses a generative method to output a character from a sketch, it uses image retrieval to match the query against a character database. Han et al. [12] introduced a deep learning method to create 3D caricatures from an input 2D sketch. Because the generated caricatures take the form of a texture-less 3D model, we opted instead for a network architecture that enables the generation of 2D images and control over their colorization style.

With our tool, we aim to improve the traditional design process for artists. Similarly, Jacobs et al. [20] introduced a tool that allows artists to create dynamic procedural brushes by varying the rotation, reflection, and style of their strokes. Moreover, Vignette [27] is an interactive tool that allows artists to create custom textures and automatically fill selected regions of their illustrations with them.

§ 2.2 IMAGE GENERATION

Recently, generative modeling approaches have emerged as a powerful, data-driven way to map sketches directly into images. Isola et al.
[18] show that conditional GANs are an effective general-purpose tool for image-to-image translation problems and can be applied to mapping sketches to images. The sketch-to-image problem is also inherently ambiguous, as different colors and "styles" yield multiple plausible completions. Follow-up works [17, 50] introduce extensions that enable multiple predictions. We find that, for our task, BicycleGAN [50] is able to effectively generate colored character illustrations from edge maps due to its multimodality. One challenge is the difficulty of obtaining real sketches. Methods such as [5, 9, 40] use generative models to generate sketches themselves. We find that "synthesized" sketches based on edge maps, with some carefully selected preprocessing choices, are adequate for our application.

Using the style selector in our tool, artists can choose the colorization scheme of their characters. Similarly, Color Sails [39] is a tool that allows coloring designs from a discrete-continuous color palette defined by the user. Tan et al. [42] developed a tool for real-time image palette editing. Zou et al. [51] introduced a language-based scene colorization tool. Xiang et al. [47] explored the style space of anime characters by training a style encoder that embeds images into a style space in which the distance between codes corresponds to the similarity of the artists' styles.

In later work, Xiang et al. [48] developed a Generative Adversarial Disentanglement Network with independent style and content codes. This allows separate control over the style and content of the image, enabling faithful image generation with proper style-specific facial features (e.g., eyes, mouth, chin, hair, blushes, highlights, contours) as well as overall color saturation and contrast. The neural transfer methods used by Xiang et al.
[47, 48] do not transfer facial features consistently (e.g., they may transfer the mouth from some images but not all). We therefore let artists control facial features only via the sketch canvas and/or the face selector rather than through the neural transfer methods of Xiang et al. Nonetheless, due to its effectiveness, we still use neural transfer to control the sketch's colorization.

§ 2.3 CHARACTER DESIGN

EmoG [38] is a character design tool introduced to facilitate storyboarding. EmoG generates facial expressions according to the user's emotion selection and sketch. Akin to our approach, users can drag and drop a facial expression onto the canvas in addition to drawing directly on it. Unlike our approach, EmoG renders no colorization suggestions to the user and focuses on facilitating the drafting of characters' emotional expressions rather than their overall appearance.

Figure 2: Overview of our approach. To begin, an artist may place a face from the face selector onto the design canvas. The artist then directly sketches on the design canvas. If a style is selected, the sketch will be automatically colored by the GAN with the selected style. Otherwise, the tool suggests a random colorization scheme.

MakeGirlsMoe [21] is a tool that helps artists brainstorm by allowing them to select facial features to automatically generate a character illustration. However, it offers an unnatural, discrete selection-based interaction compared to interfaces that let the user illustrate by sketching. MakeGirlsMoe was later updated to create the crypto-currency generator Crypko [6]. Neither framework was available to us during the user evaluation, so they were not compared to our tool. PaintsChainer [35] automatically colors sketches based on the artist's color hints in the form of brush strokes on top of the sketch.
It colors a completed line art that the user uploads, but it does not allow the user to modify the character by placing or editing expressions and features on the canvas, nor to start from a blank canvas and iteratively sketch a character. Consequently, it neglects the need for a sketch-based iterative tool that combines a feature selection-based interaction with automatic colorization. Hence, we developed an interactive character design tool equipped with a face selector, a colorization style selector, and a sketching canvas to fulfill that need.

Auto-colorization features have been introduced in commercial software such as Adobe Illustrator and Clip Studio Paint. However, Adobe Illustrator is limited to coloring black-and-white photographs. Clip Studio Paint, on the other hand, can color cartoons, but like PaintsChainer it can only color completed line art.

§ 3 OVERVIEW

Fig. 2 shows an overview of our approach. We trained a GAN on an edges-to-character dataset obtained by extracting the edgemaps of colored anime characters. The GAN learned to produce a colored anime character illustration given a sketch. Using the GAN, we built a framework that enables character exploration by allowing users to select and place facial features as well as sketch onto a canvas. As users edit the canvas, the GAN automatically colors their illustrations according to the styles they selected. Finally, we demonstrated the effectiveness of our tool by conducting a user study comparing it with Adobe Photoshop.

§ 4 DATA PROCESSING

We obtained our training and validation image pairs from the anime-face character dataset [34]. We used the automated process described below to extract edge maps from the face images, creating our edges-to-character dataset.

Figure 3: To synthesize artist sketches we used an edge operator on our dataset images. (a) A character image sampled from our training set.
The edgemaps were created by applying the DoG filter with (b) $\sigma = 0.3$ and (c) $\sigma = 0.5$.

Animeface Dataset. The animeface character dataset [34] contains a total of 12,213 face images. We randomly selected 10,992 of them (90%) for the training set. The remaining 10% (1,221 images) were used as a validation set to monitor the progress of training the GAN.

Edgemaps. Because obtaining character datasets that pair sketches with their corresponding colored counterparts is costly, we used an edge detector on the dataset images to simulate sketches. The standard Difference of Gaussians (DoG) filter has been used successfully in several works to synthesize line drawings [10, 22, 30, 46], and unlike the eXtended difference-of-Gaussians (xDoG) filter [45] it does not tend to fill dark regions. We created the edgemaps of our training and validation images by converting them to grayscale and then applying the DoG filter with $\gamma = 10^9$ and $k = 4.5$ (see Fig. 3). The value of $\sigma$ was randomly selected from $\{0.3, 0.4, 0.5\}$ for each image to allow for variation in the amount of noise in the edgemaps.

Image Processing. The images in the animeface dataset have a maximum size of 160 px in either dimension and various aspect ratios, while our GAN training process expects images sized exactly 256×256 px. To match these requirements, we uniformly scaled the animeface images to fit using bilinear interpolation. For non-square aspect ratios, we filled the rest of the square canvas by repeating edge pixels as shown in Fig. 4. We selected this repetition fill rather than a solid background color so the network would not learn to reproduce a solid border.
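The two preprocessing steps above can be sketched as follows. This is an illustrative re-implementation, not the paper's pipeline: we assume the sharpened DoG form $D = (1+\gamma)G_\sigma - \gamma G_{k\sigma}$ (which matches the very large $\gamma$ reported), and the padding here replicates only the bottom/right borders, whereas Fig. 4 specifies a direction per aspect ratio.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edgemap(gray, sigma=0.3, k=4.5, gamma=1e9):
    """Sharpened DoG line extraction, D = (1+gamma)*G_sigma - gamma*G_{k*sigma}.
    With a very large gamma this approximates a sign test between the two
    blur scales; pixels with a negative response become dark lines."""
    g1 = gaussian_filter(gray.astype(np.float64), sigma)
    g2 = gaussian_filter(gray.astype(np.float64), k * sigma)
    d = (1.0 + gamma) * g1 - gamma * g2
    return np.where(d >= 0.0, 1.0, 0.0)  # white background, black lines

def pad_to_square(img):
    """Fill a non-square image out to a square by replicating border
    pixels (np.pad mode='edge') instead of a solid background color."""
    h, w = img.shape[:2]
    s = max(h, w)
    pad = ((0, s - h), (0, s - w)) + ((0, 0),) * (img.ndim - 2)
    return np.pad(img, pad, mode="edge")
```

For example, `pad_to_square` applied to a 3×4 image yields a 4×4 image whose extra row repeats the last row of the input.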
Figure 4: The arrow depicts the direction in which border pixels are replicated when the (a) height or (b) width is the smaller dimension.

§ 5 CHARACTER COLORIZATION MODEL TRAINING

To generate each colored character image from its paired edgemap image in our dataset, we used BicycleGAN from Zhu et al. [50].

Architecture. We train the network on the 256×256 paired images from our training edges-to-character dataset. For our encoder, we found that using the ResNet [14] encoder explored by Zhu et al. helped decrease the number of images with artifacts generated by the GAN. We use a U-Net [36] generator and PatchGAN [19] discriminators.

In preliminary experiments, we found that changing the dimension of the latent code $\left| \mathbf{z}\right|$ produces different results. Too high a dimension leads to variation in the background style instead of the character colorization style, while too low a dimension leads to inadequate variation in the character colorization style. Ultimately, we found a latent code of size 8 to work well empirically. GANs are known to "collapse" when training lasts too long [4]. Consistent with this, we noticed that after 71 epochs colorization resolution continued to improve at the expense of style variation as the GAN started over-fitting on the training data. Because our tool is intended for exploring character designs and producing low-resolution sample thumbnails, we opted to halt training at 71 epochs to maintain variation in style and avoid overfitting.

Training. We inherit many of the default parameters and practices of BicycleGAN: ${\lambda}_{\text{image}} = 10$, ${\lambda}_{\text{latent}} = 0.5$ and ${\lambda}_{\mathrm{KL}} = 0.01$. We trained for 71 epochs using Adam [28] with batch size 1 and learning rate 0.0002. We updated the generator once for each discriminator update, while the encoder and generator were updated simultaneously. We used the TensorFlow library [1].
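The loss weighting above can be sketched as a simple weighted sum. Only the $\lambda$ weights come from the paper; the individual loss values are placeholders for quantities BicycleGAN computes from network outputs, and details such as which terms back-propagate into which network are omitted.

```python
# Hypothetical sketch of BicycleGAN's generator/encoder objective weighting.
# The lambda values are the paper's; the loss arguments are placeholders.
LAMBDA_IMAGE = 10.0   # weight on the L1 image reconstruction loss
LAMBDA_LATENT = 0.5   # weight on the L1 latent-code reconstruction loss
LAMBDA_KL = 0.01      # weight on the encoder's KL-divergence loss

def generator_objective(loss_gan, loss_l1_image, loss_l1_latent, loss_kl):
    """Weighted sum of the adversarial, image reconstruction,
    latent reconstruction, and KL loss terms."""
    return (loss_gan
            + LAMBDA_IMAGE * loss_l1_image
            + LAMBDA_LATENT * loss_l1_latent
            + LAMBDA_KL * loss_kl)
```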
Training took approximately 48 hours on an Nvidia GeForce GTX 1070 GPU.

We found these parameters empirically. Note that our goal is not to produce a state-of-the-art generative model for this task per se, but rather to explore how a reasonable implementation of a powerful generative model can be leveraged for downstream character exploration by an artist.

§ 6 CHARACTER EXPLORATION TOOL

Due to its multi-modality, our trained neural network is able to color each edgemap in various styles. We describe the several colorization methods incorporated in our character design tool, shown in Fig. 6. The colorization results presented in this section were generated using the same apparatus used for training.

Suggested Colorization. We can color the edgemaps by randomly sampling the latent code $\mathbf{z}$ from a Gaussian distribution and injecting it into the network using the add_to_input method explored by Zhu et al. [50], which spatially replicates $\mathbf{z}$ and concatenates it into only the first layer of the generator.

Fig. 5 shows colorization results for images in our validation set obtained by randomly sampling the latent code. By varying the latent code, the network is able to vary the character's hair color. Because the majority of anime characters in the dataset have matching hair and eye colors, the network varies the hair and eye color jointly. Darker hair colors can be generated by increasing the amount of shading, as can be seen in the second row of Fig. 5.

Style-based Colorization. We can also inject the latent code $\mathbf{z}$ of other images (i.e., style images) into the network, which enables us to color the input edgemap according to those style images. We first encode the style image to its latent code $\mathbf{z}$. We then generate the character image from the edgemap by injecting the style image's latent code $\mathbf{z}$ using the add_to_input method.

Fig.
7 shows the results of coloring input edge images from our validation set using a set of style images. Because the training set includes multi-faced images, the network is able to color multiple faces in one image (as illustrated by the final row of Fig. 7), giving artists the ability to sketch multiple faces on the same canvas. These faces are generated with the same colorization scheme. Because we did not remove the backgrounds from the training images, the GAN generates the backgrounds as part of the image's style.

Implementation Details. We designed the character exploration tool (shown in Fig. 6) so that artists can sketch on the design canvas using the brush and eraser provided. The brush is circular and its diameter can be adjusted with the brush slider from 1 to 10 pixels. The eraser is likewise circular, with a diameter adjustable from 1 to 20 pixels.

Artists can also place facial expressions at any location on the design canvas from our face selector by clicking the facial expression and then the canvas. The facial expressions were created by extracting the faces found with an anime face detector [41]. We selected 60 of the faces detected in the validation set for use in the face selector.

The style selector provides a set of style images from our validation set. These style images were selected by embedding the 8-dimensional latent codes of images in our validation set into two dimensions using t-SNE [32]. The embeddings are visualized as a 10×10 grid by snapping the two-dimensional embeddings to the grid: every position in the grid is assigned the style image whose embedded code has the smallest Euclidean distance to that grid position. Twelve of the 100 style images visualized in the t-SNE grid were discarded due to the presence of artifacts when they were used as style images for colorization.
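Two of the mechanisms above can be sketched in NumPy: the add_to_input latent-code injection, which tiles $\mathbf{z}$ spatially and concatenates it to the input channels, and the snapping of 2D t-SNE embeddings to grid positions. Both are illustrative sketches under assumed array shapes, not the tool's actual implementation; in particular, this grid snapping may assign one image to several cells, whereas the tool's arrangement may enforce uniqueness.

```python
import numpy as np

def add_to_input(edgemap, z):
    """Tile latent code z across the spatial dimensions and concatenate
    it to the input channels: (H, W, C) -> (H, W, C + |z|), as in
    BicycleGAN's add_to_input injection into the first layer."""
    h, w, _ = edgemap.shape
    z_tiled = np.tile(z.reshape(1, 1, -1), (h, w, 1))
    return np.concatenate([edgemap, z_tiled], axis=-1)

def snap_to_grid(embeddings, grid_size=10):
    """For each cell of a grid_size x grid_size grid, return the index of
    the 2D embedding with the smallest Euclidean distance to that cell's
    center (embeddings are first normalized to the unit square)."""
    lo, hi = embeddings.min(axis=0), embeddings.max(axis=0)
    norm = (embeddings - lo) / np.where(hi - lo == 0, 1, hi - lo)
    centers = (np.arange(grid_size) + 0.5) / grid_size
    gy, gx = np.meshgrid(centers, centers, indexing="ij")
    cells = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (G*G, 2)
    d = np.linalg.norm(cells[:, None, :] - norm[None, :, :], axis=-1)
    return d.argmin(axis=1).reshape(grid_size, grid_size)
```

For example, injecting an 8-dimensional code into a 256×256×1 edgemap produces a 256×256×9 input tensor for the generator.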
After discarding these, we used 88 images in total in our style selector. For consistency, and to keep the varying background, resolution, and artistry of the style images from biasing artists' selections in the user study, we display preview images colored with the style-based colorization method using the style images arranged in the t-SNE grid. Fig. 6 shows some of these preview images in the style selector. Please refer to the supplementary material for the t-SNE grid visualization.

Figure 5: Sample suggested colorizations from our model. The first column shows the input edgemap. The second column shows the original image. The last four columns show the colorization results of our network with a latent code $\mathbf{z}$ randomly sampled from a Gaussian distribution for each generated sample.

The colored sketch is shown on the display canvas. If the artist has not selected a style image in the style selector, the image is colored using the suggested colorization method. Otherwise, the sketch is colored with the style-based colorization method, using the artist's selection in the style selector as the style image. The display canvas updates automatically every 20 seconds. The artist can also trigger an update by pressing the run button. If no style image is selected, pressing the run button applies a random colorization with a newly sampled latent vector, giving the artist an additional way, beyond the style selector, to explore the colorization space. The sketching canvas can be cleared by pressing the clear button.

§ 7 PILOT USER STUDY

Participants. We recruited 27 artists, with ages ranging from 19 to 30, to participate in our IRB-approved study. Fig.
8 shows participants' average years of experience with sketching (M = 5.52, SD = 4.65), character design (M = 2.26, SD = 2.96), and using Adobe Photoshop (M = 3.26, SD = 3.04). Participants' experience is listed in more detail in the supplemental material.

Setup. Participants sketched on a Wacom Cintiq Pro 13 tablet with a 13-inch display. Our tool was loaded on the tablet, and the participants sketched directly on the screen. We used the same apparatus employed in training the GAN to generate the images of the display canvas.

Tasks. Following the completion of a training task, participants were given 6 tasks. Each design task is a combination of a design request, a time condition, and a tool condition. Participants completed each of the 6 design requests shown in Table 1, which were created in consultation with a professional character designer. The time conditions, Limited and Unlimited, determined whether participants completed the request within a 15-minute limit or under no time constraint, respectively. The Limited time condition was used to compare the quality of designs under tight time constraints. Our tool conditions were: our character design tool, our character design tool supplemented with pencil/paper, and Adobe Photoshop. To allow for within-subject comparisons between tool conditions under each time condition, participants completed all 3 tool conditions under each time condition. The orderings and combinations of the design requests, tool conditions, and time conditions were randomized for each participant to avoid carryover effects. For example, one participant might be given the first design request from Table 1 to complete under the Adobe Photoshop and Unlimited conditions as their first task, while another participant first completes the third design request under the character design tool and Limited conditions.
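The randomized crossing of the 6 design requests with the 3 tool × 2 time combinations can be sketched as follows (a hypothetical helper, not the authors' study code; the condition names are paraphrased):

```python
import random

TOOLS = ["design tool", "design tool + pencil/paper", "Photoshop"]
TIMES = ["Limited", "Unlimited"]
REQUESTS = ["D1", "D2", "D3", "D4", "D5", "D6"]

def assign_tasks(rng):
    """Randomly pair the 6 design requests with the 6 tool x time
    combinations, in random order, for one participant. Each tool
    appears twice (once per time condition) and each request once."""
    combos = [(tool, time) for tool in TOOLS for time in TIMES]
    rng.shuffle(combos)
    requests = REQUESTS[:]
    rng.shuffle(requests)
    return list(zip(requests, combos))
```

Drawing a fresh ordering per participant in this way counterbalances tool and time conditions against the design requests.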
On average, participants completed the study, including the training task, in approximately 90 minutes.

§ TRAINING TASK

We allowed participants to freely explore the tool before receiving any design requests. To facilitate the learning process, we provided participants with a tutorial explaining the various functionalities of our tool along with the study's structure.

§ CHARACTER DESIGN TOOL

Participants completed two design requests using our tool, without using a pencil or paper. One request was completed within 15 minutes while the other was completed without a time constraint.

Figure 6: The UI of the character exploration tool in our user study. The canvases show a thumbnail designed using our tool.

Figure 7: Style-based colorization using our model. The left column shows the input edgemap while the second column shows the original image. The six rightmost columns show the results of style-based colorization on the edgemap with various styles.

Figure 8: The participants' average years of experience with sketching, character design, and using Adobe Photoshop.

| ID | Design Request |
| --- | --- |
| D1 | Cheerful female character with long hair. She has a cool and flowing appearance. |
| D2 | She's a cold, lone wolf with a sense of humor. |
| D3 | A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge. |
| D4 | She's a determined and courageous healer, with a dark and eerie appearance. |
| D5 | She's a dedicated and knowledgeable scholar with a bright and sunny aesthetic. |
| D6 | She's a charming and fun-loving socialite with a vintage and classic look. |

Table 1: The 6 character design requests given to participants in our user study.

§ CHARACTER DESIGN TOOL AND PENCIL/PAPER

Participants were allowed to use a pencil and paper for two design requests completed using our tool.
They completed one request within 15 minutes and one without a time constraint. Some artists typically plan their designs on paper before moving to editing software, so we added this tool condition to inspect whether allowing artists to use pencil/paper affects their workflow.

§ ADOBE PHOTOSHOP

As in the previous tool conditions, participants completed two requests using Adobe Photoshop, once without a time constraint and once with a 15-minute limit. To mimic participants' typical design process as closely as possible, we provided them with a pencil and paper in this tool condition as well.

User Survey. After completing the 2 tasks under each tool condition (i.e., once with a 15-minute time limit and once without), participants completed a survey evaluating the performance of the tool used. We opted for a 5-point Likert scale, akin to [25]. Participants rated the following statements from 1 (strongly disagree) to 5 (strongly agree):

 * The tool was easy to use and learn.

 * I find the tool overall to be useful.

Finally, we surveyed participants once more after they completed the study in its entirety. Participants were asked to rate each of their colored designs from 1 (Poor) to 5 (Excellent). Because participants were aware of the study's time constraints, they were well placed to judge their artworks' quality fairly; we therefore opted to rely on the participants' evaluation of their own work instead of using external evaluators. We were also interested in which designs participants favored overall, so for each time condition participants were asked to vote for the design created under the Character Design Tool, Character Design Tool and Pencil/Paper, or Photoshop condition.

Figure 9: Average time of completing the design requests.

§ 7.1 TIME OF COMPLETION

Fig.
9 shows the average time taken by participants to complete the character design requests under each tool condition. Mauchly's test did not show a violation of sphericity for tool condition ($W(2) = 0.84$, $p = 0.11$). A one-way repeated-measures ANOVA found a significant effect of the tool used on the time of design completion ($F(2, 52) = 14.53$, partial $\eta^2 = 0.36$, $p < 0.001$). We performed Bonferroni-corrected paired t-tests for our post-hoc pairwise comparisons.

Participants completed the designs faster using our tool than Photoshop. A post-hoc test showed that the average time participants took to complete the designs using Photoshop ($1205 \pm 135.16$ seconds) was longer than using our tool with pencil/paper ($801.7 \pm 91.28$ seconds) ($p = 0.01$). A post-hoc test also showed that participants completed the design requests in less time using our tool without pencil/paper ($605.93 \pm 60.73$ seconds) than using Photoshop ($p < 0.01$). The post-hoc test showed no significant difference in completion time between using our character design tool with or without pencil/paper ($p = 0.12$). This suggests that while using our tool sped up the design process compared to Photoshop, including pencil/paper did not yield any observable significant improvement in our setting.

Participants remarked that our tool expedites the design process (P1, P3, P10, P20). P3 specifically noted that our tool "makes producing a character design much faster and easier than doing it on paper."

§ 7.2 EVALUATION OF EXPERIENCE SURVEY

Fig. 10 shows participants' responses to "The tool was easy to use and learn." for each tool condition.
A Friedman test showed a significant difference in participants' responses to the statement ($\chi^2(2) = 13.73$, $p = 0.01$). We also conducted post-hoc analysis using Wilcoxon signed-rank tests with Bonferroni correction. As with Adobe Photoshop, participants found our tool easy to use (Md = 4, agree). However, the post-hoc tests showed a significant difference in ease of use between our tool and Adobe Photoshop: we found a significant difference when comparing participants' responses after using our tool without pencil/paper and Adobe Photoshop ($W = 199$, $Z = -3.04$, $r = 0.41$, $p = 0.007$), and likewise when comparing our tool with pencil/paper and Adobe Photoshop ($W = 144$, $Z = -3.25$, $r = 0.44$, $p = 0.003$). These results can be observed in Fig. 10 in the broader variation of responses given to our tool conditions compared to the Photoshop condition.

Figure 10: Participants answered the questions in the experience survey with a rating of 1 (strongly disagree) to 5 (strongly agree).

Our participants' familiarity with photo editing software may have contributed to the consensus on Adobe Photoshop's ease of use compared to our tool. Although some participants, like P26, appreciated the simplicity of our application, stating that "it's modestly easy to use for character designers of any experience level. It's perfect as it is.", the absence of the exhaustive set of features common in modern editing software might have contributed to our tool's wider range of ease-of-use ratings. The post-hoc test showed no significant difference between the ease of using our tool with or without pencil/paper ($W = 33$, $Z = 0.92$, $p = 0.35$).

Our Friedman test found a significant difference in participants' responses to the "I find the tool overall to be useful."
statement as well ($\chi^2(2) = 9.86$, $p = 0.007$). The post-hoc test ($W = 63.5$, $Z = -1.33$, $p = 0.56$) showed no significant difference between the usefulness rating of Adobe Photoshop (Md = 4, agree) and our tool without pencil/paper (Md = 5, strongly agree), despite our tool having a higher median rating than Adobe Photoshop. Conversely, the post-hoc test ($W = 135$, $Z = -2.86$, $r = 0.39$, $p = 0.013$) showed a significant difference between the rating of Adobe Photoshop and our tool with pencil/paper, despite the two having the same median rating (Md = 4, agree). Furthermore, we found no significant difference between responses under the Character Design Tool (Md = 5, strongly agree) and Character Design Tool and Pencil/Paper (Md = 4, agree) conditions ($W = 4$, $Z = -2.49$, $r = 0.34$, $p = 0.038$). The inclusion of pencil and paper as an additional step in the participants' pipeline might have made the design process more cumbersome, leading participants to view the usefulness of our tool under the Character Design Tool and Pencil/Paper condition as lower than under the other two conditions, as shown in Fig. 10.

§ 7.3 EVALUATION OF DESIGNS

Fig. 11 shows how participants rated the designs produced under the various tool conditions we studied. The designs produced under the Limited constraint were rated similarly (Md = 3) under all tool conditions. A Friedman test also indicated no significant difference in the rating of designs produced under that time constraint ($\chi^2(2) = 1.98$, $p = 0.37$).

Figure 11: Participants were asked to rate their designs with a rating of 1 (poor) to 5 (excellent). Limited and Unlimited refer to whether the design was created with a 15-minute time limit or with unlimited time.
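The statistical procedure used throughout Sections 7.1–7.3 (an omnibus test followed by Bonferroni-corrected post-hoc pairwise comparisons) can be sketched with SciPy. This is an illustrative re-implementation under assumed data layouts, not the authors' analysis code:

```python
from itertools import combinations
from scipy.stats import friedmanchisquare, ttest_rel, wilcoxon

def bonferroni(p, n_comparisons):
    """Bonferroni correction: inflate p by the comparison count, cap at 1."""
    return min(p * n_comparisons, 1.0)

def posthoc_paired_t(samples):
    """Pairwise paired t-tests, Bonferroni-corrected (as used for the
    completion times after the repeated-measures ANOVA). `samples` maps
    condition name -> per-participant measurements, aligned by subject."""
    pairs = list(combinations(samples, 2))
    return {(a, b): bonferroni(ttest_rel(samples[a], samples[b]).pvalue,
                               len(pairs))
            for a, b in pairs}

def friedman_with_wilcoxon(samples):
    """Friedman omnibus test over all conditions, then pairwise
    Bonferroni-corrected Wilcoxon signed-rank tests (as used for the
    Likert ratings)."""
    _, p_omnibus = friedmanchisquare(*samples.values())
    pairs = list(combinations(samples, 2))
    posthoc = {(a, b): bonferroni(wilcoxon(samples[a], samples[b]).pvalue,
                                  len(pairs))
               for a, b in pairs}
    return p_omnibus, posthoc
```

With three conditions there are three pairwise comparisons, so each post-hoc p-value is multiplied by 3 before being compared against the 0.05 threshold.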
Although the median rating was higher for images designed under the Character Design Tool and Photoshop conditions (Md = 4) than under the Character Design Tool and Pencil/Paper condition (Md = 3), a Friedman test found no significant difference in the rating of designs produced without any time constraint ($\chi^2(2) = 4.13$, $p = 0.13$).

Some participants (P4, P12) noted that the artwork they produced during the user study does not reflect their abilities. This may suggest that participants rated the designs against their previous body of work, giving all the designs an overall neutral rating and consequently yielding no significant difference in ratings across tool conditions. Nevertheless, the designs created using our tool received the majority of participants' votes, as can be seen in Fig. 14.

Fig. 12 shows selected participants' thumbnails created using our tool, while Fig. 13 shows their designs created using Photoshop. The examples created under the Character Design Tool condition appear to be of better quality than their Photoshop counterparts, and the participant also created the design faster using our tool (386 seconds) than using Photoshop (620 seconds) under the Unlimited time condition. Although the designs created using Photoshop are comparable to those created under the Character Design Tool and Pencil/Paper condition, the participant completed the design much faster using our tool (388 seconds) than using Photoshop (652 seconds) under the Unlimited time condition. The remaining thumbnails are included in the supplemental material.

§ 8 EVALUATION OF THE TOOL'S USAGE IN THE WILD

To evaluate the effectiveness of our tool in the design workflow, we conducted a user study that simulates directors' and artists' workflow in the character design process.
Due to the pandemic, we were unable to recruit a large number of participants and thus could not conduct a large-scale user study; this study was also conducted remotely.

Participants. We recruited 5 of the artists (ages 19 to 30) from our initial user study to participate in our second IRB-approved study. We also recruited 5 participants aged 19 to 25 to act as art directors.

Setup. To use our tool, the artists connected via TeamViewer to the same device used in our initial user study. For comparison, the artists were asked to use their preferred drawing tools; we placed no constraints on the software and instead encouraged artists to employ whatever tool would most facilitate their brainstorming process. Some artists used tools with auto-colorization capabilities, such as Adobe Illustrator and Clip Studio Paint, while others selected tools without auto-colorization, such as FireAlpaca and PaintTool SAI. The artists, directors, and researcher communicated over Zoom.

Figure 12: Participants' sketches and their corresponding colored thumbnails created using our tool under the (a) Limited time condition and the (b) Unlimited time condition. The images are labeled with the participant number and design request completed (D3: "A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D4: "She's a determined and courageous healer, with a dark and eerie appearance."; D6: "She's a charming and fun-loving socialite with a vintage and classic look."). P10 used the face selector to design the characters for D3 and D4, while P7 used the face selector for D4.

Figure 13: Characters created by the participants from Fig. 12 using Photoshop.
The two leftmost illustrations were created under the (a) Limited time condition, while the two rightmost were created under the (b) Unlimited time condition. (D1: "Cheerful female character with long hair. She has a cool and flowing appearance."; D3: "A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D5: "She's a dedicated and knowledgeable scholar with a bright and sunny aesthetic.").

Tasks. Before the study, each director submitted two character designs. Two different artists were randomly assigned to each director to complete his/her designs. The directors introduced their designs to each artist in a brainstorming session. Moreover, the artists shared their screens in these sessions to show their sketches to the directors. The artists used our tool in one brainstorming session and their selected drawing tool in the other. Each session was terminated when the director was satisfied with the rough design the artist produced. The time to complete these sessions was recorded to compare our tool with other drawing tools.

After the brainstorming session ended, the artists submitted the turnaround sheets to their respective directors via e-mail. The artists iterated on the designs based on the directors' feedback. The study concluded when the directors approved the turnaround sheet that each of the two assigned artists submitted.

After the directors approved an artist's turnaround sheet, they were asked to complete a survey. The directors were asked to evaluate the following statements with a rating of 1 (strongly disagree) to 5 (strongly agree):

 * I am satisfied with the quality of design the artist produced.

 * The design matches my description.

 * Communication with the artist in the brainstorming session was easy.

Figure 14: The number of participants (out of 27) who selected the designs created under each tool condition.
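Likert ratings such as these are typically compared across within-subject conditions with the Friedman test (as with the thumbnail ratings reported earlier, $\chi^2(2) = 4.13$, $p = 0.13$). As an illustration only, not the analysis code used in the study, a minimal pure-Python computation of the Friedman statistic might look like the following; it uses average ranks for ties and omits the tie-correction term:

```python
def friedman_statistic(ratings):
    """Friedman chi-square statistic for a within-subjects design.

    ratings: one list per participant, containing one rating per condition.
    Ties receive average ranks; the tie-correction term is omitted for brevity.
    """
    n = len(ratings)          # participants
    k = len(ratings[0])       # conditions
    rank_sums = [0.0] * k
    for row in ratings:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            # extend j over a run of tied values
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            for m in range(i, j + 1):
                ranks[order[m]] = (i + j) / 2 + 1  # average 1-based rank
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)
```

When every participant ranks the conditions identically the statistic grows with the number of participants; with perfectly opposed rankings it is 0.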

The artists were asked to report the number of hours they spent working on the design, as well as to evaluate the following statement with a rating of 1 (strongly disagree) to 5 (strongly agree):

 * Communication with the director in the brainstorming session was easy.

Results. The time spent in the brainstorming sessions is shown in Table 2 as Session time. On average, the artists spent less time in the brainstorming session when using our tool ($855 \pm 95.02$ seconds) than when using other drawing tools ($1584.8 \pm 165.29$ seconds). Despite spending less time in the brainstorming sessions, the artists spent a similar amount of time working on the designs afterwards: $3.1 \pm 0.9$ hours after using our tool compared to $3.2 \pm 0.86$ hours after using other drawing tools.

Moreover, directors overall were satisfied with the quality of the turnaround sheets produced by the artists after using our tool (Md=5, strongly agree), akin to after using other drawing tools (Md=5, strongly agree). Directors overall also felt that the turnaround sheets produced after using our tool matched their description (Md=4, agree). The turnaround sheets created in our user study are included in the supplementary material.

Only one director was unsatisfied with the turnaround sheet the artist produced after using our tool in the brainstorming session. The director worked with Artist 1, giving a score of 2 (disagree) to both the quality of the design and its match to the director's description. In the brainstorming session, the artist produced the thumbnail shown in Fig. 15, which the director approved. In the e-mail correspondence after the brainstorming session, the director was indecisive about the character's specifications.
These miscommunications resulted in both the artist and director rating the communication in the brainstorming session lower than in any other session, with the artist giving the director a score of 1 (strongly disagree) and the director giving the artist a score of 3 (neutral), as can be seen in Table 2.

Overall, both artists and directors believed that communication with their counterparts went smoothly. The directors rated communication with the artists during the brainstorming session using our tool (Md=5, strongly agree) akin to using other drawing tools (Md=5, strongly agree). Artists reported slightly better communication with the directors after using our tool (Md=5, strongly agree) compared to other drawing tools (Md=4, agree).

Figure 15: The turnaround sheet created by Artist 1 after using our tool in the brainstorming session. The lower right corner shows the colored thumbnail produced by our tool and its input sketch.

§ 9 DISCUSSION

Limitations. Although participants believed that our tool allows them to draft a character much faster than Photoshop, they encountered some limitations in the framework. For example, although our tool was able to color multiple faces sketched by an artist within the same canvas, our GAN tends to style all faces with the same color scheme, limiting designs to only one character per canvas. Moreover, P3 suggested we "simply use the facial expressions, as opposed to the expressions plus some of the hair" in the face selector. Due to the face detection method we utilized, the selections we provided in the face selector included some portions of the character's hair. With a more sophisticated feature segmentation and classification model, the different facial features (e.g., hair, eyes) could be segmented and displayed separately in the selector.
Some participants expressed the need for improvements to the interface, such as a larger sketch canvas (P7), an undo button (P3, P4, P7, P11, P13, P14, P19, P27), and stroke sensitivity/customization (P3, P5, P9, P12, P14).

Our style selector is not fully customizable. For example, P3 wanted the ability "to have a way to set up a custom hair and skin color." Using an architecture similar to the one proposed by Karras et al. [24] to transfer the style could allow a more finely-grained customization of the colorization scheme.

While some participants were content with the variety of faces (P9, P11, P26) and styles (P14, P18, P27) our tool provides, due to limitations in our dataset, our tool does not provide artists with large variations in skin tone, nor does it provide a substantial number of non-female characters in the face selector. Our tool is able to color male characters, as illustrated in Figure 7, which we also provide in the face selector. Moreover, participants like P7 created non-female characters using our tool despite the usage of female pronouns in the design description (as shown in Fig. 12). However, they identified the need for further inclusivity, especially in the face selector (P7, P21). Nevertheless, 20 out of the 27 participants used the face selector in at least one of their final designs.

Participants overall praised the face selector for expediting the design process by providing a baseline for the character (P1, P2, P3, P6, P10, P26, P27). P27 found the face selector "made the app more useful in comparison to photoshop because you could start out with a template".
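As a sanity check on the session-time summaries reported in the Results (855 ± 95.02 and 1584.8 ± 165.29 seconds), the per-artist values in Table 2 reproduce the reported means exactly, and the ± terms correspond to standard errors of the mean (sample standard deviation divided by $\sqrt{n}$). A short script with the Table 2 values hard-coded:

```python
import math
from statistics import mean, stdev

# Brainstorming session times in seconds, per artist, from Table 2.
ours = [1200, 900, 666, 797, 712]
other = [1800, 1860, 1113, 1893, 1258]

def summarize(times):
    """Return (mean, standard error of the mean) for a list of durations."""
    return mean(times), stdev(times) / math.sqrt(len(times))

m_ours, sem_ours = summarize(ours)      # 855, ~95.02
m_other, sem_other = summarize(other)   # 1584.8, ~165.29
```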

| Measure | Artist 1 (Ours) | Artist 1 (Other) | Artist 2 (Ours) | Artist 2 (Other) | Artist 3 (Ours) | Artist 3 (Other) | Artist 4 (Ours) | Artist 4 (Other) | Artist 5 (Ours) | Artist 5 (Other) | Average (Ours) | Average (Other) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Session time (seconds) | 1,200 | 1,800 | 900 | 1,860 | 666 | 1,113 | 797 | 1,893 | 712 | 1,258 | 855 | 1,584.8 |
| Creation time (hours) | 3 | 3 | 1 | 1 | 1.5 | 2 | 6 | 6 | 4 | 4 | 3.1 | 3.2 |
| Quality | 2 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Description matching | 2 | 5 | 5 | 5 | 4 | 5 | 5 | 3 | 4 | 5 | 4 | 5 |
| Communication (director) | 3 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Communication (artist) | 1 | 5 | 5 | 2 | 5 | 4 | 5 | 3 | 5 | 5 | 5 | 4 |

Table 2: Results of comparing our tool to other drawing tools in our second user study. Our tool's results are shown in the Ours columns, while other tools' results are shown in the Other columns. Session time indicates the duration of the brainstorming session in seconds. Creation time indicates the number of hours the artists spent working on the design after the brainstorming session. Quality indicates how the directors rated their satisfaction with the quality of the turnaround sheet. Description matching indicates the directors' rating of how well the turnaround sheet matched their description. Communication (director) indicates the rating of communication during the brainstorming session that the director reported, while Communication (artist) indicates the rating the artist reported.

We received positive feedback from user study participants regarding our tool's applicability within the design pipeline (P1, P3, P6, P9, P10). Moreover, incorporating aspects of the NASA TLX (Task Load Index) [13] could also aid with further investigating our tool's usability.

Future Work. The current focus of our tool was to expedite the exploration of character faces. By expanding our dataset to include full-body character images, we may be able to train a network with an architecture similar to Esser et al.'s [8] to generate characters in various poses, body types, and clothing.
This may expand the capabilities of our tool to aid artists in creating the entire turnaround sheet in addition to exploring the character's head-shot. PaintsChainer [35] allows artists to provide color hints to the tool. We may be able to achieve a similar interaction by including hint channels in our GAN's input layer. The GAN can be trained by randomly sampling colored strokes from the character images in the edges-to-character dataset and using them as the inputs to the hint channels.

Expanding our dataset to more styles of characters may also allow us to cater to designers who have a style dissimilar to anime, as requested by P10 and P21. Fig. 16 shows a participant's design using Photoshop compared to our tool. Although our GAN could detect and generate portions of the character's hair and skin, the result is less than optimal when compared to the participant's design made with Photoshop. The GAN in this case fell short of differentiating some portions of the skin (e.g., the character's neck) from shading, and of determining the borders of the character's hair precisely.

Participants believed that our tool was effective in creating images which can be used in the character exploration phase of the design process (i.e., thumbnails) but not as finished pieces (P1, P12). Expanding its capabilities to generate high-resolution images (as suggested by P12), textures, lighting, and shading may broaden its applicability from a simple brainstorming tool to a standalone design tool. We may be able to achieve a higher-resolution output by modifying our GAN's architecture and training it with high-resolution images, or by using the method proposed by Karras et al. [23] to train our GAN with low-resolution images. Finally, removing the generated background may produce more polished finished pieces.
We may be able to remove the generated backgrounds in post-processing by training a semantic segmentation network [43] to label backgrounds, which can be subtracted thereafter. Alternatively, we may be able to prevent the GAN from generating backgrounds by subtracting the backgrounds from our dataset and training the GAN with the background-removed images.

§ 10 CONCLUSION

In this paper, we trained a Generative Adversarial Network (GAN) to automatically color anime character sketches. Using the GAN, we created a tool that aids artists in the early stages of the character design process. We evaluated the efficacy of our tool in comparison to using Photoshop by conducting a user study, which showed our tool's potential in speeding up the character exploration process while maintaining quality. Finally, we conducted a user study that simulates the director and artist interaction in the design pipeline. We concluded that our tool facilitated character design brainstorming without sacrificing the quality of the designs.

Figure 16: A character design which strayed from our dataset's anime style. (a) The participant created the design under the Character Design Tool and Limited time conditions without using the face selector. (b) The participant created the design under the Photoshop and Unlimited time conditions. (D3: "A determined and patient girl with a simple and practical look.
Her greatest desire is ultimate knowledge."; D2: "She's a cold, lone wolf with a sense of humor")
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ibaCpFUWVb9/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ibaCpFUWVb9/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..00c5c03491e5956b7ef2dd2d17c592afc3328ebb
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ibaCpFUWVb9/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,301 @@

# MeetingMate: an Ambient Interface for Improved Meeting Effectiveness and Corporate Knowledge Sharing

![01963e96-4198-7d64-9c0b-55b53a901671_0_301_364_1192_586_0.jpg](images/01963e96-4198-7d64-9c0b-55b53a901671_0_301_364_1192_586_0.jpg)

Figure 1. The MeetingMate system. The content being presented (A) is captured and interpreted, then relevant corporate knowledge information is displayed on the devices of meeting attendees.

## ABSTRACT

We present MeetingMate, a system for improving meeting effectiveness and knowledge transfer within an organization. The system utilizes already existing content produced within the organization (slide decks, meeting information, HR databases, etc.), from which it generates and presents contextually relevant information in real-time to meeting participants through an ambient interface. Besides providing details about projects and content within the company, the system builds an employee relationship graph that supports increasing a user's "metaknowledge" of who knows what and who knows whom within the organization.

Submitted to GI 2021

## INTRODUCTION

The institutional knowledge of a corporation is an important resource [29], and for a corporation to be successful it is necessary for this knowledge to be shared and transferred from those who have it to those who need it [9]. However, large workforces, distributed locations, and demanding schedules act as barriers to successful knowledge transfer. Companies often employ specific activities designed to improve knowledge sharing, such as email newsletters, wiki pages, and all-hands presentations; however, these require employees to do additional work beyond their normal job functions for some unknown and uncertain future benefit.

Besides improving knowledge and awareness of what is going on within a company, it is valuable to improve knowledge of "who knows what" and "who knows whom" within an organization. Such knowledge is referred to as metaknowledge [24], and increases in metaknowledge have been linked to improved work performance [25], improved ability to create new innovations by combining existing ideas [13], and reduced duplication of work [10].

In knowledge-based work environments it is common for workers to spend between 20% and 80% of their time in meetings [17, 20, 28, 32], and while meetings are considered important [7, 16], they are also often deemed by their attendees to be inefficient and ineffective [18, 27].

This paper describes MeetingMate, a system for improving meeting effectiveness and knowledge transfer in an organization through an ambient interface. The MeetingMate system utilizes already existing content produced within the organization as source material (slide decks, meeting information, HR databases, etc.), from which it generates and presents contextually relevant information in real-time to meeting participants.
This work contributes a novel technique for extracting presented meeting content directly from an HDMI stream, and is unique in its goal of not only presenting corporate "knowledge" about topics within the company, but also improving employees' "metaknowledge" about who knows what and who knows whom within the organization.

## RELATED WORK

## Meeting Assistance

The development of technology to support and enhance meetings has long been a popular topic of research [34]. Rienks et al. [26] summarize much of the work on "pro-active" meeting assistants, and divide systems into categories based on when they provide assistance: before the meeting, during the meeting, or after the meeting.

Meeting assistants which record the audio and/or visual content of meetings for future viewing are often referred to as "Smart Meeting Systems", and include projects such as the CALO Meeting Assistant System [33], which distributes the task of meeting capture, annotation, and audio transcription, and work by Geyer et al. [8] exploring the idea of allowing meeting participants to create meaningful indices into the meeting timeline while the meeting is occurring to improve later navigation. For a more thorough listing of work on "after the meeting" assistance, see Yu and Nakamura [37].

Of the systems designed for in-meeting support, many make use of an audio channel. SmartMic [36] makes use of smartphones to capture the audio of a meeting, and the AMIDA system [22] uses microphones in an instrumented meeting room to listen for key words in the conversation of a meeting and pull up or suggest contextually relevant documents. The Connector [5] uses the audio and video channels of a smart meeting room to determine if someone is available to receive a message, and provides mechanisms to deliver the message using the meeting room facilities.
Our system is similar in some ways to AMIDA in that both bring up relevant content based on meeting context; however, while AMIDA uses the audio of a meeting, our system derives context from the material being sent to the meeting room's projector. We are unaware of any prior work which extracts the visual content being presented in a meeting as context for a real-time meeting assistant.

## Corporate Knowledge

Some consider knowledge to be a company's "greatest asset" [31]. Lee et al. developed the KMPI metric [3] to measure how well an organization performs in the area of Knowledge Management along five dimensions: knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. Our system aims primarily to improve knowledge sharing and knowledge utilization.

For making better use of existing corporate knowledge resources, Zanker and Gordea [38] created a recommendation engine to help when manually searching through internal documents. Aastrand et al. [1] propose using open data to bootstrap the process of creating a hierarchical tagging structure for internal content, while Chen [4] looks at the process of text-mining corporate documents to extract useful information. When these projects consider searching through and mining corporate data, they are considering "purposefully" created artifacts such as documents and web pages.

Our work differs in that while we do mine purposefully created materials such as slide decks and project pages for data, we also make substantial use of "ancillary" corporate data such as meeting room records, mailing lists, and HR databases to generate a more complete picture of the corporate network.

## Ambient Information Systems

Ambient interfaces [2, 15, 21, 35] can be characterized as systems which support the monitoring of noncritical information with the intent of not distracting or burdening the user.
Ambient displays have been studied for many uses, including software learning [14], social awareness [6], and office work [12].

Pousman and Stasko [23] outline four dimensions in the design of an ambient display system: information capacity, notification level, representational fidelity, and aesthetic emphasis. In our system we aim for high information capacity and representational fidelity, while keeping distractions to a minimum with a low notification level.

## CORPORATE KNOWLEDGE CHALLENGES

This work was developed at [COMPANY NAME] using [COMPANY NAME] internal data. [COMPANY NAME] is a multinational software company of ~11,000 employees. The workforce is widely distributed across many distinct offices, 17 of which house more than 150 employees.

The company faces many of the challenges of corporate knowledge management [19], and results from the yearly employee survey suggest employees generally wish they had more awareness of what is going on in other parts of the company. [COMPANY NAME] has started a number of initiatives designed to improve awareness and knowledge sharing throughout the company, such as wiki pages, project groups, mailing lists, and all-hands presentations. However, since these all require employees to do some additional work beyond their normal job duties without the guarantee of a particular future benefit, these initiatives have not had the desired effect on corporate knowledge sharing.

The goal of our work is to take advantage of the vast amount of material already being produced, and information naturally available, within an organization to improve efficiency and awareness. A primary design objective for the system is that there is no additional cost for someone to use the system; that is, it should be just as easy to use the system as it is to not use the system.

## MEETINGMATE

There are many different times and activities throughout the day where employee corporate knowledge could be improved. We have chosen to focus on times when employees are attending meetings. Since employees are involved in a large number of meetings [17, 20, 28, 32], a system designed to augment the experience of attending meetings would have a broad reach within the organization, and since those meetings are often considered ineffective [18, 27], a meeting augmentation system could have the dual benefit of increasing overall corporate knowledge while simultaneously improving the effectiveness of the meeting.

To this end, we have created MeetingMate, a system consisting of three main components: a Data Collector, a Presentation Capture System, and an Ambient Assistant (Figure 2).

![01963e96-4198-7d64-9c0b-55b53a901671_2_175_408_671_515_0.jpg](images/01963e96-4198-7d64-9c0b-55b53a901671_2_175_408_671_515_0.jpg)

Figure 2. Architecture of the MeetingMate system. (Components in light grey are part of the existing meeting room infrastructure.)

At a high level, the MeetingMate system uses the visual content presented at a meeting as the "search query" for a corporate knowledge database, and presents contextually relevant information to the meeting attendees through an ambiently updating interface. We next describe the three main components of the MeetingMate system in more detail.

## Data Sources/Data Collection

This section describes a number of existing data sources within the organization, what information is available within these sources, and how the sources are processed to extract their content. For this work we only considered data which was "publicly" available within the company, that is, data which everyone within the company has access to.
By only using "publicly" available data, we minimize the risk that someone using MeetingMate will see privileged or confidential information to which they should not have access.

## S1. Slide Decks

Within the company, there are two main locations where documents are stored: a Microsoft Sharepoint [39] server, and a JFile [name changed for anonymity] project management system. Between the two locations, there are 13,688 PowerPoint (PPT) slide decks dating back to 1997, with 5,343 presentations created between 2014 and 2016. The decks cover a wide range of topics and have been submitted by authors in all divisions of the company.

Processing the slide decks involves two main steps: collecting them from the servers, and analyzing the slides to extract relevant data. For the documents hosted on the Sharepoint server, the Sharepoint API [40] was used to search for and download all files of either *.ppt or *.pptx file type. The JFile server does not have a useful API for this purpose, so a web-scraper was written in Python which iterates over each project and crawls through each sub-folder in the documents tree, downloading *.ppt and *.pptx files. For both the Sharepoint and JFile based slide decks, high-level metadata such as the creation date, author, and file location are captured during the collection process.

Once the PPTs are downloaded, a data extraction process begins. A C# program using the Office.Interop.PowerPoint libraries saves images of each slide as .png files in multiple resolutions, and the text on each slide is extracted and saved to a database.

Downloading the full collection of 13,688 slide decks and extracting the content from the 310,554 slides takes approximately 48 hours on a desktop computer. On a daily basis the Sharepoint and JFile systems are searched and crawled, respectively, and newly added PPTs are downloaded and processed. This daily process takes approximately one hour.

## S2. Meeting Information

Since the system is restricted to internally public data, we cannot access the calendars of individual employees for meeting records. However, the majority of meetings take place in meeting rooms, which have shared calendars. As the company uses a Microsoft Outlook mail and calendar system, a C# program using the Office.Interop.Outlook libraries was written to first collect a list of all meeting rooms, and then step through each of the past meetings which have occurred in each room. For each meeting we record the meeting's name (which often indicates the topic of the meeting), location, length, and list of attendees.

In total there were 719 meeting rooms which held a total of 355,233 meetings between 2014 and 2016. Collection of the entire data set took approximately 36 hours. The process of accessing each of the individual calendars is relatively time-consuming, taking ~5 hours for incremental daily updates.

## S3. Code Repositories

The source code developed by the company is primarily managed through an internal GitHub Enterprise Server. Using a Python script with the GitHub API [41], data for the 6,251 internal git repositories are collected, including: repository name, description, contributors, languages used, and bytes of code. 1,131 employees are listed as contributors to at least one git repository.

Data collection for the full set of repositories requires ~3.5 hours. Incremental updates are not easily captured using the API, so the full set of repository data is collected each day.

## S4. JFile Project Pages

The JFile project management system is organized into individual "projects" which represent specific working or interest groups within the company. The system houses 3,405 groups, with a median member count of 8.
The same crawler used for collecting the PPTs from JFile is used to collect the project information, such as the project name, project description, and a list of group members.

## S5. Individual Human Resources Data

Each of the 11,615 employees (contingent and full-time) at the company has an entry in the internal employee search system. This data is also available in spreadsheet form, with 42 columns of information for each employee. Among the most relevant are name, email address, work location, job title, and manager's name. From the employee name and manager's name fields we are able to construct the formal organizational structure of the company. Headshot photos (which are available for 58% of employees) follow a consistent naming pattern and location, and are easily downloaded and associated with the appropriate record.

Updating the individual HR data entails copying the daily spreadsheet from the HR system and running a script to look for and download any new, or updated, headshots. This process takes approximately 30 minutes.

## S6. Email Group Memberships

To simplify sending emails and meeting requests to collections of people, the company makes use of email groups. There are a total of 15,054 email groups stored on the Microsoft Outlook mail server, with between 1 and 4,316 members each and a median member count of 6.

The email group data (group name and membership list) is again collected with a C# program using the Office.Interop.Outlook library, and the collection completes in approximately 30 minutes.

## S7. Corporate Definitions

Stored on the company intranet is an employee-maintained database of acronyms and terms frequently used within the organization. 322 acronyms and 776 terms are defined in this database, which is downloaded on a daily basis.
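These definitions are what the Ambient Assistant later matches against the recognized slide text to produce its Acronyms and Definitions cards. A minimal sketch of such a lookup, with hypothetical dictionary entries and simplified tokenization and case handling, might look like:

```python
import re

def find_definition_cards(slide_text, acronyms, terms):
    """Scan recognized slide text for known acronyms and defined terms.

    acronyms/terms map an entry to its expansion/definition; exact-token,
    case-sensitive matching is a simplifying assumption of this sketch.
    """
    cards = []
    tokens = set(re.findall(r"[A-Za-z][A-Za-z0-9]*", slide_text))
    for token in tokens:
        if token in acronyms:
            cards.append((token, acronyms[token]))
        elif token in terms:
            cards.append((token, terms[token]))
    return cards
```

For example, with a hypothetical entry `{"KPI": "Key Performance Indicator"}`, any slide mentioning KPI would yield an acronym-expansion card.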

## Live Presentation Capture

In order to supplement the presentation material with relevant information, the MeetingMate system needs to be aware of what is being presented. One possible way to do this would be to write an extension for PowerPoint which uses the Interop.PowerPoint APIs to extract the data being presented and transfer that information to the MeetingMate server. However, this approach has a number of shortcomings. First, it would only work for presentation material from Microsoft PowerPoint. Second, and more significantly, it would require presenters to do the additional work of installing a plug-in on the machine from which they are presenting. Since a primary design concern of MeetingMate is to not require additional set-up work for people to make use of the system, this approach is undesirable.

Our approach is to instead use an HDMI capture and pass-through device (designed for live streaming video games) to capture a copy of exactly what is being displayed on the presentation screen. In this way, the presenter performs the exact same steps to present content as they usually would (plugging a video cable into their laptop), but rather than the cable going directly to the projector, it goes to the HDMI capture device, which passes the signal on to the projector (Figure 2).

The content saved by the HDMI capture device is an image of what is currently being sent to the presentation screen. This image needs to be processed to find any text being displayed.

The Windows computer connected to the capture device uploads screenshots to the Project Oxford OCR [42] service for text extraction. The process of uploading a screenshot to the OCR server and receiving the extracted text takes an average of 2.0 seconds.
Images are only sent to the OCR service when the projected slide has changed; this is detected by comparing the most recent image with the previously uploaded one, and only uploading the new image if at least 15% of the pixels have changed. The extracted text is sent to the Ambient Assistant server (Figure 2).

Using OCR for the text extraction not only allows text within images to be recognized, but also enables content from any source (PDF, video, etc.) to be analyzed. This makes MeetingMate completely agnostic to the format of the presented material.

## Ambient Assistant

The final piece of the MeetingMate system is the Ambient Assistant server, which receives the extracted text from the content being presented, finds relevant corporate knowledge content, and serves the results as a responsive webpage. The server is written in Python using the Flask framework and a Tornado server, running on a Windows Server 2012 instance.

The goal of the Ambient Assistant webpage is to be as unobtrusive and non-disruptive as possible, while still providing useful information which will enhance the audience's understanding of the presentation.

The Ambient Assistant displays relevant knowledge content on the served page as a series of 'cards' (Figure 1). The individual cards are designed to present the most relevant information at a glance, without requiring input, or too much attention, from the user. As new cards become available, they slowly fade in at the bottom of the screen (over a period of 5 seconds) while the page automatically scrolls to make the most recent cards visible. This webpage can be viewed on a number of devices; we have explored several, including projecting the Ambient Assistant onto a secondary screen beside the main projection screen, but believe the most useful configuration is for individuals to view the Ambient Assistant on a personal device such as a phone, tablet, or laptop.
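The slide-change check described under Live Presentation Capture (upload only when at least 15% of pixels differ from the previously uploaded frame) can be sketched in a few lines. This simplified version operates on flat grayscale pixel sequences, and the per-pixel noise tolerance is our assumption rather than a value from the system description:

```python
CHANGE_FRACTION = 0.15  # fraction of pixels that must differ (from the system description)

def slide_changed(prev_pixels, curr_pixels, per_pixel_tol=10):
    """Return True when at least 15% of the pixels differ between two frames.

    prev_pixels/curr_pixels: equal-length sequences of grayscale values (0-255).
    per_pixel_tol absorbs capture noise; its value is an assumption, not from the paper.
    """
    changed = sum(1 for a, b in zip(prev_pixels, curr_pixels) if abs(a - b) > per_pixel_tol)
    return changed / len(prev_pixels) >= CHANGE_FRACTION
```

Frames that pass this test are uploaded to the OCR service; all others are dropped, keeping OCR traffic proportional to slide changes rather than to the capture frame rate.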
+ +The following sections describe the types of cards which are available, when they are displayed, and what information they contain. + +## Acronyms and Definitions + +Corporate communications are often riddled with acronyms and jargon, making the text unnecessarily difficult to understand [30, 43]. As an example, in the 13,688 slide decks collected from the company servers, there are 132,588 instances of acronyms across the 300,000 slides. However, only 1.7% of those acronyms are defined within the slide deck where they are used. + +The first two card types are internal technology definitions and acronym expansions. The presentation text is searched for any of the collected corporate definitions or acronyms (S7), and if they are found, a card is shown with the acronym expanded (if applicable) and the term defined (Figure 3). + +![01963e96-4198-7d64-9c0b-55b53a901671_4_161_521_703_227_0.jpg](images/01963e96-4198-7d64-9c0b-55b53a901671_4_161_521_703_227_0.jpg) + +Figure 3. Sample internal technology definition (left) and acronym expansion (right) cards. + +## Employee Information + +The employee information card is displayed whenever an employee's name or email address is found on a slide (Figure 4). The lists of employee names and email addresses are derived from the human resources data (S5). Early testing revealed a common occurrence where the name an employee goes by is different from their 'official' name in the HR database (e.g., Jon Smith vs. Jonathan Smith). To overcome this, a list of common name alternatives was used to generate a list of possible names for each person (e.g., Jonathan Smith could be either "Jon Smith" or "Jonathan Smith"). + +![01963e96-4198-7d64-9c0b-55b53a901671_4_302_1236_416_372_0.jpg](images/01963e96-4198-7d64-9c0b-55b53a901671_4_302_1236_416_372_0.jpg) + +Figure 4. Sample employee information card.
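The name-alternative expansion described above can be sketched as follows; the nickname table here is a tiny invented sample, not the actual list used by the system:

```python
# Hypothetical sample of the common-name-alternatives table.
NICKNAMES = {
    "jonathan": ["jon", "john"],
    "william": ["will", "bill", "billy"],
    "robert": ["rob", "bob", "bobby"],
}


def name_variants(full_name):
    """Generate the plausible display names for one HR record (lowercased)."""
    first, _, rest = full_name.partition(" ")
    variants = {full_name.lower()}
    for alt in NICKNAMES.get(first.lower(), []):
        variants.add(f"{alt} {rest}".lower())
    return variants
```

A slide's recognized text can then be matched against the union of every employee's variants rather than official HR names alone.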
+ +Besides the employee's name, the card also displays the employee's headshot, job title, and work location. Additionally, lists of the projects (S4) and code repositories (S3) the employee is actively contributing to are included. Finally, the most closely related employees according to the computed employee network graph (discussed below) are presented as a list of "Frequent Collaborators". Combined, these lists give an overview of both what and who this employee knows. + +## Projects/Code Repositories + +The next two card types are project cards and repository cards (Figure 5), which are derived from the JFile (S4) and GitHub (S3) data sources. For each, the name and description of the project/repository is displayed, along with the names of members or contributors; in the case of repository cards, the list of programming languages used is also shown. + +![01963e96-4198-7d64-9c0b-55b53a901671_4_929_298_707_256_0.jpg](images/01963e96-4198-7d64-9c0b-55b53a901671_4_929_298_707_256_0.jpg) + +Figure 5. Sample project (left) and repository (right) cards. + +The project and repository cards are shown whenever the exact project or repository name is found on a slide. Additionally, these cards are displayed if the text of the slide contains many of the same keywords as the description of a project or repository. This is determined by transforming the descriptions of the projects and repositories into tf-idf vectors [11], computing the cosine similarity between the descriptions and the recognized text, and displaying the projects/repositories with a cosine similarity > 0.85. + +## Contextually Similar Slides + +The final card type displays contextually relevant slides (Figure 6) from the collection of over 300,000 slides gathered from the internal slide deck repositories (S1).
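Both this card type and the project/repository cards above rely on the same tf-idf cosine-similarity test; a minimal pure-Python sketch (assuming whitespace tokenization and smoothed idf — the paper does not specify its implementation) might look like:

```python
import math
from collections import Counter


def tfidf_vectors(docs):
    """Build l2-normalized tf-idf vectors (smoothed idf) for a list of token lists."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vec = {t: tf[t] * idf[t] for t in tf}
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        vectors.append({t: w / norm for t, w in vec.items()})
    return vectors


def cosine(a, b):
    return sum(w * b.get(t, 0.0) for t, w in a.items())


def matching_descriptions(slide_text, descriptions, threshold=0.85):
    """Indices of descriptions whose tf-idf similarity to the slide text exceeds the threshold."""
    docs = [d.lower().split() for d in descriptions] + [slide_text.lower().split()]
    vecs = tfidf_vectors(docs)
    query, corpus = vecs[-1], vecs[:-1]
    return [i for i, v in enumerate(corpus) if cosine(query, v) > threshold]
```

The same `matching_descriptions` test, applied to slide text instead of descriptions, drives the contextually similar slide cards described next.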
Besides an image of the contextually relevant slide itself, the card also presents the author of the slide deck, when it was created, and a list of employees who are mentioned somewhere in the deck. Clicking on the "more" icon at the top right of the card presents options to: generate an email containing a link to the contextually relevant presentation, open the presentation directly, or dismiss this card and have it no longer be suggested. The other card types each have similar menus. + +![01963e96-4198-7d64-9c0b-55b53a901671_4_944_1395_694_393_0.jpg](images/01963e96-4198-7d64-9c0b-55b53a901671_4_944_1395_694_393_0.jpg) + +Figure 6. Sample contextually similar slide card (left), and the associated context menu (right). + +Analogous to the process used to find contextually similar projects and repositories, the text for each of the slides in the slide deck repository is converted to tf-idf vectors and compared to the captured text. To increase exposure to a wider range of content, if there are any slides with a cosine similarity > 0.85, one slide is chosen at random and displayed. + +## Personal Connections, Employee Relationships + +The information on the cards above is tailored to the content being presented, but does not change based on who is viewing the Ambient Assistant. By taking into account who is using the Ambient Assistant ("Clarence Mcevoy" in Figure 7), an additional "Personal Connection" section can be inserted into the cards which highlights a connection the logged-in user has to a person or project (Figure 7). + +![01963e96-4198-7d64-9c0b-55b53a901671_5_161_461_710_335_0.jpg](images/01963e96-4198-7d64-9c0b-55b53a901671_5_161_461_710_335_0.jpg) + +Figure 7. Sample cards with additional personal connection information. + +This personal connection field is populated by looking at an employee network graph computed using the Meeting Information (S2), Code Repository (S3), JFile Project Pages (S4), HR Data (S5), and Email Lists (S6) data sets.
For each data set, an adjacency matrix is computed based on the strength of the inter-personal connection for each pair of employees. + +For example, using the Meeting Information data set, we look at each meeting both employees were a part of, and calculate the weight of the edge between two employees ${e}_{1}$ and ${e}_{2}$ with the following formula: + +$$ +\text{weight}_{e_1, e_2} = \sum_{m \in \text{meetings}(e_1, e_2)} \frac{\text{length}_m}{\#\text{attendees}_m} +$$ + +where $\text{length}_m$ is the length of meeting $m$, and $\#\text{attendees}_m$ is the number of people in the meeting. This means longer meetings, and meetings with fewer attendees, are weighted more heavily. Once all edge weights are computed, they are globally ranked from strongest to weakest, and re-mapped to between 0 and 1. Similar calculations are carried out for the repository, project page, and email list data sets. For the employee data, edges are created between employees and their managers. + +The edges from the individual data set adjacency matrices are then combined into one overall adjacency matrix by summing the weights of the individual edges to create a composite "weight" metric incorporating all the different measures of adjacency. Then, to find the strongest "personal connection" between any pair of employees, we look for the heaviest-weight shortest path. That is, we take the set of all shortest paths between the two employee nodes, and then select the path in which the edge weights are the highest. + +The heaviest-weight shortest path is then converted to a natural language sentence by looking at the strongest components of each edge. For example, "You are managed by Lester Carr, who frequently meets with William Topolinski", or "You are in some project groups (including UI Unification) with Alberta Santiago, who reports to Norma Stenn".
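The meeting-based edge weight and the global rank re-mapping can be sketched as follows; the tuple layout of `meetings` is an illustrative assumption (the real data comes from the Outlook room calendars):

```python
from collections import defaultdict
from itertools import combinations


def meeting_edge_weights(meetings):
    """meetings: iterable of (attendee_list, length_minutes) tuples.
    Returns {frozenset({e1, e2}): weight} per the formula above."""
    weights = defaultdict(float)
    for attendees, length in meetings:
        people = sorted(set(attendees))
        for e1, e2 in combinations(people, 2):
            # Longer meetings and meetings with fewer attendees contribute more.
            weights[frozenset((e1, e2))] += length / len(people)
    return dict(weights)


def rank_normalize(weights):
    """Re-map edge weights to [0, 1] by global rank (strongest edge -> 1.0)."""
    edges = sorted(weights, key=weights.get)
    if len(edges) <= 1:
        return {e: 1.0 for e in edges}
    return {e: i / (len(edges) - 1) for i, e in enumerate(edges)}
```

The per-data-set matrices produced this way are then summed edge-by-edge into the composite adjacency matrix described above.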
The overall adjacency matrix forms a single connected component, so a path can be computed between any two employees. However, only paths with at most one intermediate node are displayed, to emphasize "strong" connections. + +## FEEDBACK AND DISCUSSION + +To preliminarily test the design and usefulness of MeetingMate, the system was deployed and tested over a series of weekly group meetings. In several cases the system provided genuinely unknown information to the audience (in one case, while a research paper was being shared, an employee card for one of the authors appeared; prior to that, attendees at the meeting did not know the author had been hired). This deployment also revealed some ways the system produced spurious or unnecessary results. For example, the internal dictionary contains a definition for "User" as "someone who uses our software", and this definition appeared whenever the (very common) word 'user' appeared on a slide. While it is easy for users to mark content to "never be shown again", it would be useful to reduce the number of low-utility results at the system level. In the future, more advanced language modelling could look at the surrounding context of the slides, or recent slides, to better predict which results might be most relevant. It would also be interesting to explore recognizing content other than text, such as headshots of particular individuals, and using those as contextual cues. + +The next step is to deploy the system in a meeting room continuously for several months to collect more feedback about how the system performs and is received. This wider deployment will bring up some interesting issues. For meetings concerning highly sensitive topics, we plan to have a very clear, physical switch for presenters to "disable" MeetingMate if they are uncomfortable with the content of their presentation being captured. It will be interesting to see how frequently that functionality is utilized.
+ +While we are limiting ourselves to "publicly" available internal data, there are still cases where information which is technically visible to all employees probably shouldn't be. For example, a meeting name could be "Discuss firing John Doe". The creator of such a meeting is probably unaware that the meeting name is publicly viewable. Practically, the system will need a way to remove these sorts of sensitive entries, but hopefully the deployment of this system will make people more aware of what is visible to other employees. It would also be interesting to make use of non-public information in the system to allow for more personalized recommendations. + +Overall, we believe MeetingMate serves as a valuable solution for improving meeting effectiveness and knowledge sharing within an organization, and will serve as an example for further work in this area. + +REFERENCES + +1. Aastrand, G., Celebi, R. and Sauermann, L. 2010. Using Linked Open Data to Bootstrap Corporate Knowledge Management in the OrganiK Project. Proceedings of the 6th International Conference on Semantic Systems (I-SEMANTICS '10), ACM, 18:1-18:8. http://doi.org/10.1145/1839707.1839730 + +2. Cadiz, J.J., Venolia, G., Jancke, G. and Gupta, A. 2002. Designing and Deploying an Information Awareness Interface. Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work (CSCW '02), ACM, 314-323. http://doi.org/10.1145/587078.587122 + +3. Chang Lee, K., Lee, S. and Kang, I.W. 2005. KMPI: measuring knowledge management performance. Information & Management 42, 3: 469-482. http://doi.org/10.1016/j.im.2004.02.003 + +4. Chen, H. 2001. Knowledge Management Systems: A Text Mining Perspective. Knowledge Computing Corporation. Retrieved from http://arizona.openrepository.com/arizona/handle/10150/106481 + +5. Danninger, M., Flaherty, G., Bernardin, K., Ekenel, H.K., Köhler, T., Malkin, R., Stiefelhagen, R. and Waibel, A. 2005. The Connector: Facilitating Context-aware Communication.
Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI '05), ACM, 69-75. http://doi.org/10.1145/1088463.1088478 + +6. Dourish, P. and Bly, S. 1992. Portholes: Supporting Awareness in a Distributed Work Group. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92), ACM, 541-547. http://doi.org/10.1145/142750.142982 + +7. Doyle, M. 1993. How to Make Meetings Work! Berkley, New York. + +8. Geyer, W., Richter, H. and Abowd, G.D. 2005. Towards a Smarter Meeting Record-Capture and Access of Meetings Revisited. Multimedia Tools Appl. 27, 3: 393-410. http://doi.org/10.1007/s11042-005-3815-0 + +9. Hinds, P.J., Patterson, M. and Pfeffer, J. 2001. Bothered by abstraction: the effect of expertise on knowledge transfer and subsequent novice performance. The Journal of Applied Psychology 86, 6: 1232-1243. + +10. Jackson, P. and Klobas, J. 2008. Transactive memory systems in organizations: Implications for knowledge directories. Decision Support Systems 44, 2: 409-424. http://doi.org/10.1016/j.dss.2007.05.001 + +11. Jones, K.S. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation 28: 11-21. + +12. MacIntyre, B., Mynatt, E.D., Voida, S., Hansen, K.M., Tullio, J. and Corso, G.M. 2001. Support for Multitasking and Background Awareness Using Interactive Peripheral Displays. Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology (UIST '01), ACM, 41-50. http://doi.org/10.1145/502348.502355 + +13. Majchrzak, A., Cooper, L.P. and Neece, O.E. 2004. Knowledge Reuse for Innovation. Management Science 50, 2: 174-188. http://doi.org/10.1287/mnsc.1030.0116 + +14. Matejka, J., Grossman, T. and Fitzmaurice, G. 2011. Ambient Help. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11), ACM, 2751-2760. http://doi.org/10.1145/1978942.1979349 + +15. Matthews, T. 2006.
Designing and Evaluating Glanceable Peripheral Displays. Proceedings of the 6th Conference on Designing Interactive Systems (DIS '06), ACM, 343-345. http://doi.org/10.1145/1142405.1142457 + +16. Monge, P.R., McSween, C. and Wyer, J. 1989. A profile of meetings in corporate America: Results of the 3M meeting effectiveness study. Annenberg School of Communications, University of Southern California. + +17. Mosvick, R.K. and Nelson, R.B. 1986. We've Got to Start Meeting Like This!: A Guide to Successful Business Meeting Management. Scott Foresman Professional, Glenview, Ill. + +18. Murray, G. 2014. Learning How Productive and Unproductive Meetings Differ. In Advances in Artificial Intelligence, Marina Sokolova and Peter van Beek (eds.). Springer International Publishing, 191-202. Retrieved April 12, 2016 from http://link.springer.com/chapter/10.1007/978-3-319-06483-3_17 + +19. Otto Kühn, A.A. 1997. Corporate Memories for Knowledge Management in Industrial Practice: Prospects and Challenges. J. UCS 3, 8: 929-954. http://doi.org/10.1007/978-3-662-03723-2_9 + +20. Panko, R.R. 1992. Managerial Communication Patterns. Journal of Organizational Computing 2, 1: 95-122. http://doi.org/10.1080/10919399209540176 + +21. Plaue, C. and Stasko, J. 2007. Animation in a Peripheral Display: Distraction, Appeal, and Information Conveyance in Varying Display Configurations. Proceedings of Graphics Interface 2007 (GI '07), ACM, 135-142. http://doi.org/10.1145/1268517.1268541 + +22. Popescu-Belis, A., Boertjes, E., Kilgour, J., Poller, P., Castronovo, S., Wilson, T., Jaimes, A. and Carletta, J. 2008. The AMIDA automatic content linking device: Just-in-time document retrieval in meetings. 5237 LNCS: 272-283. http://doi.org/10.1007/978-3-540-85853-9-25 + +23. Pousman, Z. and Stasko, J. 2006. A Taxonomy of Ambient Information Systems: Four Patterns of Design. Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '06), ACM, 67-74.
http://doi.org/10.1145/1133265.1133277 + +24. Ren, Y. and Argote, L. 2011. Transactive Memory Systems 1985-2010: An Integrative Framework of Key Dimensions, Antecedents, and Consequences. The Academy of Management Annals 5, 1: 189-229. http://doi.org/10.1080/19416520.2011.590300 + +25. Ren, Y., Carley, K.M. and Argote, L. 2006. The Contingent Effects of Transactive Memory: When Is It More Beneficial to Know What Others Know? Management Science 52, 5: 671-682. http://doi.org/10.1287/mnsc.1050.0496 + +26. Rienks, R., Nijholt, A. and Barthelmess, P. 2007. Proactive meeting assistants: attention please! AI & Society 23, 2: 213-231. http://doi.org/10.1007/s00146-007-0135-0 + +27. Rogelberg, S.G., Scott, C. and Kello, J. 2007. The science and fiction of meetings. MIT Sloan Management Review 48, 2: 18. + +28. Romano Jr, N. and Nunamaker Jr, J. 2001. Meeting Analysis: Findings from Research and Practice. Proceedings of the 34th Annual Hawaii International Conference on System Sciences (HICSS-34), Volume 1 (HICSS '01), IEEE Computer Society, 1072-. Retrieved April 12, 2016 from http://dl.acm.org/citation.cfm?id=820557.820581 + +29. Sheng Wang, R.A.N. 2010. Knowledge Sharing: A Review and Directions for Future Research. Human Resource Management Review 20, 2: 115-131. http://doi.org/10.1016/j.hrmr.2009.10.001 + +30. Simon, P. Message Not Received. Retrieved April 11, 2016 from https://www.goodreads.com/work/best_book/42761341-message-not-received-why-business-communication-is-broken-and-how-to-fi + +31. Offsey, S. 1997. Knowledge Management: Linking People to Knowledge for Bottom Line Results. Journal of Knowledge Management 1, 2: 113-122. http://doi.org/10.1108/EUM0000000004586 + +32. 3M Meeting Management Team and Drew, J. 1994. Mastering Meetings: Discovering the Hidden Potential of Effective Business Meetings. McGraw-Hill, New York. + +33.
Tur, G., Stolcke, A., Voss, L., Peters, S., Hakkani-Tur, D., Dowding, J., Favre, B., Fernandez, R., Frampton, M., Frandsen, M., Frederickson, C., Graciarena, M., Kintzing, D., Leveque, K., Mason, S., Niekrasz, J., Purver, M., Riedhammer, K., Shriberg, E., Tien, J., Vergyri, D. and Yang, F. 2010. The CALO Meeting Assistant System. IEEE Transactions on Audio, Speech, and Language Processing 18, 6: 1601-1611. http://doi.org/10.1109/TASL.2009.2038810 + +34. Turoff, M. and Hiltz, S.R. 1977. Telecommunications: Meeting through your computer: Information exchange and engineering decision-making are made easy through computer-assisted conferencing. IEEE Spectrum 14, 5: 58-64. http://doi.org/10.1109/MSPEC.1977.6367610 + +35. Vogel, D. and Balakrishnan, R. 2004. Interactive Public Ambient Displays: Transitioning from Implicit to Explicit, Public to Personal, Interaction with Multiple Users. Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST '04), ACM, 137-146. http://doi.org/10.1145/1029632.1029656 + +36. Xu, H., Yu, Z., Wang, Z. and Ni, H. 2014. SmartMic: a smartphone-based meeting support system. The Journal of Supercomputing 70, 3: 1318-1330. http://doi.org/10.1007/s11227-014-1229-3 + +37. Yu, Z. and Nakamura, Y. 2010. Smart Meeting Systems: A Survey of State-of-the-art and Open Issues. ACM Comput. Surv. 42, 2: 8:1-8:20. http://doi.org/10.1145/1667062.1667065 + +38. Zanker, M. and Gordea, S. 2006. Recommendation-based Browsing Assistance for Corporate Knowledge Portals. Proceedings of the 2006 ACM Symposium on Applied Computing (SAC '06), ACM, 1116-1117. http://doi.org/10.1145/1141277.1141541 + +39. SharePoint - Team Collaboration Software Tools. Microsoft Office. Retrieved April 13, 2016 from https://products.office.com/en-us/sharepoint/collaboration + +40. SharePoint 2013. Retrieved April 13, 2016 from https://msdn.microsoft.com/en-us/library/office/jj162979.aspx + +41. GitHub API v3 | GitHub Developer Guide.
Retrieved April 13, 2016 from https://developer.github.com/v3/ + +42. Microsoft Cognitive Services. Retrieved April 11, 2016 from https://www.microsoft.com/cognitive-services/en-us/computer-vision-api + +43. Why you should cool it with the corporate jargon - Fortune. Retrieved April 11, 2016 from http://fortune.com/2011/09/28/why-you-should-cool-it-with-the-corporate-jargon/ \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ibaCpFUWVb9/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ibaCpFUWVb9/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..fd474e653d5a77e508bfe96b80d5ffa24913df4c --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ibaCpFUWVb9/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,205 @@ +§ MEETINGMATE: AN AMBIENT INTERFACE FOR IMPROVED MEETING EFFECTIVENESS AND CORPORATE KNOWLEDGE SHARING + +Figure 1. The MeetingMate system. The content being presented (A) is captured and interpreted, then relevant corporate knowledge information is displayed on the devices of meeting attendees. + +§ ABSTRACT + +We present MeetingMate, a system for improving meeting effectiveness and knowledge transfer within an organization. The system utilizes already existing content produced within the organization (slide decks, meeting information, HR databases, etc.) from which it generates and presents contextually relevant information in real-time to meeting participants through an ambient interface.
Besides providing details about projects and content within the company, an employee relationship graph is created which supports increasing a user's "metaknowledge" about who knows what and who knows whom within the organization. + +§ INTRODUCTION + +The institutional knowledge of a corporation is an important resource [29], and for a corporation to be successful it is necessary for this knowledge to be shared and transferred from those who have it to those who need it [9]. However, large workforces, distributed locations, and demanding schedules act as barriers to successful knowledge transfer. Companies often employ specific activities designed to improve knowledge sharing, such as email newsletters, wiki pages, and all-hands presentations; however, these require employees to do additional work beyond their normal job functions, for some unknown, and unsure, future benefit. + +Submitted to GI 2021 + +Besides improving knowledge and awareness of what is going on within a company, it is valuable to improve knowledge of "who knows what" and "who knows whom" within an organization. Such knowledge is referred to as metaknowledge [24], and increases in metaknowledge have been linked to improved work performance [25], improved ability to create new innovations combining existing ideas [13], and reduced duplication of work [10]. + +In knowledge-based work environments it is common for workers to spend between 20% and 80% of their time in meetings [17, 20, 28, 32], and while meetings are considered important [7, 16], they are also often deemed by the attendees to be inefficient and ineffective [18, 27]. + +This paper describes MeetingMate, a system for improving meeting effectiveness and knowledge transfer in an organization through an ambient interface.
The MeetingMate system utilizes already existing content produced within the organization as source material (slide decks, meeting information, HR databases, etc.) from which it generates and presents contextually relevant information in real-time to meeting participants. This work contributes a novel technique for extracting presented meeting content directly from an HDMI stream, and is unique in its goal of presenting not only corporate "knowledge" about topics within the company, but also improving employees' "metaknowledge" about who knows what and who knows whom within the organization. + +§ RELATED WORK + +§ MEETING ASSISTANCE + +The development of technology to support and enhance meetings has long been a popular topic of research [34]. Rienks et al. [26] summarize much of the work on "pro-active" meeting assistants, and divide systems into categories based on when they provide assistance: before the meeting, during the meeting, or after the meeting. + +Meeting assistants which record the audio and/or visual content of meetings for future viewing are often referred to as "Smart Meeting Systems", and include projects such as the CALO Meeting Assistant System [33], which distributes the task of meeting capture, annotation, and audio transcription, and work by Geyer et al. [8] exploring the idea of allowing meeting participants to create meaningful indices into the meeting timeline while the meeting is occurring to improve later navigation. For a more thorough listing of work on "after the meeting" assistance, see Yu and Nakamura [37]. + +Of the systems designed for in-meeting support, many make use of an audio channel. SmartMic [36] makes use of smartphones to capture the audio of a meeting, and the AMIDA system [22] uses microphones in an instrumented meeting room to listen for keywords in the conversation of a meeting and pull up or suggest contextually relevant documents.
The Connector [5] uses the audio and video channels of a smart meeting room to determine if someone is available to receive a message, and provides mechanisms to deliver the message using the meeting room facilities. Our system is similar in some ways to AMIDA in that both bring up relevant content based on meeting context; however, while AMIDA uses the audio of a meeting, our system derives context from the material being sent to the meeting room's projector. We are unaware of any prior work which extracts the visual content being presented in a meeting as context for a real-time meeting assistant. + +§ CORPORATE KNOWLEDGE + +Some consider knowledge to be a company's "greatest asset" [31]. Lee et al. developed the KMPI metric [3] to measure how well an organization performs in the area of Knowledge Management along five dimensions: knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. Our system aims primarily to improve knowledge sharing and knowledge utilization. + +For making better use of existing corporate knowledge resources, Zanker and Gordea [38] created a recommendation engine to help when manually searching through internal documents. Aastrand et al. [1] propose using open data to bootstrap the process of creating a hierarchical tagging structure for internal content, while Chen [4] looks at the process of text-mining corporate documents to extract useful information. Where these projects consider searching through and mining corporate data, they consider "purposefully" created artifacts such as documents and web pages. + +Our work differs in that while we do mine purposefully created materials such as slide decks and project pages for data, we also make substantial use of "ancillary" corporate data such as meeting room records, mailing lists, and HR databases to generate a more complete picture of the corporate network.
+ +§ AMBIENT INFORMATION SYSTEMS + +Ambient interfaces [2, 15, 21, 35] can be characterized as systems which support the monitoring of noncritical information with the intent of not distracting or burdening the user. Ambient displays have been studied for many uses, including software learning [14], social awareness [6], and office work [12]. + +Pousman and Stasko [23] outline four dimensions in the design of an ambient display system: information capacity, notification level, representational fidelity, and aesthetic emphasis. In our system we aim for high information capacity and representational fidelity, while keeping distractions to a minimum with a low notification level. + +§ CORPORATE KNOWLEDGE CHALLENGES + +This work was developed at [COMPANY NAME] using [COMPANY NAME] internal data. [COMPANY NAME] is a multinational software company of ~11,000 employees. The workforce is widely distributed across many distinct offices, 17 of which house more than 150 employees. + +The company faces many of the challenges of corporate knowledge management [19], and results from the yearly employee survey suggest employees generally wish they had more awareness of what is going on in other parts of the company. [COMPANY NAME] has started a number of initiatives designed to improve awareness and knowledge sharing throughout the company, such as wiki pages, project groups, mailing lists, and all-hands presentations. However, since these all require employees to do some additional work beyond their normal job duties without the guarantee of a particular future benefit, these initiatives have not had the desired effect on corporate knowledge sharing. + +The goal of our work is to take advantage of the vast amount of material already being produced, and information naturally available within an organization, to improve efficiency and awareness.
A primary design objective for the system is that there is no additional cost for someone to use the system; that is, it should be just as easy to use the system as it is to not use it. + +§ MEETINGMATE + +There are many different times and activities throughout the day where employee corporate knowledge could be improved. We've chosen to focus on times when employees are attending meetings. Since employees are involved in a large number of meetings [17, 20, 28, 32], a system designed to augment the experience of attending meetings would have a broad reach within the organization, and since those meetings are often considered ineffective [18, 27], a meeting augmentation system could have the dual benefit of increasing overall corporate knowledge while simultaneously improving the effectiveness of the meeting. + +To this end, we've created MeetingMate, a system consisting of three main components: a Data Collector, a Presentation Capture System, and an Ambient Assistant (Figure 2). + +Figure 2. Architecture of the MeetingMate system. (Components in light grey are part of the existing meeting room infrastructure.) + +At a high level, the MeetingMate system uses the visual content presented at a meeting as the "search query" for a corporate knowledge database, and presents contextually relevant information to the meeting attendees through an ambiently updating interface. We next describe the three main components of the MeetingMate system in more detail. + +§ DATA SOURCES/DATA COLLECTION + +This section describes a number of existing data sources within the organization, what information is available within these sources, and how the sources are processed to extract their content. For this work we only considered data which was "publicly" available within the company, that is, data which everyone within the company has access to.
By only using "publicly" available data, we minimize the risk that someone using MeetingMate will see privileged or confidential information to which they should not have access. + +§ S1. SLIDE DECKS + +Within the company, there are two main locations where documents are stored: a Microsoft SharePoint [39] server, and a JFile [name changed for anonymity] project management system. Between the two locations, there are 13,688 PowerPoint (PPT) slide decks dating back to 1997, with 5,343 presentations created between 2014 and 2016. The decks cover a wide range of topics and have been submitted by authors in all divisions of the company. + +Processing the slide decks involves two main steps: collecting them from the servers, and analyzing the slides to extract relevant data. For the documents hosted on the SharePoint server, the SharePoint API [40] was used to search for and download all files of either *.ppt or *.pptx file type. The JFile server does not have a useful API for this purpose, so a web scraper was written in Python which iterates over each project and crawls through each sub-folder in the documents tree, downloading *.ppt and *.pptx files. For both the SharePoint and JFile based slide decks, high-level metadata such as the creation date, author, and file location are captured during the collection process. + +Once the PPTs are downloaded, a data extraction process begins. A C# program using the Office.Interop.PowerPoint libraries saves images of each slide in multiple resolutions as .png files, and the text on each slide is extracted and saved to a database. + +To download the full collection of 13,688 slide decks and extract the content from the 310,554 slides takes approximately 48 hours on a desktop computer. On a daily basis the SharePoint and JFile systems are searched and crawled, respectively, and newly added PPTs are downloaded and processed. This daily process takes approximately one hour. + +§ S2.
MEETING INFORMATION + +Since the system is restricted to internally public data, we cannot access the calendars of individual employees for meeting records. However, the majority of meetings take place in meeting rooms, which have shared calendars. As the company uses a Microsoft Outlook mail and calendar system, a C# program using the Office.Interop.Outlook libraries was written to first collect a list of all meeting rooms, and then step through each of the past meetings which have occurred in the room. For each meeting we record the meeting's name (which often indicates the topic of the meeting), location, length, and list of attendees. + +In total there were 719 meeting rooms which held a total of 355,233 meetings between 2014 and 2016. Collection of the entire data set took approximately 36 hours. The process of accessing each of the individual calendars is relatively time-consuming, taking ~5 hours for incremental daily updates. + +§ S3. CODE REPOSITORIES + +The source code developed by the company is primarily managed through an internal GitHub Enterprise server. Using a Python script with the GitHub API [41], data for the 6,251 internal git repositories is collected, including repository name, description, contributors, languages used, and bytes of code. 1,131 employees are listed as contributors to at least one git repository. + +Data collection for the full set of repositories requires ~3.5 hours. Incremental updates are not easily captured using the API, so the full set of repository data is collected each day. + +§ S4. JFILE PROJECT PAGES + +The JFile project management system is organized into individual "projects" which represent specific working or interest groups within the company. The system houses 3,405 groups, with a median member count of 8.
The same crawler used for collecting the PPTs from JFile is used to collect the project information, including the project name, project description, and a list of group members. + +§ S5. INDIVIDUAL HUMAN RESOURCES DATA + +Each of the 11,615 employees (contingent and full-time) at the company has an entry in the internal employee search system. This data is also available in spreadsheet form with 42 columns of information for each employee. Among the most relevant are name, email address, work location, job title, and manager's name. From the employee name and manager's name fields we are able to construct the formal organizational structure of the company. Headshot photos (which are available for 58% of employees) follow a consistent naming pattern and location, and are easily downloaded and associated with the appropriate record. + +Updating the individual HR data entails copying the daily spreadsheet from the HR system and running the script to look for and download any new, or updated, headshots. This process takes approximately 30 minutes. + +§ S6. EMAIL GROUP MEMBERSHIPS + +To simplify sending emails and meeting requests to collections of people, the company makes use of email groups. There are a total of 15,054 email groups stored on the Microsoft Outlook mail server, with between 1 and 4,316 members each and a median member count of 6. + +The email group data (group name and membership list) is again collected with a C# program using the Office.Interop.Outlook library, and the collection completes in approximately 30 minutes. + +§ S7. CORPORATE DEFINITIONS + +Stored on the company intranet is an employee-maintained database of acronyms and terms frequently used within the organization. 322 acronyms and 776 terms are defined in this database, which is downloaded on a daily basis.
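As an illustration of how captured presentation text might later be matched against this definitions database, here is a minimal, stdlib-only Python sketch; the dictionary entries below are invented examples, not entries from the company's actual database:

```python
import re

# Hypothetical excerpt of the employee-maintained definitions database (S7);
# the real entries live on the company intranet and are refreshed daily.
ACRONYMS = {"PPT": "PowerPoint", "OCR": "Optical Character Recognition"}
TERMS = {"tf-idf": "term frequency-inverse document frequency weighting"}

def find_definitions(slide_text):
    """Return (kind, key, expansion) tuples for every known acronym or
    term that appears in the captured slide text."""
    hits = []
    for acro, expansion in ACRONYMS.items():
        # Acronyms must match as whole, case-sensitive tokens.
        if re.search(r"\b%s\b" % re.escape(acro), slide_text):
            hits.append(("acronym", acro, expansion))
    lowered = slide_text.lower()
    for term, definition in TERMS.items():
        # Terms are matched case-insensitively.
        if term.lower() in lowered:
            hits.append(("term", term, definition))
    return hits

print(find_definitions("Slides are stored as PPT decks and ranked with tf-idf."))
```

A production version would also have to cope with plural forms and OCR noise in the captured text.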
+ +§ LIVE PRESENTATION CAPTURE + +In order to supplement the presentation material with relevant information, the MeetingMate system needs to be aware of what is being presented. One possible way to do this would be to write an extension for PowerPoint which uses the Interop.PowerPoint APIs to extract the data being presented and transfer that information to the MeetingMate server. However, this approach has a number of shortcomings. First, it would only work for presentation material from Microsoft PowerPoint. Second, and more significantly, it would require presenters to do the additional work of installing a plug-in on the machine from which they are presenting. Since a primary design concern of MeetingMate is to not require additional set-up work for people to make use of the system, this approach is undesirable. + +Our approach is to instead use an HDMI capture and pass-through device (designed for live streaming video games) to capture a copy of exactly what is being displayed on the presentation screen. In this way, the presenter performs the exact same steps to present content as they usually would (plugging a video cable into their laptop), but rather than the cable going directly to the projector, it goes to the HDMI capture device, which passes the signal on to the projector (Figure 2). + +The content saved by the HDMI capture device is an image of what is currently being sent to the presentation screen. This image needs to be processed to find any text being displayed. + +The Windows computer connected to the capture device uploads screenshots to the Project Oxford OCR [42] service for text extraction. The process of uploading the screenshot to the OCR server and receiving the extracted text takes an average of 2.0 seconds.
Images are only sent to the OCR service when the projected slide has changed: the most recent image is compared with the previously uploaded one, and a new image is uploaded only if at least 15% of the pixels have changed. The extracted text is sent to the Ambient Assistant server (Figure 2). + +Using OCR for the text extraction not only allows text within images to be recognized, but also enables content from any source (PDF, video, etc.) to be analyzed. This makes MeetingMate completely agnostic to the format of the presented material. + +§ AMBIENT ASSISTANT + +The final piece of the MeetingMate system is the Ambient Assistant server, which receives the extracted text from the content being presented, finds relevant corporate knowledge content, and serves the results as a responsive webpage. The server is written in Python using the Flask framework and a Tornado server, running on a Windows Server 2012 instance. + +The goal of the Ambient Assistant webpage is to be as unobtrusive and non-disruptive as possible, while still providing useful information which will enhance the audience's understanding of the presentation. + +The relevant knowledge content found by the Ambient Assistant is shown on the served page as a series of 'cards' (Figure 1). The individual cards are designed to present the most relevant information at a glance, without requiring input, or too much attention, from the user. As new cards become available they slowly fade in at the bottom of the screen (over a period of 5 seconds) while the page automatically scrolls to make the most recent cards visible. This webpage could be viewed on a number of devices - we have explored several, including projecting the Ambient Assistant onto a secondary screen beside the main projection screen - but we believe the most useful configuration is for individuals to view the Ambient Assistant on a personal device such as a phone, tablet, or laptop.
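The slide-change check described above (upload only when at least 15% of pixels differ) can be sketched in a few lines of Python. Frames are represented here as flat lists of grayscale values; the 15% fraction comes from the text, while the per-pixel noise threshold is our own assumption, as the paper does not specify one:

```python
CHANGE_FRACTION = 0.15   # from the paper: upload only if >= 15% of pixels changed
PIXEL_DELTA = 10         # assumed per-pixel noise threshold (not specified in the paper)

def slide_changed(prev_frame, new_frame):
    """Compare two equal-sized grayscale frames (flat lists of 0-255 ints)
    and report whether enough pixels differ to treat this as a new slide."""
    assert len(prev_frame) == len(new_frame)
    changed = sum(
        1 for a, b in zip(prev_frame, new_frame) if abs(a - b) > PIXEL_DELTA
    )
    return changed / len(prev_frame) >= CHANGE_FRACTION

# Only frames flagged as changed would be uploaded to the OCR service.
prev = [0] * 100
same_slide = [0] * 98 + [255] * 2    # 2% of pixels changed: same slide
new_slide = [0] * 80 + [255] * 20    # 20% of pixels changed: new slide
print(slide_changed(prev, same_slide), slide_changed(prev, new_slide))  # → False True
```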
+ +The following sections describe the types of cards which are available, when they are displayed, and what information they contain. + +§ ACRONYMS AND DEFINITIONS + +Corporate communications are often riddled with acronyms and jargon, making the text unnecessarily difficult to understand [30,43]. As an example, in the 13,688 slide decks collected from the company servers, there are 132,588 instances of acronyms across the more than 300,000 slides. However, only 1.7% of those acronyms are defined within the slide deck where they are used. + +The first two card types are internal technology definitions and acronym expansions. The presentation text is searched for any of the collected corporate definitions or acronyms (S7), and if they are found, a card is shown with the acronym expanded (if applicable) and the term defined (Figure 3). + +Figure 3. Sample internal technology definition (left) and acronym expansion (right) cards. + +§ EMPLOYEE INFORMATION + +The employee information card is displayed whenever an employee's name or email address is found on a slide (Figure 4). The lists of employee names and email addresses are derived from the human resources data (S5). Early testing revealed a common occurrence where the name an employee goes by is different from their 'official' name in the HR database (e.g., Jon Smith vs. Jonathan Smith). To overcome this, a list of common name alternatives was used to generate a list of possible names for each person (e.g., Jonathan Smith could be either "Jon Smith" or "Jonathan Smith"). + +Figure 4. Sample employee information card. + +Besides the employee's name, the card also displays the employee's headshot, job title, and work location. Additionally, lists of the projects (S4) and code repositories (S3) the employee is actively contributing to are included.
Finally, the employees most closely related according to the computed employee network graph (discussed below) are presented as a list of "Frequent Collaborators". Combined, these lists give an overview of both what and who this employee knows. + +§ PROJECTS/CODE REPOSITORIES + +The next two card types are project cards and repository cards (Figure 5), which are derived from the JFile (S4) and GitHub (S3) data sources. For each, the name and description of the project/repository is displayed, along with the names of members or contributors, and in the case of repository cards, the list of programming languages used is also shown. + +Figure 5. Sample project (left) and repository (right) cards. + +The project and repository cards are shown whenever the exact project or repository name is found on a slide. Additionally, these cards are displayed if the text of the slide contains many of the same keywords as the description of a project or repository. This is determined by transforming the descriptions of the projects and repositories into tf-idf vectors [11], computing the cosine similarity between the descriptions and the recognized text, and displaying the projects/repositories with a cosine similarity > 0.85. + +§ CONTEXTUALLY SIMILAR SLIDES + +The final card type displays contextually relevant slides (Figure 6) from the collection of over 300,000 slides gathered from the internal slide deck repositories (S1). Besides an image of the contextually relevant slide itself, the card also presents the author of the slide deck, when it was created, and a list of employees who are mentioned somewhere in the deck. Clicking on the "more" icon at the top right of the card presents options to: generate an email containing a link to the contextually relevant presentation, open the presentation directly, or dismiss this card and have it no longer be suggested. The other card types each have similar menus. + +Figure 6.
Sample contextually similar slide card (left), and the associated context menu (right). + +Analogous to the process used to find contextually similar projects and repositories, the text of each slide in the slide deck repository is converted to a tf-idf vector and compared to the captured text. To increase exposure to a wider range of content, if there are any slides with a cosine similarity > 0.85, one slide is chosen at random and displayed. + +§ PERSONAL CONNECTIONS, EMPLOYEE RELATIONSHIPS + +The information on the cards above is tailored to the content being presented, but does not change based on who is viewing the Ambient Assistant. By taking into account who is using the Ambient Assistant ("Clarence Mcevoy" in Figure 7), an additional "Personal Connection" section can be inserted into the cards which highlights a connection the logged-in user has to a person or project (Figure 7). + +Figure 7. Sample cards with additional personal connection information. + +This personal connection field is populated by looking at an employee network graph computed using the Meeting Information (S2), Code Repository (S3), JFile Project Pages (S4), HR Data (S5), and Email Lists (S6) data sets. For each data set, an adjacency matrix is computed based on the strength of the inter-personal connection for each pair of employees. + +For example, using the Meeting Information data set, we look at each meeting both employees were a part of, and calculate the weight of the edge between two employees ($e_1$ and $e_2$) with the following formula: + +$$ \mathrm{weight}_{e_1,e_2} = \sum_{m \,\in\, \mathrm{meetings}(e_1,e_2)} \frac{\mathrm{length}_m}{\#\mathrm{attendees}_m} $$ + +where $\mathrm{length}_m$ is the length of meeting $m$, and $\#\mathrm{attendees}_m$ is the number of people in the meeting.
This means longer meetings, and meetings with fewer attendees, are weighted more heavily. Once all edge weights are computed, they are globally ranked from strongest to weakest, and re-mapped to between 0 and 1. Similar calculations are carried out for the repository, project page, and email list data sets. For the employee data, edges are created between employees and their managers. + +The edges from the individual data set adjacency matrices are then combined into one overall adjacency matrix by summing the weights of the individual edges to create a composite "weight" metric incorporating all the different measures of adjacency. Then, to find the strongest "personal connection" between any pair of employees, we look for the heaviest-weight, shortest path. That is, we take the set of all shortest paths between the two employee nodes, and then select the path in which the edge weights are the highest. + +The heaviest-weight, shortest path is then converted to a natural language sentence by looking at the strongest components of each edge. For example, "You are managed by Lester Carr; who frequently meets with William Topolinski", or "You are in some project groups (including UI Unification) with Alberta Santiago, who reports to Norma Stenn". The overall adjacency matrix represents a single connected component, so a path can be computed between any two employees. However, only paths with at most one intermediate node are displayed, to emphasize "strong" connections. + +§ FEEDBACK AND DISCUSSION + +To preliminarily test the design and usefulness of MeetingMate, the system was deployed and tested over a series of weekly group meetings. In several cases the system provided genuinely unknown information to the audience (in one case, while sharing a research paper, an employee card for one of the authors showed up - prior to that, attendees at the meeting did not know the author had been hired).
This deployment also indicated some ways the system produced spurious or unnecessary results - for example, the internal dictionary contains a definition for "User" as "someone who uses our software", and this definition appeared whenever the (very common) word 'user' appeared on a slide. While it is easy for users to mark content to "never be shown again", it would be useful to reduce the number of low-utility results at the system level. In the future, more advanced language modelling could look at the surrounding context of the slides, or recent slides, to better predict what results might be most relevant. It would also be interesting to explore recognizing content other than text, such as headshots of particular people, and using those as contextual cues. + +The next step is to deploy the system in a meeting room continuously for several months to collect more feedback about how the system performs and is received. This wider deployment will bring up some interesting issues. For meetings concerning highly sensitive topics, we plan to have a very clear, physical switch for presenters to "disable" MeetingMate if they are uncomfortable with the content of their presentation being captured. It will be interesting to see how frequently that functionality is utilized. + +While we are limiting ourselves to "publicly" available internal data, there are still cases where information which is technically visible to all employees probably shouldn't be. For example, a meeting name could be "Discuss firing John Doe". The creator of such a meeting is probably unaware that the meeting name is publicly viewable. Practically, the system will need a way to remove these sorts of sensitive entries, but hopefully the deployment of this system will make people more aware of what is visible to other employees. It would also be interesting to make use of non-public information in the system to allow for more personalized recommendations.
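The "heaviest-weight, shortest path" selection described in the Personal Connections section can be sketched as a breadth-first enumeration of all shortest paths followed by ranking on total edge weight. The graph below is an invented toy example, not company data:

```python
from collections import deque

def strongest_connection(graph, src, dst):
    """Among all shortest paths from src to dst in an undirected weighted
    graph {node: {neighbor: weight}}, return the one with the highest
    total edge weight (the "heaviest-weight, shortest path")."""
    queue = deque([[src]])
    shortest_len = None
    candidates = []
    while queue:
        path = queue.popleft()
        # BFS pops paths in nondecreasing length, so once a longer path
        # appears, no further shortest path to dst remains.
        if shortest_len is not None and len(path) > shortest_len:
            break
        node = path[-1]
        if node == dst:
            shortest_len = len(path)
            candidates.append(path)
            continue
        for nbr in graph[node]:
            if nbr not in path:
                queue.append(path + [nbr])
    weight = lambda p: sum(graph[a][b] for a, b in zip(p, p[1:]))
    return max(candidates, key=weight) if candidates else None

# Invented example: two shortest paths of length 3 exist from "You" to
# "William"; the one through "Lester" carries more total weight.
g = {
    "You":     {"Lester": 0.9, "Alberta": 0.4},
    "Lester":  {"You": 0.9, "William": 0.7},
    "Alberta": {"You": 0.4, "William": 0.8},
    "William": {"Lester": 0.7, "Alberta": 0.8},
}
print(strongest_connection(g, "You", "William"))  # → ['You', 'Lester', 'William']
```

Per the section above, a deployed version would additionally discard paths with more than one intermediate node before presenting a connection.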
+ +Overall, we believe MeetingMate serves as a valuable solution for improving meeting effectiveness and knowledge sharing within an organization, and will serve as an example for further work in this area. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/l8jScx6ROAh/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/l8jScx6ROAh/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..5a9218906a7f5daac7f812103e1035fa86b610b8 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/l8jScx6ROAh/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,421 @@ +# Towards Enabling Blind People to Fill Out Paper Forms with a Wearable Smartphone Assistant + +Anonymous Authors + +affiliation + +## Abstract + +We present PaperPal, a wearable smartphone assistant which blind people can use to fill out paper forms independently. Unique features of PaperPal include: a novel 3D-printed attachment that transforms a conventional smartphone into a wearable device with adjustable camera angle; the capability to work on both flat stationary tables and portable clipboards; real-time video tracking of pen and paper, coupled to an interface that generates real-time audio readouts of the form's text content and instructions to guide the user to the form fields; and support for filling out these fields without signature guides. The paper primarily focuses on an essential aspect of PaperPal, namely the accessible design of its wearable elements and the design, implementation and evaluation of a novel user interface for the filling of paper forms by blind people.
PaperPal distinguishes itself from recent work on a smartphone-based assistant for blind people for filling out paper forms, which requires the smartphone and the paper to be placed on a stationary desk, needs a signature guide for form filling, and has no audio readouts of the form's text content. PaperPal, whose design was informed by a separate Wizard-of-Oz study with blind participants, was evaluated with 8 blind users. Results indicate that they can fill out form fields at the correct locations with an accuracy reaching 96.7%. + +Index Terms: Human-centered computing—Accessibility—Accessibility technologies; Human-centered computing—Human computer interaction (HCI) + +## 1 INTRODUCTION + +Paper documents continue to persist in our daily lives, notwithstanding the paperless digitally connected world we live in. People still continue to encounter paper-based transactions that require reading, writing and signing paper documents. Examples include paper receipts, mail, checks, bank documents, hospital forms and legal agreements. A recent survey shows that over 33% of transactions in organizations are still done with paper documents [4]. Many of these paper documents, at the very least, require affixing signatures on them. While it is straightforward for sighted people to write and affix their signatures on paper, for people who are blind this is challenging, if not impossible, to do independently. When it comes to writing, blind people invariably rely on sighted people for assistance. Such assistance may not always be readily available, but more troublingly, having to depend on others for writing always comes with a loss of privacy. To make matters worse, unlike reading assistants for blind people, of which there are quite a few (e.g., [2,8]), there are hardly any computer-assisted aids that can help them to write on paper independently, a problem that has taken on added significance due to the recent pandemic-driven upsurge in mail-in balloting.
In fact, a recent lawsuit was brought by blind plaintiffs over the discriminatory nature of mail-in paper ballots, since they could not be filled out without compromising confidentiality [5]. + +There are two essential aspects to a form-filling assistant for blind people: (1) document annotation, which includes capturing the image of the document with a camera, and automatic identification of all its items, namely, text segments, form fields and their labels; and (2) the design and implementation of an interface to enable blind people to access and read all the items of the document and fill out the fields independently. + +![01963e9a-1992-7f06-82fd-faae739533cb_0_975_482_611_438_0.jpg](images/01963e9a-1992-7f06-82fd-faae739533cb_0_975_482_611_438_0.jpg) + +Figure 1: A blind user filling out a form using PaperPal. An interaction scenario: (A) The pen is pointing to a text item, which is read out, (B) Rotate the pen right to read the next item, (C) Rotate bi-directionally to navigate to the form field, (D) Fill out the field. + +Insofar as (1) is concerned, the existence of several smartphone reading apps for them (e.g., SeeingAI [8], KNFB reader [2], and Voice Dream Scanner [26]) has established the feasibility of acquiring images of paper documents by blind people using a smartphone. These apps demonstrate that blind users can independently use their audio interface to capture the image of the document. The aforementioned apps also extract text segments from the captured images using OCR and document segmentation methods, which are then read out to the user. Insofar as forms are concerned, it is possible to extract form fields and their labels from document images using extant vision-based systems such as Adobe [10] and AWS Textract [9]. In contrast, the HCI aspects of an interface for a form-filling assistant for blind people are a challenging and relatively understudied research problem, and are the primary focus of this paper.
+ +Of late, research on the HCI aspects of writing aids for blind people is beginning to emerge. A recent work describes a first-of-its-kind smartphone-based writing aid, called WiYG, for assisting blind people to fill out paper forms by themselves [28]. WiYG uses a 3D printed attachment to keep the phone upright on a flat table and redirects the focus of the phone's camera to the document that is placed in front of the phone. The paper and phone in WiYG are kept stationary; the user receives audio instructions to slide the signature guide - a card similar in size to a regular credit card that has a rectangular opening in the middle to help blind people sign on paper - to different form fields on the paper. All the form fields are manually annotated a priori. In addition, visual markers are affixed to the signature guide for tracking its location with the camera. + +The WiYG work has opened up new design questions and challenges that could form the basis for next generation computer-aided paper-form-filling writing assistants for blind people. We explore some of these questions here. Firstly, WiYG provides no readouts of the text in the form documents, a capability that is arguably desirable, especially for documents that require signatures. Secondly, WiYG simply steps through each form field in the document one by one, without backtracking. In practice, one would like to seamlessly switch back and forth between the fields and fill them in any order. Thirdly, WiYG requires a flat table to keep the paper as well as the phone stationary during use. The ability to operate in different situational contexts, such as documents on non-stationary portable surfaces like clipboards, makes for a more flexible computer-aided reading/writing wearable assistant. In fact, oftentimes blind users find themselves in situations where the documents they are asked to review and sign, such as forms at hospitals and doctor's offices, are on clipboards.
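Whatever the tracked object (WiYG's signature guide or a pen), the audio guidance underlying such systems reduces to comparing a tracked position against the annotated coordinates of the target form field. The sketch below is our own illustration of that idea, with invented coordinates and a simple one-axis-at-a-time instruction policy; it is not the actual logic of either system:

```python
def guidance_instruction(pen_xy, field_box, tolerance=10):
    """Given a tracked pen-tip position (pixels) and a target field's
    bounding box (x, y, w, h), return a spoken-style instruction.
    Coordinates and the policy are illustrative, not from either system."""
    px, py = pen_xy
    x, y, w, h = field_box
    cx, cy = x + w / 2, y + h / 2   # center of the annotated field
    dx, dy = cx - px, cy - py
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "You are on the field. Start writing."
    # Announce the larger correction first, one instruction at a time.
    if abs(dx) >= abs(dy):
        return "Move the pen %s." % ("right" if dx > 0 else "left")
    return "Move the pen %s." % ("down" if dy > 0 else "up")

# Target: a signature line whose (invented) box is (200, 400, 180, 30).
print(guidance_instruction((80, 410), (200, 400, 180, 30)))  # → Move the pen right.
```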
+ +To explore these questions we employed a user-centered design approach. We started with a Wizard of Oz (WoZ) pilot study with eight blind participants to understand the feasibility of filling paper forms on a clipboard. The study included paper forms placed on both flat desks and portable clipboards, with the wearable camera worn over the chest or attached to glasses to mimic smart glasses. The study was designed to elicit data on several key questions, including: (1) How do blind people write on paper attached to a portable clipboard? (2) Where can the camera be worn conveniently and in a way that the pen and paper are visible within the camera's field of view? (3) Considering all the camera-clipboard movements, how can blind people coordinate the clipboard and the wearable camera to maintain the pen and paper inside the camera's field of view while writing? The study was also intended to elicit user feedback and gather design requirements. The findings from the WoZ study informed the design of PaperPal, a wearable smartphone assistant for non-visual interaction with paper forms in more general scenarios than only a stationary desk, such as portable clipboards. + +There are several unique aspects to the design of PaperPal. First, its novel 3D-printed attachment transforms a conventional smartphone into a wearable device with a mechanism to adjust the camera angle with one hand. Second, PaperPal is flexible in where it can be used: stationary tables as well as non-stationary surfaces, specifically, portable clipboards. Third, PaperPal enables users to write without having to use their signature guides - a key requirement that emerged from the WoZ study. Fourth, PaperPal leverages real-time video processing techniques to track the paper and pen and accordingly provides appropriate audio feedback.
Lastly, both reading and writing are tightly integrated in PaperPal, with users being able to easily switch between them while accessing different items on the document. Our evaluation with 8 blind users showed that PaperPal could successfully assist people who are blind to fill in various paper forms, such as bank checks, restaurant receipts, lease agreements, and informed consent forms. They independently filled out these forms with an accuracy reaching 96.7%. We summarize our contributions as follows: + +- The results of a Wizard of Oz study with blind participants to uncover requirements for independently interacting with paper forms in portable settings. + +- The design of a novel 3D-printed attachment that can turn a smartphone into a wearable with an adjustable camera angle. This can also be used for other wearable vision-based applications that require adjustment of the camera angle. + +- The design and implementation of PaperPal, a new smartphone application, to assist blind users to independently read and fill out paper forms both on flat tables as well as on portable clipboards. + +- The results of a user study with blind participants to assess the efficacy of PaperPal in filling out various paper forms. + +Following WiYG [28], we also assume annotated paper forms. As mentioned earlier, there exist smartphone applications and known techniques for document image capture by blind people and for automatic annotation. While the annotation problem is orthogonal to the design and implementation of the user interface explored in this paper, in Section 5.11 we describe our experiences with automated annotation of paper forms and discuss its envisioned integration into PaperPal to realize a fully automated paper-form-filling assistant.
+ +## 2 RELATED WORK + +The research underlying PaperPal has broad connections to assistive technologies for reading and writing on paper documents, particularly for blind people, 3D printed artifacts, and image acquisition and processing in accessibility. What follows is a review of existing research on these broad topics. + +Reading and Writing: For well over a century, Braille has been the standard assistive tool for reading and writing for blind people. It is a tactile-based system made up of raised dots that encode characters. The use of Braille has been declining in the computing era, which ushered in a major paradigm shift to digital assistive technologies [48]. Examples of digital technologies for reading printed documents include some CCTVs [63] and the Kurzweil scanner [3], which reads off the text in scanned documents. + +The smartphone revolution has witnessed a surge in mobile reading aids. Notable examples include the KNFB reader [2], SeeingAI [8], Voice Dream Scanner [26], Text Detective [14], and TapTapSee [60]. The smartphone-based solutions (e.g., [47]) as well as other hand-held solutions (e.g., SYPOLE [30]) require the user to position the camera for getting the document in its field of view. In recent years, wearable reading aids are emerging (e.g., FingerReader [56], HandSight [58], and OrCam [7]). Although finger-centric wearables such as [56, 58] do not require positioning of the camera, the drawback is their interference with writing. Reading paper documents using crowd-sourced services is another option for blind people (e.g., Be My Eyes [1] and Aira [11]). These have the obvious drawback of lacking privacy. + +In contrast to reading aids, research on assistive writing on physical paper is at a nascent stage. A Wizard of Oz study to explore the kinds of audio-haptic signals that would be useful for navigation on a paper form was reported in [17].
In this study, the form was placed on a flat table and the wizard generated the audio-haptic signals that were received on a smartwatch worn by the participant. + +A recent paper describes WiYG, a smartphone-based assistant for blind people to fill out paper forms [28]. In WiYG the user places the phone on a stationary table in an upright position using a 3D-printed attachment. The paper form is placed on the desk in front of the smartphone. The user slides the signature guide over the paper form to each form field, guided by audio instructions provided by the smartphone app. To write into the form field the user uses both hands, one to keep the signature guide in place over the form field and the other to write into it with the pen. As mentioned earlier in Section 1, WiYG provides no readouts of the text, simply steps through each form field, and can only be used with a flat table where both the paper and the phone are kept stationary. The PaperPal system described in this paper integrates both reading of the document's text and writing in the form fields. It has the capability to operate on both stationary tables and portable clipboards. + +3D Printing in Assistive Technologies: The increasing availability of 3D printers has increased the potential for rapid 3D printing of assistive technology artifacts [20, 38]. The work in [31] shows that it is feasible for blind users to do 3D printing of models by themselves, and [23] lists organizations that use 3D printing tools to serve people with disabilities. Other examples of 3D printing applications are custom 3D printed assistive artifacts [22, 35], 3D printed markers attached to appliances [33], and applications in accessibility of educational content [19, 21, 24, 37], graphical design [46], and learning programming languages [39].
3D printing is also used to convey visual content [59], art [25], and map information [55] to blind people. Interactive 3D printed objects are yet another way 3D printing is utilized for accessibility [51-53]. Other examples include generating tactile children's books [40, 57] to promote literacy in children. In addition, [41] studies how children with disabilities can use 3D printing, and [36] mentions that children with disabilities can also utilize 3D printing in the context of DIY projects. 3D printing is also used to extend already existing technologies (e.g., making smartphones wearable [44]). In this paper we utilize 3D printing to design a phone case and a pocketable attachment that turn a smartphone into a wearable whose camera angle can be adjusted.

Image Acquisition and Processing in Accessibility: Accessible image acquisition tools such as [15, 43, 62] instruct blind users to position the camera at the correct angle and distance from the target for capturing an image. The work in [32] illustrates the practical deployment of such tools in an assistive technology for image acquisition by blind people. In terms of capturing images of paper documents, assistive reading apps, namely SeeingAI [8], KNFB Reader [2], and Voice Dream Scanner [26], demonstrate that blind people can independently use an app's interface to aim the smartphone camera at a paper document and capture its image.

The post-processing of document images is a well-established research topic and ranges from local OCR processing [12] to other computer vision methods such as document segmentation [13, 29] and form labeling techniques [9, 10, 49, 61].

Another topic related to camera-based assistive technologies is the use of visual markers for tracking objects in the environment.
For example, in [27] different types of visual tracking methods are studied to make shopping easier for blind people. The work in [45] studies color-coded markers for use in a wayfinding application for blind people. Visual markers are especially beneficial when computer vision methods do not provide satisfactory accuracy. Examples of assistive technologies that utilize visual markers are [28, 54, 55]. In PaperPal we also use visual markers to track the tip of the pen and the paper. To track the latter, PaperPal uses visual markers similar to the ones used for tracking the signature guide in [28]. To track the pen, PaperPal uses visual markers attached to a 3D printed pen topper, inspired by previous work on pen tracking that also uses visual markers [21, 64-66].

## 3 A Wizard of Oz Pilot Study

To the best of our knowledge, there is no previous research on how blind people write on paper documents attached to non-stationary surfaces such as portable clipboards. We therefore conducted a pilot study to assess the feasibility of an assistive tool that uses a wearable device for filling out paper forms attached to clipboards. The study explored these specific questions: (1) How do blind people write on non-stationary surfaces like clipboards with a wearable? (2) How do blind people coordinate their hand and body movements to keep the pen and paper within the camera's field of view? (3) Which of the two proposed on-body locations for the wearable camera, head or chest, is more suitable? In addition, the study was intended to gather requirements for the wearable camera attachment. Details of the study follow.

### 3.1 Participants

Eight (8) participants (3 males, 5 females) whose ages ranged from 35 to 77 (average age 50) were recruited for the pilot study.
All participants were completely blind; all knew how to write on paper; none had any motor impairments that would have affected their full participation in the study.

### 3.2 Apparatus

The study used a standard ballpoint pen, a portable clipboard, and a credit-card-sized signature guide. Each form, printed on standard letter-sized paper, had 5 randomly placed equal-sized fields, with the same distance between consecutive fields. The wizard used a Nexus phone to send instructions to an iPhone 8+. Participants wore the iPhone 8+ at two on-body locations using two holders. The first holder had the iPhone attached to ski goggles; participants wore this on the head akin to smart glasses ($\text{Wearable}_{\text{head}}$, figure 2 B). With the second holder, participants wore a lanyard around their necks with the phone resting on their chests; this holder had a reflective mirror to redirect the camera's field of view and served as the wearable on the chest ($\text{Wearable}_{\text{chest}}$, figure 2 A, C).

![01963e9a-1992-7f06-82fd-faae739533cb_2_980_147_612_333_0.jpg](images/01963e9a-1992-7f06-82fd-faae739533cb_2_980_147_612_333_0.jpg)

Figure 2: The apparatus for the pilot study. A: The phone case with a reflective mirror in front of the camera, to be worn on the chest using the lanyard in C. B: Ski goggles for wearing the phone as glasses. D: Paper with Aruco markers where users mark 'x' in the numbered form fields.

### 3.3 Study Design

In this within-subjects study every participant filled out a total of 4 random forms corresponding to four conditions, namely <$\text{Wearable}_{\text{head}}$, form on desk>, <$\text{Wearable}_{\text{head}}$, form on clipboard>, <$\text{Wearable}_{\text{chest}}$, form on desk>, and <$\text{Wearable}_{\text{chest}}$, form on clipboard>.

A total of 32 forms were filled out (8 participants, 4 conditions). The order of the four form-filling tasks was randomized to minimize learning effects.
The wizard app was used by the experimenter (i.e., the wizard) to manually direct the participant to the form fields by sending directional audio instructions such as "move left", "move right", and so on. The participant's phone ran an app that tracked the paper based on the markers printed on it (see figure 2 D). The participant's phone was also instrumented to gather study data throughout each study session, which was also video recorded.

Accuracy was measured as the percentage of overlap between the annotated rectangular region of a given form field (a priori) and the rectangle enclosing the participant's written text in that field (annotated after the study) [28].

### 3.4 Procedure

Each form-filling task began with the wizard directing the participant to the first form field. We regarded this as the initialization phase and discounted it from our measurements so as to exclude any confounding variables that might arise from starting at a random position. Upon reaching the form field, the participant would initial the field with an 'x' using the signature guide. If at any time during this process the paper disappeared from the camera's field of view, the iPhone app would raise a 'paper not visible' audio alert. In response, the participant would make adjustments by shifting the paper or the wearable to bring it back into view, which was acknowledged by a 'paper is visible' announcement from the app. The participant received navigational instructions only when the paper was visible to the camera. The experimenter monitored the participant's navigational progress and sent audio directions in real time to guide the user's signature guide to each form field. At the conclusion of the session, participants compared and contrasted writing on the desk vs. the clipboard, the wearable's location on the chest vs. the head, and other experiences, in an open-ended discussion.

### 3.5 Key Takeaways

Chest vs.
Head location for the Wearable: 6 out of 8 participants preferred the chest location, one participant preferred the head location, and one had no preference. With the camera on the head, the paper went out of view far more often (by a factor of 4) than when the camera was worn on the chest.

The differences in percentage of overlap among the 4 conditions were statistically significant (repeated measures ANOVA, $F_{3,124} = 5.32$, $p = 0.002$). Pairwise comparisons with a post-hoc Tukey test showed that under the <head, clipboard> condition the field overlaps were significantly smaller than when the user wore the phone on the chest and wrote either on the desk ($p = 0.0102$) or on the clipboard ($p = 0.0171$). This suggests that when wearing the phone on the chest, users can write in the correct locations more accurately, whether the paper is placed on the desk or on the clipboard.

Select participant excerpts regarding (1) $\text{Wearable}_{\text{head}}$: "Looking downward is not comfortable and this task requires a lot of looking down." and (2) $\text{Wearable}_{\text{chest}}$: "Around the neck is more comfortable and you can focus on the direction of the paper and hand." Overall, the chest location was better suited for the wearable and was adopted in the design of PaperPal.

Writing Surface: Unsurprisingly, all participants deemed writing on the desk easier than on the clipboard. However, pairwise comparisons did not show any statistically significant difference in accuracy or form fill-out time when only the paper placement variable changed from desk to clipboard. During the discussion, all participants mentioned real-life situations where they had to use clipboards.

Navigational Differences: On desks, we observed that the user moved the signature guide with one hand while using the other hand to feel the edge of the paper as a means to get a sense of the relative orientation of the signature guide w.r.t. the orientation of the paper. Such behaviour was also reported in [28]. When holding the clipboard, on the other hand, participants could not feel the paper's edge in the same way as they did on flat desks. In fact, we observed that participants had difficulty moving the signature guide along a trajectory aligned with the paper's orientation. Despite this, the wizard was able to adapt the instructions to lead the participant to the target fields.

Reflective Mirror: There was a lot of variability in how participants held the clipboards, so there is no single angle for attaching the mirror to the holder that can cover all these variations. A wide-angle camera increases the likelihood of the paper staying in its field of view, but this would require a large reflective mirror, which is not practical. Thus, for a wearable on the chest, a smartphone holder without a reflective mirror, whose orientation can be adjusted to position the camera, is desirable.

Signature Guide: When using the clipboard, participants had to hold the clipboard throughout the interaction. Writing with the signature guide, however, requires both hands. This became a difficult juggling act for the participants, and the difficulty is reflected in the feedback of all the participants (e.g., P5 mentioned "specially you have to pickup your pen while using signature guide"). Hence, using signature guides with clipboards is not an option.

## 4 THE PAPERPAL WEARABLE ASSISTANT

### 4.1 Design of 3D Printed Phone Holder

Informed by our pilot study, we designed a 3D-printed holder to convert an off-the-shelf smartphone (iPhone 8+) into a hands-free wearable on the chest. The holder design had to meet these requirements: (a) Support one-handed tilting of the phone to different angles so that differences in how users hold the clipboard can be accommodated.
They hold the clipboard with one hand and adjust the angle of the phone with the other to capture the paper in the camera's field of view; (b) Use a minimal number of component pieces that are compact enough to fit in a pocket; and (c) Be easy to assemble and disassemble.

![01963e9a-1992-7f06-82fd-faae739533cb_3_1075_156_432_440_0.jpg](images/01963e9a-1992-7f06-82fd-faae739533cb_3_1075_156_432_440_0.jpg)

Figure 3: The iPhone holder. The L-shaped half fork can be attached to the phone case by rotating the screw inside the threaded bearing. When tightened, the L-shaped half fork can rotate.

![01963e9a-1992-7f06-82fd-faae739533cb_3_969_714_628_210_0.jpg](images/01963e9a-1992-7f06-82fd-faae739533cb_3_969_714_628_210_0.jpg)

Figure 4: PaperPal's interaction automaton.

These requirements led to the design of the adjustable iPhone holder shown in figure 3, which went through several rounds of experimentation. The design consists of two pieces. The first is a phone case with a small threaded bearing on the side. The second is an L-shaped half fork that facilitates tilting of the phone; it is attached to the phone case by rotating its screw into the threaded bearing and can be rotated 360 degrees. A lanyard is attached to the lifting lug so that the holder and its phone can be worn around the neck, and the user can change the length of the lanyard ribbon. The user rotates the L-shaped fork to adjust the tilt angle of the wearable phone. The tilt angle can also be set to support upright placement of the phone on a desk, so the holder operates both as a wearable and as a stationary stand for writing on a flat desk.

### 4.2 PaperPal: An Operational Overview

The PaperPal system runs as an iPhone app.
The user interacts with the items on the paper, namely text segments, form fields, and their labels, by moving the pen over the paper like a pointer and making gestures with the pen. Two types of gestures are used: (1) a unidirectional rotate, a left or right rotation of the pen around its longitudinal axis, and (2) a bidirectional rotate, made up of two consecutive rotations in opposite directions.

PaperPal's response to the user's pen movements and pen gestures is governed by the two-state interaction automaton shown in Figure 4.

The application starts in the "select item and read" state, which is inspired by the smartphone screen reader interface. In this state the user can move the pen like a pointer to simultaneously select an item and hear an audio readout of the item associated with the location pointed to by the pen. This interaction is analogous to "touch exploration" on the smartphone screen reader.

The unidirectional rotate left (right) selects the previous (next) item on the document, and its content is read aloud. This interaction is analogous to "swiping" on the smartphone screen reader.

The bidirectional rotate switches between the two states of the application, namely "select item and read" and "navigate to item and write".

The "navigate to item and write" state handles two situations: (1) if the selected item is a text segment, it reads aloud the item's text content; (2) if the selected item is a form field, it reads aloud its label and generates navigational instructions to direct the user's pen to the location of the field. No readouts of any intermediate items take place while a user is being navigated to a form field. Upon reaching the field, it reads out the label of the form field once more to refresh the user's memory and directs the user to write in the field, alerting the user when the pen strays out of the field and giving instructions on how to move the pen back into the field and continue writing.
In this state the user can do a bidirectional rotate at any time to move back to the "select item and read" state, or continue in the current state and move on to other form fields or other items via unidirectional rotate gestures.

### 4.3 PaperPal implementation

PaperPal uses the phone's camera to observe the user's actions. The application is implemented as an iOS app and uses the OpenCV library [6] for real-time video processing. Specifically, PaperPal: (a) tracks the physical location of the pen tip over the paper, and (b) detects pen gestures, namely unidirectional and bidirectional rotates. The pen location and gestures determine the audio responses, namely text readouts and navigation and writing instructions, that are generated in real time. Figure 5 is a high-level workflow of the process.

#### 4.3.1 Visual Markers

To enable accurate tracking of the pen and paper, Aruco markers [50] of known size are used.

The paper tracker is a credit-card-sized rectangular card (85.60 mm × 53.98 mm) wrapped with an Aruco board of 24 markers. It has a narrow diagonal groove that serves as a tangible guide for attaching the paper to the card: the user slides the paper's upper left corner into this groove.

The pen tracker is a cube-shaped pen topper with Aruco markers affixed to each face of the cube. It can be easily attached to any regular ballpoint pen and is resilient to hand occlusions.

#### 4.3.2 Locating Pen Tip on Paper

For each image frame containing the pen and paper, two transformations $H$ and $P$ are estimated: (1) $H$ is a homography that maps each image pixel to its corresponding location in the paper's coordinate system; it is estimated from the paper tracker via the DLT algorithm [34]. (2) $P$ is a projective transformation [34] between any 3D location in the pen tracker's coordinate system and its corresponding 2D image pixel.
$P$ is composed of the intrinsic camera calibration, which is measured once for the camera, and the extrinsic camera calibration, which is estimated from the pen markers with the EPnP method [42].

For a pen that is touching the paper, PaperPal starts with the physical location of the pen tip, which is at a constant offset w.r.t. the pen tracker coordinate system, and applies $P$ followed by $H$ to estimate the pen tip location in the paper's coordinate system.

We only estimate the pen location if the pen tip is close to the paper. To this end, we developed a heuristic based on the observation that in PaperPal, as the pen moves away from the paper it gets closer to the camera: specifically, in the image, the observed size of the pen markers should not be more than twice the size of the paper markers. This criterion was chosen by experimenting with various thresholds and selecting the candidate with the lowest average re-projection error [34]. Furthermore, we remove outlier pen tip locations whose distance from the previously observed pen tip is more than 18 mm on the paper. This threshold was selected based on the fastest pen movements that could be captured using the pen's visual markers.

![01963e9a-1992-7f06-82fd-faae739533cb_4_955_140_656_799_0.jpg](images/01963e9a-1992-7f06-82fd-faae739533cb_4_955_140_656_799_0.jpg)

Figure 6: A: rectilinear path along the paper's coordinate system, B: non-rectilinear path along the paper's coordinate system, and C: the rectilinear path in the rotated paper's coordinate system.

#### 4.3.3 Detecting Pen Gestures

Previous work on pen rolling gestures has shown that when writing on paper, unintended pen rotations occur at high speed and with small angles [16]. We used this insight to require a longer duration for intended rotation gestures and thus distinguish them from unintended ones.
To this end, through experimentation the duration for intended gestures, denoted $T_{\text{gesture}}$, was set to 600 ms. Rotation gestures are performed with the pen tip on or close to the paper surface. To detect rotation gestures we choose a 3D point $R$ w.r.t. the pen coordinate system that is close to the pen tip. We find $R$'s corresponding 2D location $r$ in the paper coordinate system using the same process that was used for estimating the position of the pen tip in Section 4.3 above. To detect a rotational movement around the longitudinal axis of the pen, the angle of $r$ relative to the pen tip is measured for each frame using simple trigonometry. We record the direction of the rotational angle (left/right) between each pair of consecutive frames. If, within a time window of $T_{\text{gesture}}$, the majority (over 90%) of pen rotations are in the same direction (left/right), a rotation gesture in the corresponding direction is detected. A bidirectional rotate involves two quick rotations in opposite directions; it is detected if, within the sliding time window of $T_{\text{gesture}}$, the majority (over 90%) of rotations in one half of the window are in one direction and those in the other half are in the opposite direction.

#### 4.3.4 Generating Audio Responses

PaperPal responds to the user's pen movements by generating four kinds of audio responses: coordination alerts, audio readouts, writing instructions, and navigation instructions.

Coordination alerts: Alerts raised when the paper and/or the pen move out of the camera's field of view. To avoid needless alerts for momentary movements out of the field of view, an alert is raised only when the pen or paper has been out of view for longer than 2 seconds.

Readouts: The textual content of an item selected by the user is read out to the user in audio.
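The majority-vote rotation rule of Section 4.3.3 can be sketched as follows. This is a minimal illustration, not the paper's code: the 600 ms window and the 90% majority threshold follow the description above, while the assumed frame rate, the direction encoding (+1 right, -1 left, 0 no rotation), and the function name are ours.

```python
from typing import List, Optional

FPS = 30                                  # assumed camera frame rate
T_GESTURE_MS = 600                        # gesture duration from the paper
WINDOW = int(T_GESTURE_MS / 1000 * FPS)   # frames per sliding window
MAJORITY = 0.90                           # fraction of frames that must agree

def classify_window(directions: List[int]) -> Optional[str]:
    """Classify one T_gesture window of per-frame rotation directions.

    `directions` holds +1 (right), -1 (left), or 0 (no rotation) for each
    consecutive frame pair, most recent frames last.
    """
    if len(directions) < WINDOW:
        return None  # not enough frames observed yet
    w = directions[-WINDOW:]

    # Unidirectional rotate: >90% of the window rotates the same way.
    for sign, name in ((1, "rotate_right"), (-1, "rotate_left")):
        if sum(1 for d in w if d == sign) / WINDOW >= MAJORITY:
            return name

    # Bidirectional rotate: one half mostly one way, the other half opposite.
    half = WINDOW // 2
    first, second = w[:half], w[half:]
    for sign in (1, -1):
        if (sum(1 for d in first if d == sign) / len(first) >= MAJORITY and
                sum(1 for d in second if d == -sign) / len(second) >= MAJORITY):
            return "bidirectional_rotate"
    return None
```

The 90% threshold (rather than a strict unanimity requirement) tolerates a few noisy per-frame direction estimates within an otherwise consistent gesture.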
Table 1: Participant demographics and habits regarding braille, writing, and smartphone applications (SG stands for signature guide).
| ID | Pilot study | Age (Sex) | Diagnosis (Light perception) | Braille usage (Level) | Braille scenarios | Writing usage (Level) | Writing scenarios | SG own (Carry) | Smartphone (Experience) | Smartphone apps for papers |
|----|-------------|-----------|------------------------------|-----------------------|-------------------|-----------------------|-------------------|----------------|-------------------------|----------------------------|
| P1 | yes | 34 (F) | retrograde optic atrophy (yes) | daily (advanced) | papers at work, at the library | daily (beginner) | doctor's office, legal forms, banks | yes (always) | iPhone 10S (advanced) | SeeingAI, KNFB Reader, Voice Dream Reader, Voice Dream Writer |
| P2 | yes | 63 (F) | acute congenital glaucoma (no) | daily (advanced) | taking notes for myself | rarely (beginner) | leaving notes for sighted peers | yes (always) | iPhone 8 (advanced) | Be My Eyes |
| P3 | no | 32 (F) | medical malpractice (no) | daily (beginner) | elevators, remote control | weekly (advanced) | doctor's office, checks | no | iPhone 10 (advanced) | None |
| P4 | no | 55 (M) | retinal detachment (no) | monthly (advanced) | elevators, mails | weekly (beginner) | timesheet signature, checks, legal documents | no | iPhone (advanced) | KNFB Reader |
| P5 | yes | 54 (M) | glaucoma (no) | - | - | weekly (beginner) | shopping receipts, credit card bills | no | iPhone (beginner) | SeeingAI, TapTapSee |
| P6 | yes | 46 (F) | retinitis pigmentosa (yes) | never (beginner) | elevators, mails | monthly (beginner) | legal documents, doctor's office | no | iPhone 8 (advanced) | SeeingAI |
| P7 | no | 38 (M) | optic atrophy, retinitis pigmentosa (yes) | monthly (advanced) | reading documents | daily (advanced) | documents at work, taking notes for sighted peers | yes (often) | iPhone 11 Pro (advanced) | KNFB Reader, SeeingAI, Aira, Voice Dream Scanner |
| P8 | no | 61 (M) | retinitis pigmentosa (yes) | daily (advanced) | elevator, calendar | daily (advanced) | timesheet signature, checks, legal documents | yes (always) | flip phone (beginner) | None |
Writing instructions: While writing is in progress, an instruction to keep the pen within the field is given whenever the estimated pen tip falls outside the rectangular boundary of the field. For example, if the user's pen position has strayed above (below) the field, the user is guided back to the field with a "Move down (up)" instruction.

Navigation instructions: Navigational instructions consist of four basic directives, namely up, down, left, and right. With these four directives the user can be guided to any field on the paper. In [28] a simple navigation algorithm was used to guide the user along a rectilinear path corresponding to the Manhattan distance between the pen and the field, see figure 6 A. Recall that our pilot study revealed that the user's navigational movements in response to the audio instructions can deviate from the intended axes when the paper is placed on a clipboard. Figure 6 B demonstrates how the user's pen tip trajectory can deviate from the expected path along the paper's coordinate system under simple rectilinear navigational instructions.

To address this problem we estimate the deviation angle of the pen tip's trajectory w.r.t. the paper's coordinate system. To compensate for this deviation, we rotate the paper's coordinate system by the same angle but in the opposite direction, so that the pen tip's trajectory is aligned with the transformed axes; see figure 6 C. The navigation directives are generated w.r.t. the transformed axes. Observe that the pen tip trajectory now follows a rectilinear path w.r.t. the transformed axes. To estimate the deviation angle, we use the pen tip's estimated location $t$ on the paper and find the angle between $t$ and the intended axis (the horizontal axis when the navigation instruction is left or right, and the vertical axis when it is up or down). To avoid noise and jitter in the transformed axes, the deviation angle is averaged over a sliding window of one second.
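The axis-rotation compensation above can be sketched as follows. This is a simplified illustration under our own assumptions, not the paper's implementation: the trajectory is reduced to a start point and a current point, the one-second smoothing window is omitted, the paper frame is assumed to have y growing downward (as in image coordinates), and all function names are ours.

```python
import math

def deviation_angle(start, current, axis="horizontal"):
    """Angle (radians) between the observed pen-tip displacement and the
    intended navigation axis, both in the paper's coordinate system."""
    dx, dy = current[0] - start[0], current[1] - start[1]
    trajectory = math.atan2(dy, dx)
    intended = 0.0 if axis == "horizontal" else math.pi / 2
    return trajectory - intended

def to_rotated_axes(point, angle):
    """Express `point` in the paper frame rotated by -angle, so that the
    pen trajectory becomes rectilinear w.r.t. the transformed axes."""
    x, y = point
    c, s = math.cos(-angle), math.sin(-angle)
    return (c * x - s * y, s * x + c * y)

def directive(pen, target, angle):
    """Next audio directive toward `target`, issued in the rotated frame
    (assumes y grows downward, so larger y means "down")."""
    px, py = to_rotated_axes(pen, angle)
    tx, ty = to_rotated_axes(target, angle)
    if abs(tx - px) > abs(ty - py):
        return "right" if tx > px else "left"
    return "down" if ty > py else "up"
```

With a zero deviation angle this reduces to the rectilinear scheme of [28]; a nonzero angle tilts the frame so that the user's actual movement direction is treated as "straight".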
## 5 EVALUATION

We conducted an IRB-approved user study of PaperPal to evaluate its effectiveness as a form-filling assistant for blind people. The study was designed to answer the following questions: (a) How accurately can users fill out forms in terms of writing in the correct location? (b) How long does it take to fill out forms? (c) What is the overall user experience of using PaperPal to fill out paper forms of different sizes, layouts, and texts?

### 5.1 Participants

Ten fully blind participants were recruited. However, two participants could not attend, and the study was conducted with the remaining eight participants, whose ages ranged from 32 to 63 (average = 47.88, std = 12.16; 4 females and 4 males). Note that 4 of the 8 participants were also part of the WoZ pilot study discussed in Section 3. Table 1 shows the demographic data of the participants. Participants were compensated \$50 per hour. All participants were right-handed, and none had any motor impairments that impeded their full participation in the study. All participants except P5 were familiar with braille, and all of them affirmed that they knew how to write on paper. All participants stated that in real life they always asked a sighted peer to fill out forms for them, except for affixing their signatures; for that, they were led by the sighted peer to the signature field, where they would sign by themselves.

### 5.2 Apparatus

The PaperPal application ran on an iPhone 8+. The 3D-printed holder, lanyard, paper tracker, pen tracker, a regular ballpoint pen, and a clipboard were provided to the user, see figure 1. Finally, each participant was given 4 paper forms to fill out.

Forms: The forms were selected to have different properties and to reflect realistic scenarios. Specifically, the forms were:

- (F1): A regular-size check that consists of six fields, namely pay to the order of, date, $, dollars, memo, and signature.
- (F2): A restaurant receipt that consists of three fields, namely tip, total, and signature.

- (F3): A template for a lease agreement that consists of the following eight fields: landlord's first name, landlord's last name, tenant's first name, tenant's last name, landlord's signature, date, tenant's signature, and date.

- (F4): An informed consent form that requires the participant to fill out four fields: full name, date of birth, participant's signature, and date.

Two of the forms (F1 and F2) are quite similar to the ones used in the evaluation of WiYG [28]. Two additional forms were selected to evaluate more complex forms in terms of the number of fields (F3) and text items (F4); see Figure 7. These forms have different paper sizes, orientations, and layouts. Specifically, F1 and F2 are smaller than the standard letter-size pages used for F3 and F4. The fields in F2 are vertically aligned and placed below one another, the F3 fields are placed horizontally in a table-like layout, and F1 and F4 have more complex layouts.

![01963e9a-1992-7f06-82fd-faae739533cb_6_323_147_1149_474_0.jpg](images/01963e9a-1992-7f06-82fd-faae739533cb_6_323_147_1149_474_0.jpg)

Figure 7: The four forms used in the user study. Note that the scale of the images does not represent their relative size (refer to the dimensions in the figure). The participants' handwriting is annotated in blue (brown) when judged correct (incorrect) by human evaluators.

### 5.3 Design

The study was designed as a repeated measures within-subjects study. Each participant was required to fill out each of the 4 forms (4 tasks) with PaperPal in a counterbalanced order using a Latin square [18]. The task completion time was the elapsed duration between the moment the pen was detected over the paper for the first time and the moment the user finished writing in the last form field.
Accuracy was measured as the percentage of overlap between the ground-truth annotated rectangular region of a given form field (a priori) and the rectangle enclosing the participant's written text for that same field; see figure 7.

### 5.4 Procedure

To start with, we draw attention to the circumstances surrounding the study. It was conducted after the gradual re-opening of businesses shut down due to the COVID-19 pandemic. Consequently, the study procedure was adapted to follow CDC-recommended safety measures. Specifically, both the participant and the experimenter wore face masks and kept the recommended social distance from each other. The experimenter therefore relied on verbal communication instead of physical demonstrations to conduct the study.

Each session began with a semi-structured interview to gather demographic data, reading/writing habits, and prior experiences with assistive smartphone apps (≈ 20 minutes); see table 1.

Following this step, the participant was instructed on how to set up the PaperPal apparatus. The participant was asked to pick up each piece of the apparatus while the experimenter provided a verbal description of it, after which the participant assembled the pieces with the experimenter giving step-by-step instructions until the participant was able to attach the paper tracker to the paper, the pen tracker to the pen, and the L-shaped fork to the phone case, clip the paper to the clipboard, and wear the phone around the neck with the lanyard. After that, the experimenter described the user interface. The participant was then asked to practice reading and writing with PaperPal on a set of test forms different from those used in the study. During the practice the experimenter observed the participant's progress and intervened with instructions and explanations as needed. The entire process of assembling and practicing with the application took about an hour.
The participant was next asked to fill out the four forms F1, F2, F3, and F4, each followed by a single ease question. A maximum of 10 minutes per form was allocated. An open-ended discussion with the experimenter took place upon completion of the tasks. The entire study session per participant lasted 2.5 hours, with the experimenter taking notes throughout the video-recorded session.

### 5.5 Results: Task Completion Time

The task completion time is indicative of the efficiency of PaperPal as a form-filling assistant for blind people. On average, the total time spent to fill out forms F1 to F4 was 169.38, 73.91, 229.824, and 164.84 seconds, respectively. The task completion time is divided into: (a) navigation time, the time taken to navigate to the target field; (b) writing time, the time taken by the participant to fill in the field; and (c) coordination time, the time taken by the participant to bring the paper back into the camera's field of view. Figure 8 shows both the navigation time and writing time for each field.

For the coordination time, a repeated measures ANOVA showed a statistically significant difference in the time spent in coordination among the 4 forms, with $F = 7.00$ and $p = 0.002$. Pairwise analysis with a post-hoc Tukey test showed that the F2 coordination time is significantly less than that of F3 and F4 (with $p < 0.01$ for both pairs). This can be explained as follows: working with larger papers such as F3 and F4 increases the likelihood of the paper or the pen straying out of the camera's field of view, which adds to the coordination time compared to a small form like F2, where straying out is less likely.

### 5.6 Results: Assembly Time

The holder assembly time includes the time spent attaching the L-shaped half fork to the phone case and wearing the phone around the neck. All participants were able to assemble the holder, with an average time of 16.55 seconds (std 4.33 seconds).
In addition, the average time to attach the paper tracker card to the top-left corner of an A4 page was 18.75 seconds (std = 6.45 seconds). One difficulty that arose was that most participants were wearing protective gloves, which made it difficult for them to attach the paper to the paper tracker card. + +### 5.7 Results: Form Filling Accuracy + +Overlap Percentage: Recall that the overlap percentage is defined as the percentage of the rectangular bounding box enclosing the participant's writing that falls inside the ground-truth rectangular region of the field; see Figure 7. The average percentage of field overlap for forms F1 to F4 was 61.84%, 66.02%, 87.69%, and 64.23% respectively. + +Out of all the form fields in this study (8 participants x 21 fields = 168), participants attempted to fill out 156. Of these 156 fields, 117 (75%) had an overlap of 50% or higher. Figure 8 (Field Overlap) shows the overlap percentages. + +![01963e9a-1992-7f06-82fd-faae739533cb_7_165_154_694_354_0.jpg](images/01963e9a-1992-7f06-82fd-faae739533cb_7_165_154_694_354_0.jpg) + +Figure 8: left: navigation time and writing time per field. middle: accuracy in terms of the field overlap percentage. right: human assessment of the filled-out fields + +Human assessment: We asked three human evaluators to assess whether each of the form fields was correctly filled out by the participants. The final verdict was rendered via majority voting. The inter-annotator agreement was high (Fleiss' kappa = 0.77). Figure 8 (Human Assessment) shows the percentage of fields that were deemed correct (81.55%), incorrect (10.71%), or missed (7.74%). + +The human assessment of form-filling accuracy for forms F1 to F4 was 88.09%, 69.56%, 96.77%, and 85.71% respectively. Note that human assessment shows higher accuracy compared to the average overlap.
As in [28], this phenomenon arises because human annotators counted a field as acceptable even when the written content was not perfectly inside it. + +### 5.8 Results: PaperPal vs. WiYG + +The standard check (F1) and receipt (F2) forms were similar to the forms used in WiYG's user study, which also used a check and a receipt of similar size and layout with an identical set of fields. Although differences in the setup and study participants preclude a rigorous comparison of the performance of PaperPal and WiYG, we can still get a sense of the differences in their performance via the informal comparison shown in Table 2. This comparison suggests that, in spite of the complexities of writing on a non-stationary clipboard with a wearable, users could fill out forms in a shorter time without compromising accuracy with PaperPal. + +### 5.9 Results: Subjective Feedback + +We administered a single ease question to each participant to rate the difficulty of assembling the holder and completing each form, on a scale of 1 to 7, with 1 being very difficult and 7 being very easy. The median rating for holder assembly was 7, which suggests that the assembly process was viewed as easy. The median ratings for the forms were F1: 6, F2: 5, F3: 4, F4: 3.5. In the open-ended discussion, participants mentioned that filling out forms that had long text content (such as F4) or more fields was more difficult. + +All participants liked the fact that PaperPal lets them work with paper documents independently - quoting P5: "I like the ability to fill out my own forms and checks". All of them mentioned that PaperPal fills an unmet need for preserving privacy when filling out forms and affixing signatures. They all appreciated the integrated reading and writing feature in PaperPal, as they got to hear what was in the form they were filling out prior to affixing their signatures.
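The inter-annotator agreement statistic used in Section 5.7 (Fleiss' kappa over three evaluators and the verdict categories correct/incorrect/missed) can be computed directly from per-field verdict counts. Below is a minimal sketch; the sample verdicts are hypothetical illustrations, not the study's actual data:

```python
from collections import Counter

def fleiss_kappa(ratings, categories):
    """Fleiss' kappa. `ratings` is a list of per-item label lists,
    one label per annotator; every item has the same number of raters."""
    n = len(ratings[0])                      # raters per item
    counts = [Counter(item) for item in ratings]
    # per-item observed agreement P_i = (sum_k n_ik^2 - n) / (n (n - 1))
    P = [(sum(c[k] ** 2 for k in categories) - n) / (n * (n - 1)) for c in counts]
    P_bar = sum(P) / len(ratings)
    # chance agreement from marginal category proportions
    total = n * len(ratings)
    p = [sum(c[k] for c in counts) / total for k in categories]
    P_e = sum(x ** 2 for x in p)
    return (P_bar - P_e) / (1 - P_e)

# hypothetical verdicts from three evaluators on five form fields
items = [
    ["correct", "correct", "correct"],
    ["correct", "correct", "incorrect"],
    ["missed", "missed", "missed"],
    ["correct", "incorrect", "incorrect"],
    ["correct", "correct", "correct"],
]
kappa = fleiss_kappa(items, ["correct", "incorrect", "missed"])
```

The majority-vote verdict used in the study corresponds to taking `Counter(item).most_common(1)[0][0]` per field; kappa then quantifies how often the three evaluators actually agreed beyond chance.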
+ +### 5.10 Discussion + +In the user study, participants missed filling 12 out of 168 fields. We observed that most of the missed fields were associated with F3's two date fields, which were next to each other and had the same label, causing ambiguity. The last two fields in F4 were also missed by some participants because the long text gave them the false impression that there were no more fields left in the form. One way to address this in future work is to notify the user a priori of the total number of form fields. + +Table 2: Comparison of PaperPal to WiYG (accuracy: overlap%) + +
| | Standard Check (F1) | | Receipt (F2) | |
| --- | --- | --- | --- | --- |
| | average time (s) | average accuracy (%) | average time (s) | average accuracy (%) |
| WiYG [28] | 249.75 | 64.85 | 91.25 | 63.90 |
| PaperPal | 159.38 | 61.84 | 73.91 | 66.02 |
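The accuracy figures in Table 2 are overlap percentages: the share of the bounding box of the participant's writing that falls inside the ground-truth rectangle of the field (Section 5.3). A minimal sketch of that computation, assuming axis-aligned rectangles given as (x, y, width, height); the example coordinates are hypothetical:

```python
def overlap_percentage(written, field):
    """Percentage of the written-text bounding box that lies inside the
    ground-truth field rectangle. Rectangles are (x, y, w, h) tuples."""
    wx, wy, ww, wh = written
    fx, fy, fw, fh = field
    if ww <= 0 or wh <= 0:
        return 0.0
    # width/height of the intersection (clamped at zero if disjoint)
    ix = max(0.0, min(wx + ww, fx + fw) - max(wx, fx))
    iy = max(0.0, min(wy + wh, fy + fh) - max(wy, fy))
    return 100.0 * (ix * iy) / (ww * wh)

# hypothetical: writing drifts 10 px right and below a 200x40 field at (50, 100)
pct = overlap_percentage(written=(60, 110, 200, 40), field=(50, 100, 200, 40))
```

A field would then count toward the "50% or higher" bucket of Section 5.7 whenever `pct >= 50`.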
+ +All participants mentioned that doing several consecutive rotate gestures required re-adjusting their grip on the pen. Gestures with subtle finger movements, such as finger flicks and taps on the pen, are possible alternatives that can address this problem. This will require computer vision algorithms that detect subtle finger movements and is a topic for future research. + +A unique aspect of PaperPal is that users can simply work with the paper and pen without having to hold any other objects, such as the phone in reading apps [2, 8] or the signature guide used while filling out paper forms in WiYG. Finally, assembly of the 3D-printed attachment, attaching the trackers, and wearing the phone were all done independently by the study participants, affirming that the design of the apparatus associated with PaperPal was highly accessible for blind users. + +The results of the study showed that blind participants were able to fill out forms in a few minutes (ranging from 1 min and 23 sec to 3 mins and 83 sec) with high accuracy, measured in terms of the average overlap percentage, which was more than 60% for all the forms. In addition, the accuracy of the filled-out form fields as judged by humans was as high as 96.77%. + +### 5.11 Future Work + +Use of Markers: PaperPal's accuracy depends on accurate tracking of the pen and paper, which is why tracking is done with visual markers attached to these objects. Eliminating these markers is a challenging open computer vision research problem. + +Document Annotation: While the focus of this paper has been the accessible HCI interface for filling paper forms with a wearable, we envision a front-end to PaperPal consisting of an app like KNFB Reader or Voice Dream Scanner to acquire the image of the form document, which would subsequently be dispatched to the augmented AWS Textract service for automatic annotation of the form elements.
We conducted preliminary experiments with this service. To this end, we took pictures of the four forms used in the study (Section 5) with the Voice Dream Scanner app. These images were rectified using the "image to paper" transformation (Section 4.3), and the rectified images were processed by AWS Textract augmented with a human-in-the-loop workflow. Out of the 21 fields in the 4 forms, 17 fields and their labels were detected correctly by Textract, and 2 were erroneously recognized and were marked as such through intervention via the human-in-the-loop workflow. Integration of this process into PaperPal and its end-to-end evaluation is a topic of future work. + +## 6 CONCLUSION + +PaperPal is a wearable reading and form-filling assistant for blind people. Wearability is achieved by transforming a smartphone, specifically an iPhone 8+, into a wearable worn around the chest with a 3D-printed phone holder that can adjust the phone's viewing angle. PaperPal operates on both stationary flat tables and non-stationary portable clipboards. A preliminary study with blind participants demonstrated the feasibility and promise of PaperPal: blind users could fill out form fields at the correct locations with an accuracy reaching 96.7%. PaperPal has the potential to enhance their independence at home, at work, at school, and on the go. + +## REFERENCES + +[1] Be My Eyes: Bringing sight to blind and low vision people, 2018. + +[2] KNFB Reader, 2018. + +[3] Kurzweil 1000 for Windows, 2018. + +[4] Data exchange in the era of digital transformation, 2019. + +[5] Blind voters are suing North Carolina and Texas, arguing that mail ballots are discriminatory, 2020. + +[6] OpenCV: Open source computer vision library, 2020. + +[7] OrCam, 2020. + +[8] Seeing AI, 2020. + +[9] AWS Textract form extraction documentation, 2021. + +[10] Adobe. Work with automatic field detection, 2021. + +[11] Aira. Aira, 2018. + +[12] Apple. Recognizing text in images, 2020. + +[13] Apple.
Vision: apply computer vision algorithms to perform a variety of tasks on input images and video, 2020. + +[14] AppleVis. Text Detective, 2018. + +[15] J. Balata, Z. Mikovec, and L. Neoproud. BlindCamera: Central and golden-ratio composition for blind photographers. In Proceedings of the Multimedia, Interaction, Design and Innovation, pp. 1-8. 2015. + +[16] X. Bi, T. Moscovich, G. Ramos, R. Balakrishnan, and K. Hinckley. An exploration of pen rolling for pen-based interaction. In Proceedings of the 21st annual ACM symposium on User interface software and technology, pp. 191-200, 2008. + +[17] S. M. Billah, V. Ashok, and I. Ramakrishnan. Write-it-yourself with the aid of smartwatches: A wizard-of-oz experiment with blind people. In 23rd International Conference on Intelligent User Interfaces, IUI '18, pp. 427-431. ACM, New York, NY, USA, 2018. doi: 10.1145/3172944.3173005 + +[18] J. V. Bradley. Complete counterbalancing of immediate sequential effects in a latin square design. Journal of the American Statistical Association, 53(282):525-528, 1958. + +[19] C. Brown and A. Hurst. VizTouch: Automatically generated tactile visualizations of coordinate spaces. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, TEI '12, p. 131-138. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2148131.2148160 + +[20] E. Buehler, S. Branham, A. Ali, J. J. Chang, M. K. Hofmann, A. Hurst, and S. K. Kane. Sharing is caring: Assistive technology designs on Thingiverse. CHI '15, p. 525-534. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702525 + +[21] E. Buehler, N. Comrie, M. Hofmann, S. McDonald, and A. Hurst. Investigating the implications of 3d printing in special education. ACM Trans. Access. Comput., 8(3), Mar. 2016. doi: 10.1145/2870640 + +[22] E. Buehler, A. Hurst, and M. Hofmann. Coming to grips: 3d printing for accessibility.
In Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility, pp. 291-292, 2014. + +[23] E. Buehler, S. K. Kane, and A. Hurst. ABC and 3D: Opportunities and obstacles to 3d printing in special education environments. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS '14, pp. 107-114. ACM, New York, NY, USA, 2014. doi: 10.1145/2661334.2661365 + +[24] E. Buehler, S. K. Kane, and A. Hurst. ABC and 3D: Opportunities and obstacles to 3d printing in special education environments. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS '14, p. 107-114. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2661334.2661365 + +[25] L. Cavazos Quero, J. Iranzo Bartolomé, S. Lee, E. Han, S. Kim, and J. Cho. An interactive multimodal guide to improve art accessibility for blind people. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 346-348, 2018. + +[26] V. Dream. Voice Dream Scanner, 2020. + +[27] M. Elgendy, C. Sik-Lanyi, and A. Kelemen. Making shopping easy for people with visual impairment using mobile assistive technologies. Applied Sciences, 9(6):1061, 2019. + +[28] S. Feiz, S. M. Billah, V. Ashok, R. Shilkrot, and I. Ramakrishnan. Towards enabling blind people to independently write on printed forms. In the ACM Conference on Human Factors in Computing Systems (CHI'19), 2019. + +[29] P. Forczmański, A. Smoliński, A. Nowosielski, and K. Małecki. Segmentation of scanned documents using deep-learning approach. In International Conference on Computer Recognition Systems, pp. 141-152. Springer, 2019. + +[30] V. Gaudissart, S. Ferreira, C. Thillou, and B. Gosselin. Sypole: mobile reading assistant for blind people. In 9th Conference Speech and Computer, 2004. + +[31] T. Götzelmann, L. Branz, C. Heidenreich, and M. Otto.
A personal computer-based approach for 3d printing accessible to blind people. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments, pp. 1-4, 2017. + +[32] A. Guo, X. Chen, H. Qi, S. White, S. Ghosh, C. Asakawa, and J. P. Bigham. VizLens: A robust and interactive screen reader for interfaces in the real world. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 651-664, 2016. + +[33] A. Guo, J. Kim, X. A. Chen, T. Yeh, S. E. Hudson, J. Mankoff, and J. P. Bigham. Facade: Auto-generating tactile interfaces to appliances. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 5826-5838. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025845 + +[34] R. Hartley and A. Zisserman. Multiple view geometry in computer vision. Cambridge University Press, 2003. + +[35] M. Hofmann, J. Harris, S. E. Hudson, and J. Mankoff. Helping hands: Requirements for a prototyping methodology for upper-limb prosthetics users. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 1769-1780. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858340 + +[36] J. Hook, S. Verbaan, A. Durrant, P. Olivier, and P. Wright. A study of the challenges related to diy assistive technology in the context of children with disabilities. In Proceedings of the 2014 Conference on Designing Interactive Systems, DIS '14, pp. 597-606. ACM, New York, NY, USA, 2014. doi: 10.1145/2598510.2598530 + +[37] M. Hu. Exploring new paradigms for accessible 3d printed graphs. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, pp. 365-366, 2015. + +[38] A. Hurst and J. Tobias. Empowering individuals with do-it-yourself assistive technology.
In The Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '11, pp. 11-18. ACM, New York, NY, USA, 2011. doi: 10.1145/2049536.2049541 + +[39] S. K. Kane and J. P. Bigham. Tracking@stemxcomet: teaching programming to blind students via 3d printing, crisis management, and twitter. In Proceedings of the 45th ACM technical symposium on Computer science education, pp. 247-252, 2014. + +[40] J. Kim and T. Yeh. Toward 3d-printed movable tactile pictures for children with visual impairments. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, p. 2815-2824. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702144 + +[41] B. Leduc-Mills, J. Dec, and J. Schimmel. Evaluating accessibility in fabrication tools for children. In Proceedings of the 12th International Conference on Interaction Design and Children, pp. 617-620, 2013. + +[42] V. Lepetit, F. Moreno-Noguer, and P. Fua. EPnP: Efficient perspective-n-point camera pose estimation. International Journal of Computer Vision, 81(2):155-166, 2009. + +[43] J. Lim, Y. Yoo, H. Cho, and S. Choi. TouchPhoto: Enabling independent picture taking and understanding for visually-impaired users. In 2019 International Conference on Multimodal Interaction, pp. 124-134. ACM, 2019. + +[44] Z. Lv. Wearable smartphone: Wearable hybrid framework for hand and foot gesture interaction on smartphone. In Proceedings of the IEEE international conference on computer vision workshops, pp. 436-443, 2013. + +[45] R. Manduchi. Mobile vision as assistive technology for the blind: An experimental study. In International Conference on Computers for Handicapped Persons, pp. 9-16. Springer, 2012. + +[46] S. McDonald, J. Dutterer, A. Abdolrahmani, S. K. Kane, and A. Hurst. Tactile aids for visually impaired graphical design education.
In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS '14, p. 275-276. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2661334.2661392 + +[47] R. Neto and N. Fonseca. Camera reading for blind people. Procedia Technology, 16:1200-1209, 2014. + +[48] NFB. Statistical facts about blindness in the United States, 2016. + +[49] H. Nguyen, T. Nguyen, and J. Freire. Learning to extract form labels. Proceedings of the VLDB Endowment, 1(1):684-694, 2008. + +[50] F. J. Romero-Ramirez, R. Muñoz-Salinas, and R. Medina-Carnicer. Speeded up detection of squared fiducial markers. Image and Vision Computing, 2018. + +[51] L. Shi, H. Lawson, Z. Zhang, and S. Azenkot. Designing interactive 3d printed models with teachers of the visually impaired. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2019. + +[52] L. Shi, R. McLachlan, Y. Zhao, and S. Azenkot. Magic touch: Interacting with 3d printed graphics. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 329-330, 2016. + +[53] L. Shi, Y. Zhao, and S. Azenkot. Markit and Talkit: a low-barrier toolkit to augment 3d printed models with audio annotations. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, pp. 493-506, 2017. + +[54] L. Shi, Y. Zhao, and S. Azenkot. Markit and Talkit: A low-barrier toolkit to augment 3d printed models with audio annotations. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST '17, pp. 493-506. ACM, New York, NY, USA, 2017. doi: 10.1145/3126594.3126650 + +[55] L. Shi, Y. Zhao, R. Gonzalez Penuela, E. Kupferstein, and S. Azenkot. Molder: An accessible design tool for tactile maps. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2020. + +[56] R. Shilkrot, J. Huber, W. Meng Ee, P.
Maes, and S. C. Nanayakkara. FingerReader: A wearable device to explore printed text on the go. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 2363-2372. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702421 + +[57] A. Stangl, J. Kim, and T. Yeh. 3d printed tactile picture books for children with visual impairments: A design probe. In Proceedings of the 2014 Conference on Interaction Design and Children, IDC '14, p. 321-324. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2593968.2610482 + +[58] L. Stearns, R. Du, U. Oh, C. Jou, L. Findlater, D. A. Ross, and J. E. Froehlich. Evaluating haptic and auditory directional guidance to assist blind people in reading printed text using finger-mounted cameras. ACM Trans. Access. Comput., 9(1):1:1-1:38, Oct. 2016. doi: 10.1145/2914793 + +[59] S. Swaminathan, T. Roumen, R. Kovacs, D. Stangl, S. Mueller, and P. Baudisch. Linespace: A sensemaking platform for the blind. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 2175-2185. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858245 + +[60] TapTapSee. TapTapSee, 2020. + +[61] U. Uckun, A. S. Aydin, V. Ashok, and I. Ramakrishnan. Breaking the accessibility barrier in non-visual interaction with pdf forms. Proceedings of the ACM on Human-Computer Interaction, 4(EICS):1-16, 2020. + +[62] M. Vázquez and A. Steinfeld. An assisted photography framework to help visually impaired users properly aim a camera. ACM Transactions on Computer-Human Interaction (TOCHI), 21(5):25, 2014. + +[63] Enhanced Vision. DaVinci HD/OCR all-in-one desktop magnifier, 2020. + +[64] P. Wacker, O. Nowak, S. Voelker, and J. Borchers. ARPen: Mid-air object manipulation techniques for a bimanual AR system with pen & smartphone. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019. + +[65] P. Wacker, O.
Nowak, S. Voelker, and J. Borchers. Evaluating menu techniques for handheld AR with a smartphone & mid-air pen. In 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-10, 2020. + +[66] P. Wacker, A. Wagner, S. Voelker, and J. Borchers. Heatmaps, shadows, bubbles, rays: Comparing mid-air pen position visualizations in handheld AR. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-11, 2020. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/l8jScx6ROAh/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/l8jScx6ROAh/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..e3f73d719f83fb82940c463512e99771ffbd717c --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/l8jScx6ROAh/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,320 @@ +§ TOWARDS ENABLING BLIND PEOPLE TO FILL OUT PAPER FORMS WITH A WEARABLE SMARTPHONE ASSISTANT + +Anonymous Authors + +affiliation + +§ ABSTRACT + +We present PaperPal, a wearable smartphone assistant which blind people can use to fill out paper forms independently.
Unique features of PaperPal include: a novel 3D-printed attachment that transforms a conventional smartphone into a wearable device with an adjustable camera angle; the capability to work on both flat stationary tables and portable clipboards; real-time video tracking of the pen and paper, coupled to an interface that generates real-time audio readouts of the form's text content and instructions that guide the user to the form fields; and support for filling out these fields without signature guides. The paper primarily focuses on an essential aspect of PaperPal, namely an accessible design of its wearable elements and the design, implementation and evaluation of a novel user interface for the filling of paper forms by blind people. PaperPal distinguishes itself from a recent smartphone-based form-filling assistant for blind people that requires the smartphone and the paper to be placed on a stationary desk, needs a signature guide for form filling, and has no audio readouts of the form's text content. PaperPal, whose design was informed by a separate Wizard-of-Oz study with blind participants, was evaluated with 8 blind users. Results indicate that they can fill out form fields at the correct locations with an accuracy reaching 96.7%. + +Index Terms: Human-centered computing—Accessibility—Accessibility technologies; Human-centered computing—Human-computer interaction (HCI) + +§ 1 INTRODUCTION + +Paper documents continue to persist in our daily lives, notwithstanding the paperless, digitally connected world we live in. People still encounter paper-based transactions that require reading, writing and signing paper documents. Examples include paper receipts, mail, checks, bank documents, hospital forms and legal agreements. A recent survey shows that over 33% of transactions in organizations are still done with paper documents [4]. Many of these paper documents, at the very least, require affixing signatures on them.
While it is straightforward for sighted people to write and affix their signatures on paper, for people who are blind this is challenging, if not impossible, to do independently. When it comes to writing, blind people invariably rely on sighted people for assistance. Such assistance may not always be readily available; more troublingly, having to depend on others for writing always comes with a loss of privacy. To make matters worse, unlike reading assistants for blind people, of which there are quite a few (e.g., [2, 8]), there are hardly any computer-assisted aids that can help them write on paper independently - a problem that has taken on added significance due to the recent pandemic-driven upsurge in mail-in balloting. In fact, a recent lawsuit was brought by blind plaintiffs over the discriminatory nature of mail-in paper ballots, since these could not be filled out without compromising confidentiality [5]. + +There are two essential aspects to a form-filling assistant for blind people: (1) document annotation, which includes capturing the image of the document with a camera and automatically identifying all its items, namely text segments, form fields and their labels; and (2) the design and implementation of an interface that enables blind people to access and read all the items of the document and fill out the fields independently. + +Figure 1: A blind user filling out a form using PaperPal. An interaction scenario: (A) The pen is pointing to a text item, which is read out, (B) Rotate the pen right to read the next item, (C) Bi-directional rotation to navigate to the form field, (D) Fill out the field. + +Insofar as (1) is concerned, the existence of several smartphone reading apps (e.g., SeeingAI [8], KNFB Reader [2], and Voice Dream Scanner [26]) has established the feasibility of acquiring images of paper documents by blind people using a smartphone.
These apps demonstrate that blind users can independently use an audio interface to capture the image of a document. The aforementioned apps also extract text segments from the captured images using OCR and document segmentation methods, which are then read out to the user. As far as forms are concerned, it is possible to extract form fields and their labels from document images using extant vision-based systems such as Adobe's [10] and AWS Textract [9]. In contrast, the HCI aspects of an interface for a form-filling assistant for blind people are a challenging and relatively understudied research problem, and are the primary focus of this paper. + +Of late, research on the HCI aspects of writing aids for blind people is beginning to emerge. A recent work describes a first-of-its-kind smartphone-based writing aid, called WiYG, for assisting blind people to fill out paper forms by themselves [28]. WiYG uses a 3D-printed attachment to keep the phone upright on a flat table and redirects the focus of the phone's camera to the document placed in front of the phone. The paper and phone in WiYG are kept stationary; the user receives audio instructions to slide the signature guide - a card similar in size to a regular credit card, with a rectangular opening in the middle that helps blind people sign on paper - to different form fields on the paper. All the form fields are manually annotated a priori. In addition, visual markers are affixed to the signature guide so that its location can be tracked with the camera.
Secondly, WiYG simply steps through each form field in the document one by one without backtracking. In practice one would like to seamlessly switch back and forth between the fields and fill them in any order. Thirdly, WiYG requires a flat table to keep the paper as well as the phone stationary during use. The ability to operate in different situational contexts such as documents on non-stationary portable surfaces such as clipboards makes for a more flexible computer-aided reading/writing wearable assistant. In fact, often times blind users find themselves in situations where the documents they are asked to review and sign such as forms at hospitals and doctor's offices are on clipboards. + +To explore these questions we employed a user-centered design approach. We started with a Wizard of Oz (WoZ) pilot study with eight blind participants to understand the feasibility of filling paper forms on a clipboard. The study included paper forms placed on both flat desks and portable clipboards with the wearable cameras worn over the chest or attached to glasses, to mimic smart glasses. The study was designed to elicit data on several key questions including: (1) How do blind people write on paper attached to a portable clipboard? (2) Where can the camera be worn conveniently and in a way that the pen and paper are visible within the camera's field of view? (3) Considering all the camera-clipboard movements, how can blind people coordinate the clipboard and the wearable camera to maintain the pen and paper inside the camera's field of view while writing? The study was also intended to elicit user feedback and gather design requirements. The findings from the WoZ study informed the design of PaperPal, a wearable smartphone assistant for non-visual interaction with paper forms in more general scenarios than only a stationary desk, such as portable clipboards. + +There are several unique aspects to the design of PaperPal. 
First, its novel 3D-printed attachment transforms a conventional smartphone into a wearable device with a mechanism to adjust the camera angle with one hand. Second, PaperPal is flexible in where it can be used: on stationary tables as well as on non-stationary surfaces, specifically portable clipboards. Third, PaperPal enables users to write without having to use signature guides - a key requirement that emerged from the WoZ study. Fourth, PaperPal leverages real-time video processing techniques to track the paper and pen and accordingly provides appropriate audio feedback. Lastly, reading and writing are tightly integrated in PaperPal, with users being able to easily switch between them while accessing different items on the document. Our evaluation with 8 blind users showed that PaperPal could successfully assist people who are blind in filling out various paper forms, such as bank checks, restaurant receipts, lease agreements, and informed consent forms. They independently filled out these forms with an accuracy reaching 96.7%. We summarize our contributions as follows: + + * The results of a Wizard-of-Oz study with blind participants to uncover requirements for independently interacting with paper forms in portable settings. + + * The design of a novel 3D attachment that can turn a smartphone into a wearable with an adjustable camera angle. This can also be used for other wearable vision-based applications that require adjustment of the camera angle. + + * The design and implementation of PaperPal, a new smartphone application, to assist blind users in independently reading and filling out paper forms both on flat tables and on portable clipboards. + + * The results of a user study with blind participants to assess the efficacy of PaperPal in filling out various paper forms. + +Following WiYG [28], we also assume annotated paper forms.
As mentioned earlier, there exist smartphone applications and known techniques for document image capture by blind people and for automatic annotation. While the annotation problem is orthogonal to the design and implementation of the user interface explored in this paper, in Section 5.11 we describe our experiences with automated annotation of paper forms and discuss its envisioned integration into PaperPal to realize a fully automated paper-form-filling assistant. + +§ 2 RELATED WORK + +The research underlying PaperPal has broad connections to assistive technologies for reading and writing on paper documents, particularly for blind people, 3D-printed artifacts, and image acquisition and processing in accessibility. What follows is a review of existing research on these broad topics. + +Reading and Writing: For well over a century, Braille has been the standard assistive tool for reading and writing for blind people. It is a tactile system made up of raised dots that encode characters. The use of Braille has been declining in the computing era, which ushered in a major paradigm shift to digital assistive technologies [48]. Examples of digital technologies for reading printed documents include some CCTVs [63] and the Kurzweil scanner [3], which reads off the text in scanned documents. + +The smartphone revolution has witnessed a surge in mobile reading aids. Notable examples include the KNFB Reader [2], SeeingAI [8], Voice Dream Scanner [26], Text Detective [14], and TapTapSee [60]. The smartphone-based solutions (e.g., [47]), as well as other hand-held solutions (e.g., SYPOLE [30]), require the user to position the camera so that the document is in its field of view. In recent years, wearable reading aids have been emerging (e.g., FingerReader [56], HandSight [58], and OrCam [7]). Although finger-centric wearables such as [56, 58] do not require positioning of the camera, their drawback is that they interfere with writing.
Reading paper documents using crowd-sourced services is another option for blind people (e.g., Be My Eyes [1] and Aira [11]). These have the obvious drawback of lacking privacy. + 

In contrast to reading aids, research on assistive writing on physical paper is at a nascent stage. A Wizard of Oz study to explore the kinds of audio-haptic signals that would be useful for navigation on a paper form was reported in [17]. In this study, the form was placed on a flat table and the wizard generated the audio-haptic signals that were received on a smartwatch worn by the participant. + 

A recent paper describes WiYG, a smartphone-based assistant for blind people to fill out paper forms [28]. In WiYG the user places the phone on a stationary table in an upright position using a 3D-printed attachment. The paper form is placed on the desk in front of the smartphone. The user slides the signature guide over the paper form to each form field, guided by audio instructions provided by the smartphone app. To write into the form field the user uses both hands, one to keep the signature guide in place over the form field and the other to write into it with the pen. As mentioned earlier in Section 1, WiYG provides no readouts of the text, simply steps through each form field, and can only be used with a flat table where both the paper and the phone are kept stationary. The PaperPal system described in this paper integrates both reading of the document's text and writing in the form fields. It has the capability to operate on both stationary tables and portable clipboards. + 

3D Printing in Assistive Technologies: The increasing availability of 3D printers has expanded the potential for rapid 3D printing of assistive technology artifacts $\left\lbrack {{20},{38}}\right\rbrack$ .
$\left\lbrack {31}\right\rbrack$ shows that it is feasible for blind users to do 3D printing of models by themselves and [23] lists organizations that use 3D printing tools to serve people with disabilities. Other examples of 3D printing applications are custom 3D printed assistive artifacts $\left\lbrack {{22},{35}}\right\rbrack$ , 3D printed markers attached to appliances [33], and applications in accessibility of educational content $\left\lbrack {{19},{21},{24},{37}}\right\rbrack$ , graphical design [46], and learning programming languages [39]. 3D printing is also used to convey visual content [59], art [25], and map information [55] to blind people. Interactive 3D printed objects are yet another way 3D printing is utilized for accessibility [51-53]. Other examples of 3D printing include generating tactile children's books $\left\lbrack {{40},{57}}\right\rbrack$ to promote literacy in children. In addition, [41] studies how children with disabilities can use 3D printing. In [36] it is mentioned that children with disabilities can also utilize 3D printing in the context of DIY projects. 3D printing is also used to augment already existing technologies (e.g., making smartphones wearable [44]). In this paper we utilize 3D printing to design a phone case and a pocketable attachment to turn a smartphone into a wearable that allows the camera's angle to be adjusted. + 

Image Acquisition and Processing in Accessibility: Accessible image acquisition tools such as $\left\lbrack {{15},{43},{62}}\right\rbrack$ instruct blind users to position the camera at the correct angle and distance from the target for capturing an image. The work in [32] illustrates the practical deployment of such tools in an assistive technology for image acquisition by blind people.
In terms of capturing images of paper documents, assistive reading apps, namely, SeeingAI [8], KNFB reader [2], and Voice Dream Scanner [26], demonstrate that blind people can independently use the apps' interface to direct the smartphone camera at the paper document and capture its image. + 

The post-processing of the document image is a well-established research topic and can range from local OCR processing [12] to other computer vision methods such as document segmentation $\left\lbrack {{13},{29}}\right\rbrack$ to form labeling techniques $\left\lbrack {9,{10},{49},{61}}\right\rbrack$ . + 

Another topic related to camera-based assistive technologies is the use of visual markers for tracking objects in the environment. For example, in [27] different types of visual tracking methods are studied to make shopping easy for blind people. The work in [45] studies color-coded markers for use in a wayfinding application for blind people. Visual markers are especially beneficial when computer vision methods do not provide satisfactory accuracy. Examples of assistive technologies that utilize visual markers are $\left\lbrack {{28},{54},{55}}\right\rbrack$ . In PaperPal we also use visual markers to track the tip of the pen and the paper. To track the latter, PaperPal uses visual markers similar to the ones used for tracking the signature guide in [28]. To track the pen, PaperPal uses visual markers attached to a 3D printed pen topper, inspired by previous work on pen tracking that also uses visual markers [21,64-66]. + 

§ 3 A WIZARD OF OZ PILOT STUDY + 

To the best of our knowledge, there is no previous research on how blind people write on paper documents attached to non-stationary surfaces, namely, portable clipboards. To this end, we conducted a pilot study to assess the feasibility of an assistive tool that uses a wearable device for filling out paper forms attached to clipboards.
The study explored these specific questions: (1) How do blind people write on non-stationary surfaces like clipboards with a wearable? (2) How do blind people coordinate their hand and body movements to keep the pen and paper within the camera's field of view? (3) What is the most suitable on-body location for a wearable camera between the two proposed locations, the head and the chest? In addition, the study was also intended to gather requirements for the wearable camera attachment. Details of the study follow. + 

§ 3.1 PARTICIPANTS + 

Eight (8) participants (3 males, 5 females) whose ages ranged from 35 to 77 (average age 50) were recruited for the pilot study. All participants were completely blind; all knew how to write on paper; none had any motor impairments that would have affected their full participation in the study. + 

§ 3.2 APPARATUS + 

The study used a standard ballpoint pen, a portable clipboard, and a credit-card-sized signature guide. Each form, printed on standard letter-sized paper, had 5 randomly placed equal-sized fields, with the same distance between consecutive fields. The wizard used a Nexus phone to send instructions to an iPhone 8+. Participants wore the iPhone 8+ at two on-body locations using two holders. The first holder had the iPhone attached to ski goggles. Participants wore this on the head akin to smart glasses - ${\text{Wearable}}_{\text{head}}$ (figure 2 B). Using the second holder, the participants wore a lanyard around their necks with the phone resting on their chests. The holder had a reflective mirror to redirect the camera's field of view and served as the wearable on the chest - ${\text{Wearable}}_{\text{chest}}$ (figure 2 A, C). + 

Figure 2: The apparatus for the pilot study. A: The phone case with a reflective mirror in front of the camera, to be worn on the chest using the lanyard in C. B: Ski goggles for wearing the phone as glasses. D: Paper with Aruco markers where users mark 'x' in the numbered form fields.
 + 

§ 3.3 STUDY DESIGN + 

In this within-subjects study every participant filled out a total of 4 random forms corresponding to four different conditions, namely, <${\text{Wearable}}_{\text{head}}$, form on desk>, <${\text{Wearable}}_{\text{head}}$, form on clipboard>, <${\text{Wearable}}_{\text{chest}}$, form on desk>, and <${\text{Wearable}}_{\text{chest}}$, form on clipboard>. + 

A total of 32 forms were filled out (8 participants × 4 conditions). The order of the four form-filling tasks was randomized to minimize the learning effect. The wizard app was used by the experimenter (i.e., the wizard) to manually direct the participant to the form fields by sending directional audio instructions such as "move left", "move right", and so on. The participant's phone had an app to track the paper based on the markers that were printed on the paper (see figure 2 D). The participant's phone was also instrumented to gather study data throughout the duration of each study session, which was also video recorded. + 

The accuracy was measured as the percentage of overlap between the annotated rectangular region of a given form field (a priori) and the rectangle enclosing the participant's written text in that field (annotated after the study) [28]. + 

§ 3.4 PROCEDURE + 

Each form-filling task began with the wizard directing the participant to go to the first form field. We regarded this as the initialization phase and discounted it from our measurements so as to exclude any confounding variables that might arise due to starting off from a random position. Upon reaching the form field the participant would initial the field with an "×" using the signature guide. If at any time during this process the paper disappeared from the camera's field of view, the iPhone app would raise a 'paper not visible' audio alert. In response the participant would make adjustments by shifting the paper or the wearable to bring it back into focus, which was acknowledged by a 'paper is visible' shout-out from the app.
The participant received the navigational instructions only when the paper was visible to the camera. The experimenter monitored the participant's navigational progress and sent audio directions in real time to guide the user's signature guide to each form field. At the conclusion of the session, users would compare and contrast writing on the desk vs. clipboard, the wearable's location on the chest vs. the head, and other experiences, in an open-ended discussion. + 

§ 3.5 KEY TAKEAWAYS + 

Chest vs. Head location for the Wearable: 6 out of 8 participants preferred the chest wearable, one participant preferred the head location, and one participant had no preference. With the camera on the head, the paper went out of focus far more often - by a factor of 4 - than when it was worn on the chest. + 

The differences in percentage of overlap among the 4 conditions were found to be statistically significant (repeated measures ANOVA, ${F}_{3,{124}} = {5.32},p = {0.002}$ ). Pairwise comparisons with a post-hoc Tukey test showed that under the "head, clipboard" condition the field overlaps were significantly less than when the user wore the phone on the chest and wrote either on the desk $\left( {p = {0.0102}}\right)$ or the clipboard $\left( {p = {0.0171}}\right)$ . This suggests that when wearing the phone on the chest, users can better write at the correct location for paper placed on either the desk or the clipboard. + 

These are excerpts from select participants regarding the (1) ${\text{Wearable}}_{\text{head}}$ : "Looking downward is not comfortable and this task requires a lot of looking down." and (2) ${\text{Wearable}}_{\text{chest}}$ : "Around the neck is more comfortable and you can focus on the direction of the paper and hand.". Overall, the chest location was better suited for placement of the wearable, and was adopted in the design of PaperPal. + 

Writing Surface: Unsurprisingly, all participants deemed writing on the desk easier than on the clipboard.
However, pairwise comparisons did not show any statistically significant difference in the accuracy or form fill-out time when only the paper placement variable changed from desk to clipboard. During the discussion, all participants mentioned real-life situations where they had to use clipboards. + 

Navigational Differences: On desks, we observed that the user moved the signature guide with one hand while using the other hand to feel the edge of the paper as a means to get a sense of the relative orientation of the signature guide w.r.t. the orientation of the paper. Such a behaviour was also reported in [28]. On the other hand, when holding the clipboard, participants could not feel the paper's edge in the same way as they did on flat desks. In fact, we observed that participants had difficulty moving the signature guide on a trajectory aligned with the paper's orientation. Despite this, the wizard was able to adapt the instructions to lead the participant to the target fields. + 

Reflective Mirror: There was a lot of variability in how participants held the clipboards. This means there is no one perfect angle for attaching the mirror to the holder that can cover all these variations. A wide-angle camera increases the likelihood of the paper staying in its field of view. However, this would require the use of a large reflective mirror, which is not practical. Thus, a smartphone holder without a reflective mirror, whose orientation can be adjusted to position the camera, is desirable for a wearable on the chest. + 

Signature Guide: When using the clipboard participants had to hold the clipboard throughout the interaction process. On the other hand, writing with the signature guide requires both hands. This became a difficult juggling act for the participants. These difficulties are reflected in the feedback of all the participants (e.g., P5 mentioned "specially you have to pickup your pen while using signature guide").
Hence using signature guides with clipboards is not an option. + 

§ 4 THE PAPERPAL WEARABLE ASSISTANT + 

§ 4.1 DESIGN OF 3D PRINTED PHONE HOLDER + 

Informed by our pilot study, we designed a 3D-printed holder to convert an off-the-shelf smartphone (iPhone 8+) into a hands-free wearable on the chest. In addition, the holder design had to meet these requirements: (a) Support one-handed tilting of the phone to different angles so that differences in how the clipboard is held by different users can be accommodated. Users will hold the clipboard with one hand and appropriately adjust the angle of the phone with the other hand to capture the paper in the camera's field of view; (b) Use a minimal number of component pieces that are compact enough to fit in a pocket; and (c) Ease of assembly/disassembly. + 

Figure 3: The iPhone holder. The L-shaped half fork can be attached to the phone case by rotating the screw inside the threaded bearing. When tightened, the L-shaped half fork can rotate. + 

Figure 4: PaperPal's interaction automaton. + 

These requirements led to the design of the adjustable iPhone holder shown in figure 3, which went through several rounds of experimentation. This design comprises two pieces: The first piece is a phone case that has a small threaded bearing on the side. The second piece is an L-shaped half fork which facilitates tilting of the phone. This piece can be attached to the phone case by rotating the screw into the threaded bearing and can be rotated 360 degrees. A lanyard is attached to the lifting lug to wear the holder with its phone around the neck. The user can change the lanyard ribbon's length. This L-shaped fork can be rotated by the user to adjust the tilt angle of the wearable phone. Furthermore, the tilt angle can be adjusted so that it can support upright placement of the phone on a desk.
Thus it can operate both as a wearable and as a stationary holder for writing on a flat desk. + 

§ 4.2 PAPERPAL: AN OPERATIONAL OVERVIEW + 

The PaperPal system runs as an iPhone app. The user interacts with the items on the paper, namely, text segments, form fields and their labels, by moving the pen over the paper like a pointer and making gestures with the pen. Two types of gestures are used: (1) a unidirectional rotate, left or right, of the pen around its longitudinal axis, and (2) a bidirectional rotate made up of two consecutive rotations in opposite directions. + 

PaperPal's response to the user's pen movements and pen gestures is governed by the two-state interaction automaton shown in Figure 4. + 

The application starts in the "select item and read" state, which is inspired by the smartphone screen reader interface. In this state the user can move the pen like a pointer to simultaneously select an item and hear an audio readout of the item that is associated with the location pointed to by the pen. This interaction is analogous to "touch exploration" on the smartphone screen reader. + 

The unidirectional rotate left (right) selects the previous (next) item on the document and its content is read aloud. This interaction is analogous to "swiping" on the smartphone screen reader. + 

The bidirectional rotate switches between the two states of the application, namely "select item and read" and "navigate to item and write". + 

The "navigate to item and write" state handles two situations: (1) If the item selected is a text segment, it reads aloud the item's text content; (2) if the item selected is a form field, it reads aloud its label and generates navigational instructions to direct the user's pen to the location of the field. No readouts of any intermediate items take place when a user is being navigated to a form field.
Upon reaching the field it reads out the label of the form field once more to refresh the user's memory and directs the user to write in the field, alerting the user when the pen strays out of the field and giving instructions on how to move the pen back into the field and continue writing. In this state the user can do a bidirectional rotate at any time to move to the "select item and read" state, or remain in the current state and continue on to the other form fields or other items via unidirectional rotate gestures. + 

§ 4.3 PAPERPAL IMPLEMENTATION + 

PaperPal uses the phone's camera to observe the user's actions. The application is implemented as an iOS app and uses the OpenCV library [6] for real-time video processing. Specifically, PaperPal: (a) tracks the physical location of the pen tip over the paper, and (b) detects pen gestures, namely, unidirectional and bidirectional rotates. The pen location and gestures determine the audio responses, namely, text readouts, navigation and writing instructions, that are generated in real time. Figure 5 is a high-level workflow of the process. + 

§ 4.3.1 VISUAL MARKERS + 

To enable accurate tracking of the pen and paper, Aruco markers [50] of known size are used for paper and pen tracking. + 

The paper tracker is a credit-card-sized rectangular card ( ${85.60}\mathrm{\;{mm}} \times {53.98}\mathrm{\;{mm}}$ ) wrapped by an Aruco board of 24 markers. It has a narrow diagonal groove that serves as a tangible guide for attaching the paper to the card. The user slides the paper's upper left corner into this groove. + 

The pen tracker is a cube-shaped pen topper with Aruco markers affixed to each face of the cube. It can be easily attached to any regular ballpoint pen and is resilient to hand occlusions.
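Each detected marker corner of the paper tracker's Aruco board gives an image-to-paper point correspondence. As background for how such correspondences can yield a pixel-to-paper mapping, here is a minimal NumPy sketch of the standard Direct Linear Transform (DLT) for homography estimation. The function names are ours, not from the paper; a production system would typically call a library routine such as OpenCV's `findHomography` instead.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT algorithm.
    src_pts, dst_pts: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on H's entries.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (flattened) is the null vector of A: the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1

def apply_homography(H, pt):
    """Map a single 2D point through H, with the homogeneous divide."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])
```

With four or more non-degenerate correspondences from the marker corners, the estimate recovers the true homography up to numerical precision.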
 + 

§ 4.3.2 LOCATING PEN TIP ON PAPER + 

For each image frame containing the pen and paper, two transformations $H$ and $P$ are estimated: (1) $H$ is a homography transformation that maps each image pixel to its corresponding location in the paper's coordinate system. This is estimated based on the paper tracker via the DLT algorithm [34]. (2) $P$ is a projective transformation [34] between any 3D location in the pen tracker's coordinate system and its corresponding 2D image pixel. $P$ is comprised of the intrinsic camera calibration, which is measured once for the camera, and the extrinsic camera calibration, which is estimated based on the pen markers with the EPnP method [42]. + 

For a pen that is touching the paper, PaperPal starts with the physical location of the pen tip, which is at a constant offset w.r.t. the pen tracker coordinate system, and applies the $P$ transformation followed by the $H$ transformation to estimate the pen tip location in the paper's coordinate system. + 

We only estimate the pen tip location when it is close to the paper. To this end, we developed a heuristic based on the observation that in PaperPal, as the pen moves away from the paper, it gets closer to the camera. Specifically, in the image, the observed size of the pen markers should not be more than twice the size of the paper markers. The criterion was chosen by experimenting with various thresholds; the candidate with the lowest average re-projection error [34] was selected. Furthermore, we remove outlier pen tip locations whose distance from the previously observed pen tip is more than ${18}\mathrm{\;{mm}}$ on the paper. This threshold was selected based on the fastest pen movements that could be captured using the pen's visual markers. + 

Figure 6: A: rectilinear path along the paper's coordinate system, B: non-rectilinear path along the paper's coordinate system, and C: the rectilinear path on the rotated paper's coordinate system.
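The two-step mapping in Section 4.3.2 (pen frame to image via $P$, image to paper via $H$) is straightforward to express in code. The NumPy sketch below, with our own hypothetical function names, illustrates that pipeline together with the 18 mm inter-frame outlier check; it is an illustration of the stated math, not PaperPal's actual implementation.

```python
import numpy as np

def pen_tip_on_paper(P, H, tip_pen_frame):
    """Map the fixed pen-tip offset (pen tracker frame) to paper coordinates.
    P: 3x4 projective transform (intrinsics @ [R|t]), pen frame -> image pixels.
    H: 3x3 homography, image pixels -> paper coordinate system.
    tip_pen_frame: constant 3D offset of the physical tip in the pen frame."""
    tip_h = np.append(np.asarray(tip_pen_frame, dtype=float), 1.0)  # homogeneous 3D point
    u, v, w = P @ tip_h                     # project into the image
    pixel = np.array([u / w, v / w, 1.0])   # perspective divide
    x, y, s = H @ pixel                     # image -> paper
    return np.array([x / s, y / s])

def accept_tip(tip, prev_tip, max_jump_mm=18.0):
    """Reject outlier tips that jump more than 18 mm between frames
    (paper coordinates assumed to be in millimetres)."""
    return prev_tip is None or np.linalg.norm(tip - prev_tip) <= max_jump_mm
```

In practice `P` would come from the intrinsic calibration and the per-frame EPnP pose of the pen markers, and `H` from the per-frame DLT on the paper tracker, as the section describes.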
 + 

§ 4.3.3 DETECTING PEN GESTURES + 

Previous work on pen rolling gestures showed that, when writing on paper, unintended pen rotations occur at high speeds and small angles [16]. We used this insight to increase the duration of intended rotation gestures and thus distinguish them from unintended ones. + 

To this end, through experimentation the duration for intended gestures, denoted ${T}_{\text{gesture}}$ , was set to ${600}\mathrm{\;{ms}}$ . Rotation gestures are performed with the pen tip on or close to the paper surface. To detect the rotation gestures we choose a 3D point $R$ w.r.t. the pen coordinate system that is close to the pen tip. We find $R$'s corresponding 2D location $r$ in the paper coordinate system using the same process that was used for estimating the position of the pen tip in Section 4.3.2 above. To detect a rotational movement along the longitudinal axis of the pen, the angle of $r$ relative to the tip of the pen is measured for each frame using simple trigonometry. We record the direction of the rotational angle (left/right) between each pair of consecutive frames. If, in the time window of ${T}_{\text{gesture}}$ , the majority (over 90%) of pen rotations are in the same direction (left/right), the rotation gesture in the corresponding direction is detected. For detecting bidirectional rotates, first note that they involve two quick rotations in opposite directions. A bidirectional rotate is detected if, within the sliding time window of ${T}_{\text{gesture}}$ , the majority (over 90%) of rotations in one half are in one direction and those in the other half are in the opposite direction. + 

§ 4.3.4 GENERATING AUDIO RESPONSES + 

PaperPal responds to the user's pen movements by generating four kinds of audio responses: coordination alerts, audio readouts, writing instructions, and navigation instructions. + 

Coordination alerts: Alerts when the paper and/or the pen move out of the camera's field of view.
To avoid needless alerts for momentary movements out of the field of view, we alert only when the pen or paper has been out of the field of view for longer than 2 seconds. Readouts: The textual content of an item selected by the user is read out to the user in audio. + 

Table 1: Participants' demographics and habits regarding braille, writing, and smartphone applications (SG stands for signature guide). + 

| ID | Pilot study | Age (Sex) | Diagnosis (Light perception) | Braille usage (Level) | Braille scenarios | Writing usage (Level) | Writing scenarios | SG Own (Carry) | Smartphone (experience) | Smartphone apps for papers |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| P1 | yes | 34 (F) | retrograde optic atrophy (yes) | daily (advanced) | papers at work, at the library | daily (beginner) | doctor's office, legal forms, banks | yes (always) | iPhone 10S (advanced) | SeeingAI, KNFB reader, Voice Dream Reader, Voice Dream Writer |
| P2 | yes | 63 (F) | acute congenital glaucoma (no) | daily (advanced) | taking notes for myself | rarely (beginner) | leaving notes for sighted peers | yes (always) | iPhone 8 (advanced) | Be My Eyes |
| P3 | no | 32 (F) | medical malpractice (no) | daily (beginner) | elevators, remote control | weekly (advanced) | doctor's office, checks | no | iPhone 10 (advanced) | None |
| P4 | no | 55 (M) | retinal detachment (no) | monthly (advanced) | elevators, mails | weekly (beginner) | timesheet signature, checks, legal documents | no | iPhone (advanced) | KNFB reader |
| P5 | yes | 54 (M) | glaucoma (no) | - | - | weekly (beginner) | shopping receipts, credit card bills | no | iPhone (beginner) | SeeingAI, TapTapSee |
| P6 | yes | 46 (F) | retinitis pigmentosa (yes) | never (beginner) | elevators, mails | monthly (beginner) | legal documents, doctor's office | no | iPhone 8 (advanced) | SeeingAI |
| P7 | no | 38 (M) | optic atrophy, retinitis pigmentosa (yes) | monthly (advanced) | reading documents | daily (advanced) | documents at work, taking notes for sighted peers | yes (often) | iPhone 11 Pro (advanced) | KNFB reader, SeeingAI, Aira, Voice Dream Scanner |
| P8 | no | 61 (M) | retinitis pigmentosa (yes) | daily (advanced) | elevator, calendar | daily (advanced) | timesheet signature, checks, legal documents | yes (always) | flip phone (beginner) | None |

 + 

Writing instructions: While writing is in progress, instructions to maintain the pen within the field are given whenever the estimated pen tip falls out of the rectangular boundary of the field. For example, if the user's pen position has strayed above (below) the field, the user is guided back to the field with a "Move down (up)" instruction. + 

Navigation instructions: Navigational instructions consist of four basic directives, namely, up, down, left, and right. With these four directives the user is guided to any field on the paper. In [28] a simple navigation algorithm was used to guide the user along a rectilinear path that corresponded to the Manhattan distance between the pen and the field, see figure 6 A. Recall that our pilot study revealed that the user's navigational movements, in response to the audio instructions, can deviate from the intended axes when the paper is placed on a clipboard. Figure 6 B demonstrates how the user's pen tip trajectory can deviate from the expected path along the paper's coordinate system with simple rectilinear navigational instructions. + 

To address this problem we estimate the deviation angle of the pen tip's trajectory w.r.t. the paper's coordinate system. To compensate for this deviation, we rotate the paper's coordinate system by the same angle but in the opposite direction, so that the pen tip's trajectory is aligned with the transformed axes - see figure 6 C. The navigation directives are generated w.r.t. the transformed axes. Observe that the pen tip trajectory then follows a rectilinear path w.r.t. the transformed axes.
To estimate the deviation angle, we use the pen tip's estimated location $t$ on the paper and find the angle between $t$ and the intended axis (the horizontal axis when the navigation instruction is left or right, and the vertical axis when it is up or down). To avoid noise and jitter in the transformed axes, the deviation angle is averaged over a sliding window of one second. + 

§ 5 EVALUATION + 

We conducted an IRB-approved user study of PaperPal to evaluate its effectiveness as a form-filling assistant for blind people. To this end, the study was designed to answer the following questions: (a) How accurately can users fill out forms in terms of writing at the correct location? (b) How long does it take to fill out forms? (c) What is the overall user experience of using PaperPal to fill out paper forms of different sizes, layouts, and texts? + 

§ 5.1 PARTICIPANTS + 

Ten fully blind participants were recruited. However, two participants could not attend, and the study was conducted with the remaining eight participants, whose ages ranged from 32 to 63 (average $= {47.88}$ , std $= {12.16}$ ; 4 females and 4 males). Note that 4 out of the 8 participants were also part of the WoZ pilot study discussed in Section 3. Table 1 shows the demographic data of the participants. The participants were compensated $\$ {50}$ per hour. All the participants were right-handed, and none had any motor impairments that impeded their full participation in the study. All the participants (except P5) were familiar with braille and all of them affirmed that they knew how to write on paper. All participants stated that in real life they always asked a sighted peer to fill out forms for them, except for affixing their signatures. For that, they were led by the sighted peer to the signature field where they would sign by themselves. + 

§ 5.2 APPARATUS + 

The PaperPal application ran on an iPhone 8+.
The 3D-printed holder, lanyard, paper tracker, pen tracker, a regular ballpoint pen, and a clipboard were provided to the user, see figure 1. Finally, each participant was given 4 paper forms to fill out. + 

Forms: The forms were selected to have different properties and to reflect realistic scenarios. Specifically, the forms were: + 

 * (F1): A regular-size check that consists of six fields, namely pay to the order of, date, $\$$ , dollars, memo, and signature. + 

 * (F2): A restaurant receipt that consists of three fields, namely tip, total, and signature. + 

 * (F3): A template for a lease agreement that consists of the following eight fields: landlord's first name, landlord's last name, tenant's first name, tenant's last name, landlord's signature, date, tenant's signature, and date. + 

 * (F4): An informed consent form that requires the participant to fill out four fields, which are full name, date of birth, participant's signature, and date. + 

Two of the forms (F1 and F2) are quite similar to the ones used in the evaluation of WiYG [28]. Two additional forms were selected to evaluate more complex forms in terms of the number of fields (F3) and text items (F4) - see Figure 7. These forms have different paper sizes, orientations, and form layouts. Specifically, the F1 and F2 forms are smaller than the standard letter-size pages used in F3 and F4. The fields in F2 are vertically aligned and placed below one another, the F3 fields are placed horizontally in a table-like layout, and F1 and F4 have more complex layouts. + 

Figure 7: The four forms used in the user study. Note that the scale of the images does not represent their relative size (refer to the dimensions in the figure). The participants' handwriting is annotated in blue (brown) as correct (incorrect) by human evaluators. + 

§ 5.3 DESIGN + 

The study was designed as a repeated measures within-subjects study.
Each participant was required to fill out each of the 4 forms (4 tasks) with PaperPal in a counterbalanced order using a Latin square [18]. The task completion time was the elapsed duration between the moment the pen was detected over the paper for the first time and the moment the user finished writing on the last form field. Accuracy was measured as the percentage of overlap between the ground-truth annotated rectangular region of a given form field (a priori) and the rectangle enclosing the participant's written text for that same field - see figure 7. + 

§ 5.4 PROCEDURE + 

To start with, we draw attention to the circumstances surrounding the study. It was conducted after the gradual re-opening of businesses shut down due to the COVID-19 pandemic. Consequently, the study procedure was adapted to follow CDC-recommended safety measures. Specifically, both the participant and the experimenter wore face masks and kept the recommended social distance from each other. Therefore, the experimenter relied on verbal communications instead of physical demonstrations to conduct this study. + 

Each session began with a semi-structured interview to gather demographic data, reading/writing habits, and prior experiences with assistive smartphone apps ( $\approx {20}$ minutes) - see table 1. + 

Following this step, the participant was instructed on how to set up the PaperPal apparatus. Towards that, the participant was asked to pick up each piece of the apparatus, and the experimenter provided a verbal description of the pieces. The participant then began assembling the pieces, with the experimenter giving step-by-step assembly instructions, until the participant was able to attach the paper tracker to the paper, the pen tracker to the pen, and the L-shaped fork to the phone case, clip the paper to the clipboard, and wear the phone around the neck with the lanyard. After that, the experimenter described the user interface.
The participant was asked to practice reading and writing with PaperPal with a set of test forms that were different from those used in the study. During the practice, the experimenter observed the progress of the participant and intervened with instructions and explanations as needed. The entire process of assembling and practicing use of the application took about an hour. + +The participant was next asked to fill out the four forms F1, F2, F3 and F4, followed by a Single Ease Question. A maximum of 10 minutes per form was allocated. An open-ended discussion with the experimenter took place upon completion of the tasks. The entire study session per participant lasted 2.5 hours, with the experimenter making notes throughout the video-recorded session. + +§ 5.5 RESULTS: TASK COMPLETION TIME + +The task completion time is indicative of the efficiency of PaperPal as a form-filling assistant for blind people. On average, the total time spent to fill out forms F1 to F4 was 169.38, 73.91, 229.824, and 164.84 seconds respectively. The task completion time is divided into: (a) Navigation time, which is the time taken to navigate to the target field; (b) Writing time, which is the time taken by the participant to fill in the field; (c) Coordination time, which is the time taken by the participant to bring the paper back into the camera's field of view. Figure 8 shows both the navigation time and writing time for each field. + +For the coordination time, the repeated measures ANOVA showed a statistically significant difference in the time spent in coordination between the 4 forms with $F = {7.00}$ and $p = {0.002}$ .
The pairwise analysis with post-hoc Tukey test showed that F2 coordination time is significantly less than F3 and F4 (with $p < {0.01}$ for both pairs). This can be explained as follows: working with larger papers such as F3 and F4 increases the likelihood for the paper or the pen to stray out of the camera's field of view, which adds to the coordination time compared to a small-sized form like F2 where straying out is less likely. + +§ 5.6 RESULTS: ASSEMBLY TIME + +The holder assembly time includes the time spent to attach the L-shaped half fork to the phone case and to wear the phone around the neck. All participants were able to assemble the holder with an average time of 16.55 seconds (std $= {4.33}$ seconds). In addition, the average time to attach the paper tracker card to the top-left corner of an A4 page was 18.75 seconds (std $= {6.45}$ seconds). One difficulty that arose was that most participants were wearing protective gloves, which made it difficult for them to attach the paper to the paper tracker card. + +§ 5.7 RESULTS: FORM FILLING ACCURACY + +Overlap Percentage: Recall that the overlap percentage is defined as the percentage of the rectangular bounding box enclosing the participant's writing that falls inside the ground truth rectangular region of the field, see Figure 7. The average percentage of field overlap for forms F1 to F4 was ${61.84}\% ,{66.02}\% ,{87.69}\%$ , and ${64.23}\%$ respectively. + +Out of all the form fields in this study (8 participants × 21 fields = 168), participants attempted to fill out 156 of them. Out of these 156 fields, 117 (75%) had an overlap region of 50% or higher. Figure 8 (Field Overlap) shows the overlap percentages. + +Figure 8: Left: navigation time and writing time per field. Middle: accuracy in terms of the field overlap percentage.
Right: human assessment of the filled-out fields. + +Human assessment: We asked three human evaluators to assess whether each of the form fields was correctly filled out by the participants. The final verdict was rendered via majority voting. The inter-annotator agreement was high (Fleiss' $\kappa = {0.77}$ ). Figure 8 (Human Assessment) shows the percentage of fields that were deemed as correct (81.55%), incorrect (10.71%), or missed (7.74%). + +The human assessment of form-filling accuracy for forms F1 to F4 was ${88.09}\% ,{69.56}\% ,{96.77}\%$ , and 85.71% respectively. Note that human assessment shows higher accuracy compared to the average overlap. Akin to [28], this phenomenon is due to the fact that even when the written content is not perfectly inside the form field, human annotators still counted it as acceptable. + +§ 5.8 RESULTS: PAPERPAL VS. WIYG + +The standard check (F1) and receipt (F2) forms were similar to the forms used in WiYG's user study, which also used a check and a receipt of similar size and layout and an identical set of fields. Although the differences in the setup and study participants preclude a rigorous comparison of the performance between PaperPal and WiYG, we can still get a sense of the differences in their performance via the informal comparison shown in Table 2. This comparison suggests that in spite of the complexities of writing on a non-stationary clipboard with a wearable, users could fill out forms in a shorter time without compromising accuracy with PaperPal. + +§ 5.9 RESULTS: SUBJECTIVE FEEDBACK + +We administered a Single Ease Question to each participant to rate the difficulty of assembling the holder and completing each form, on a scale of 1 to 7 with 1 being very difficult and 7 being very easy. The median rating for holder assembly was 7, which suggests that the assembly process was viewed as being easy. The median ratings for the forms were F1: 6, F2: 5, F3: 4, and F4: 3.5.
In the open-ended discussion, participants mentioned that filling out forms that had long text content (such as F4) or had more fields was more difficult. + +All participants liked the fact that PaperPal lets them work with paper documents independently - quoting P5: "I like the ability to fill out my own forms and checks." All of them mentioned that PaperPal fills an unmet need for preserving privacy when filling out forms and affixing signatures. They all appreciated the integrated reading and writing feature in PaperPal as they got to hear what was in the form that they were filling out prior to affixing their signatures. + +§ 5.10 DISCUSSION + +In the user study, participants missed filling 12 out of 168 fields. We observed that most of the missing cases were associated with F3's two date fields, which were next to each other and had the same label, causing ambiguity. The last two fields in F4 were also missed by some participants because the long text gave them the false impression that there were no more fields left in the form. One way to address this in future work is to notify the user a priori of the total number of form fields. + +Table 2: Comparison of PaperPal to WiYG (accuracy: overlap %)

| System | Standard Check (F1): avg. time (s) | Standard Check (F1): avg. accuracy (%) | Receipt (F2): avg. time (s) | Receipt (F2): avg. accuracy (%) |
| --- | --- | --- | --- | --- |
| WiYG [28] | 249.75 | 64.85 | 91.25 | 63.90 |
| PaperPal | 159.38 | 61.84 | 73.91 | 66.02 |

All participants mentioned that doing several consecutive rotate gestures required re-adjustments to their grip on the pen. Gestures with subtle finger movements, such as finger flicks and taps on the pen, are possible alternatives that can address this problem. This will require the use of computer vision recognition algorithms to detect subtle finger movements and is a topic for future research.
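As a concrete reference for the overlap-percentage accuracy used in Section 5.7 and Table 2, the metric can be sketched as follows for axis-aligned rectangles; this is our own illustration (function name and box layout are hypothetical), not PaperPal's implementation:

```python
def overlap_percentage(gt_box, writing_box):
    """Percentage of the box enclosing the participant's writing that
    falls inside the ground-truth field region (Section 5.7).
    Boxes are axis-aligned (x0, y0, x1, y1) rectangles."""
    ix0 = max(gt_box[0], writing_box[0])
    iy0 = max(gt_box[1], writing_box[1])
    ix1 = min(gt_box[2], writing_box[2])
    iy1 = min(gt_box[3], writing_box[3])
    # Intersection area is zero when the rectangles do not overlap.
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = (writing_box[2] - writing_box[0]) * (writing_box[3] - writing_box[1])
    return 100.0 * inter / area if area > 0 else 0.0
```

For example, writing that extends half its width past the field boundary scores 50%, while writing fully inside the field scores 100%.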
+ +A unique aspect of PaperPal is that users can simply work with the paper and pen without having to hold any other objects, like the phone in reading apps $\left\lbrack {2,8}\right\rbrack$ or a signature guide while filling out paper forms as in WiYG. Finally, assembly of the 3D-printed attachment, attaching the trackers, and wearing the phone were all done independently by the study participants, affirming that the design of the apparatus associated with PaperPal was highly accessible for blind users. + +The results of the study showed that blind participants were able to fill out forms in a few minutes (on average, ranging from about 1 min and 14 sec to 3 min and 50 sec) with high accuracy, measured in terms of the average overlap percentage, which was more than ${60}\%$ for all the forms. In addition, accuracy of the filled-out form fields as judged by humans was also as high as 96.77%. + +§ 5.11 FUTURE WORK + +Use of Markers: PaperPal's accuracy depends on accurate tracking of the pen and paper, which is why tracking is done with visual markers attached to these objects. Eliminating these markers is a challenging open computer vision research problem. + +Document Annotation: While the focus of this paper has been the accessible HCI interface for filling paper forms with a wearable, we envision a front-end to PaperPal consisting of an app like KNFB Reader or Voice Dream Scanner to acquire the image of the form document, which subsequently will be dispatched to the augmented AWS Textract services for automatic annotation of the form elements. We conducted preliminary experiments with this service. To this end, we took pictures of the 4 forms used in the study (Section 5) with the Voice Dream Scanner app. These images were rectified using the "image to paper" transformation (Section 4.3) and the rectified images were processed by AWS Textract augmented with a human-in-the-loop workflow.
Out of the 21 fields in the 4 forms, 17 fields and their labels were detected correctly by Textract and 2 were erroneously recognized and were marked as such through intervention via the human-in-the-loop workflow. Integration of this process into PaperPal and its end-to-end evaluation is a topic of future work. + +§ 6 CONCLUSION + +PaperPal is a wearable reading and form-filling assistant for blind people. Wearability is achieved by transforming a smartphone, specifically an iPhone 8+, into a wearable around the chest with a 3D-printed phone holder that can adjust the phone's viewing angle. PaperPal operates both on stationary flat tables and on non-stationary portable clipboards. A preliminary study with blind participants demonstrated the feasibility and promise of PaperPal: blind users could fill out form fields at the correct locations with an accuracy of up to ${96.77}\%$ . PaperPal has the potential to enhance their independence at home, at work, at school, and on the go. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/m4WytW0txaS/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/m4WytW0txaS/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..abaeff41af75af74d285e05e89e5b41cc8fa99a6 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/m4WytW0txaS/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,397 @@ +# Fast Monte Carlo Rendering via Multi-Resolution Sampling + +Category: Research + +## Abstract + +Monte Carlo rendering algorithms are widely used to produce photo-realistic computer graphics images.
However, these algorithms need to sample a substantial number of rays per pixel to enable proper global illumination and thus require an immense amount of computation. In this paper, we present a hybrid rendering method to speed up Monte Carlo rendering algorithms. Our method first generates two versions of a rendering: one at a low resolution with a high sample rate (LRHS) and the other at a high resolution with a low sample rate (HRLS). We then develop a deep convolutional neural network to fuse these two renderings into a high-quality image as if it were rendered at a high resolution with a high sample rate. Specifically, we formulate this fusion task as a super resolution problem that generates a high resolution rendering from a low resolution input (LRHS), assisted by the HRLS rendering. The HRLS rendering provides critical high frequency details which are difficult for any super resolution method to recover from the LRHS. Our experiments show that our hybrid rendering algorithm is significantly faster than the state-of-the-art Monte Carlo denoising methods while rendering high-quality images, when tested on both our own BCR dataset and the Gharbi dataset [14]. + +Index Terms: Computing methodologies-Computer graphics-Ray tracing + +## 1 INTRODUCTION + +Physically-based image synthesis has attracted considerable attention due to its wide applications in visual effects, video games, design visualization, and simulation [26]. Among them, ray tracing methods have achieved remarkable success as the most practical realistic image synthesis algorithms. For each pixel, they cast numerous rays that bounce through the environment to collect photons from light sources and integrate them to compute the color of that pixel. In this way, ray tracing methods are able to generate images with a very high degree of visual realism.
However, obtaining visually satisfactory renderings with ray tracing algorithms often requires casting a large number of rays and thus takes a vast amount of computation. The extensive computational and memory requirements of ray tracing methods pose a challenge, especially when running these rendering algorithms on resource-constrained platforms, and impede applications that require high resolutions and refresh rates. + +To speed up ray tracing, Monte Carlo rendering algorithms are used to reduce the number of ray samples per pixel (spp) that a ray tracing method needs to cast [10]. For instance, adaptive reconstruction methods control sampling densities according to the reconstruction error estimated from existing ray samples [58]. However, when the ray sample rate is not sufficiently high, the rendering results from a Monte Carlo algorithm are often noisy. Therefore, the ray tracing results are usually post-processed to reduce the noise using algorithms like bilateral filtering and guided image filtering $\left\lbrack {{28},{38},{43},{45},{49},{52},{57}}\right\rbrack$ . Recently, deep learning-based denoising approaches have been developed to reduce the noise from Monte Carlo rendering algorithms $\left\lbrack {2,9,{23},{29}}\right\rbrack$ . These methods achieve high-quality results with impressive time reduction, and some of them have been incorporated into commercial tools, such as VRay Renderer, Corona Renderer, and RenderMan, and open-source renderers like Blender. However, real-time ray tracing is still a challenging problem, especially on devices with limited computing resources. + +Our idea to speed up ray tracing is to reduce the number of pixels for which we need to estimate color values. For instance, upsampling by $2 \times 2$ can eliminate ${75}\%$ of the pixels for which ray tracing must estimate color. There are two main challenges in super-resolving a Monte Carlo rendering.
First, it is still a fundamentally ill-posed problem to recover the high-frequency visual details that are missing from the low-resolution input. Second, a Monte Carlo rendering is subject to sampling noise, especially when it is produced at a low spp rate. Upsampling a noisy image will often amplify the noise as well. To address these challenges, we propose to generate two versions of a rendering: a low-resolution rendering at a reasonably high spp rate (LRHS) and a high-resolution rendering at a lower spp rate (HRLS). LRHS is less noisy, while the noisier HRLS can potentially provide high-frequency visual details that are inherently difficult to recover from the low resolution image. + +We accordingly develop a hybrid rendering method dedicated to images rendered by a Monte Carlo rendering algorithm. Our neural network takes both LRHS and HRLS renderings as input. We use a de-shuffle layer to downsample the HRLS rendering to make it the same size as LRHS and to reduce the computational cost. Then we concatenate the features from both LRHS and HRLS and feed them to the rest of the network to generate the high-quality high resolution rendering. Our experiments show that given the hybrid input, our method outperforms the state-of-the-art Monte Carlo rendering algorithms significantly. + +As there is no large-scale ray-tracing dataset available to train our network, we collected the first large-scale Blender Cycles Ray-tracing (BCR) dataset, which contains 2449 high-quality images rendered from 1463 models. The dataset covers various factors that affect the Monte Carlo noise distribution, such as depth of field, motion blur, and reflections. We render the images at a range of spp rates, including $1 - 8,{12},{16},{32},{64},{128},{250},{1000}$ , and 4000 spp. All the images are rendered at the resolution of ${1080}\mathrm{p}$ .
Each image contains not only the final rendered result but also the intermediate render layers, including albedo, normal, diffuse, glossy, and so on. + +This paper contributes to the research on photo-realistic image synthesis by integrating Monte Carlo rendering and image super resolution for efficient high-quality image rendering. First, we explore super resolution to reduce the number of pixels that need ray tracing. Second, we use multi-resolution sampling to both reduce noise and create visual details. Third, we develop the first large ray-tracing image dataset, which will be made publicly available. + +## 2 RELATED WORK + +Monte Carlo rendering is an important technology for photo-realistic rendering. It aims to reduce the number of rays that a ray tracing algorithm needs to cast and integrate while synthesizing a high quality image $\left\lbrack {{10},{22}}\right\rbrack$ . Conventional Monte Carlo rendering algorithms investigate various ways to adaptively distribute ray samples [8, 13, 20, 33, 39-42, 47, 48]. When only a small number of rays are cast, the rendered images are often noisy. They are typically filtered using various algorithms $\left\lbrack {{11},{21},{30},{31},{37},{43} - {45},{50}}\right\rbrack$ . Due to space limits, we refer readers to a recent survey on Monte Carlo rendering [58]. + +Our research is more related to the recent deep learning approaches to Monte Carlo rendering denoising. Kalantari et al. trained a multilayer perceptron neural network to learn the parameters of filters before applying these filters to the noisy images [23]. Bako et al. extended this method by employing filters with spatially + +![01963eaa-2d32-7628-bac9-658d51cf5e66_1_161_178_1477_427_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_1_161_178_1477_427_0.jpg) + +Figure 1: This paper presents a hybrid rendering method to speed up Monte Carlo rendering.
Our method takes a low resolution with a high sample rate rendering (LRHS) and a high resolution with a low sample rate rendering (HRLS) as inputs, and produces a high-resolution, high-quality result. + +adaptive kernels to denoise Monte Carlo renderings [2]. They developed a convolutional neural network method to estimate spatially adaptive filter kernels. Chaitanya et al. developed an encoder-decoder network with recurrent connections to denoise a Monte Carlo image sequence [9]. Recently, Kuznetsov et al. [29] developed a deep convolutional neural network approach that combines adaptive sampling and image denoising to optimize the rendering performance. Different from the above methods, Gharbi et al. argued that splatting samples to relevant pixels is more effective for denoising than gathering relevant samples for each pixel. Accordingly, they developed a novel kernel-splatting architecture that estimates the splatting kernel for each sample, which was shown particularly effective when only a small number of samples were used [14]. Compared to these methods, our method improves the speed of Monte Carlo rendering by reducing the number of pixels that we need to cast rays for. + +Our work also builds upon the success of deep image super resolution methods $\left\lbrack {1,{12},{15},{19},{27},{34} - {36},{51},{53},{56}}\right\rbrack$ . Dong et al. developed the first deep learning approach to image super resolution [12]. They designed a three-layer fully convolutional neural network and showed that a neural network could be trained end to end for super resolution. Since then, a variety of neural network architectures, such as the residual network [16], densely connected network [18], and squeeze-and-excitation network [17], have been introduced to the task of image super resolution. For instance, Kim et al. developed a deep neural network that employs residual architectures and obtained promising results [27]. Lim et al.
further improved super resolution results by removing batch norm layers and increasing the depth of networks [35]. Zhang et al. developed a residual densely connected network that is able to explore intermediate features via local dense connections for better image super resolution [55]. Zhang et al. recently reported that a channel-wise attention network, which learns attention as guidance to model channel-wise features, can more effectively super-resolve a low resolution image [54]. While these image super resolution methods achieved promising results, recovering visual details that do not exist in the input image is necessarily an ill-posed problem. Our method addresses this fundamentally challenging problem by leveraging a high-resolution image rendered at a low ray sample rate. Such an auxiliary rendering can be produced quickly and yet provides visual details that do not exist in the low resolution input rendered at a high sample rate. + +## 3 THE BLENDER CYCLES RAY-TRACING DATASET + +To the best of our knowledge, there is no large scale ray-tracing dataset that is publicly available for training a deep neural network. Therefore, we develop the Blender Cycles Ray-tracing (BCR) dataset, which consists of a large number of high quality scenes together with the ray-tracing images and the intermediate rendering layers. We will share BCR with the research community. + +### 3.1 Source Scenes + +Blender's Cycles is a popular ray tracing engine that is capable of high-quality production rendering. It has an open and active community where thousands of artists share their work. Using the Blender community assets, we collected over 8000 scenes under Creative Commons licenses ${}^{1,2,3}$ , which allow us to share our dataset with the research community. We rendered these scenes at 4000 spp and manually checked the rendered images and all the rendering layers.
We eliminated scenes with missing materials, lack of high frequency information, or noticeable rendering noise even when rendered at 4000 spp. This culling process reduced the total number of source scenes to 1463. These remaining scenes produced 2449 images by rendering from 1 to 10 viewpoints per scene. We split the dataset into 3 subsets: 2126 images from 1283 scenes as the training set, 193 images from 76 scenes as the validation set, and 130 images from 104 scenes as the test set. There are no overlapping scenes among them. As shown in Figure 2, our dataset covers various optical phenomena, such as motion blur, depth of field, and complex light transport effects. It covers a variety of scene contents, including indoor scenes, buildings, landscapes, fruits, plants, vehicles, animals, glass, and so on. + +### 3.2 Rendering Settings + +To generate the high-quality "ground-truth" renderings, we rendered each scene at 4000 spp. As described previously, we noticed that the rendered images for some scenes still contained noticeable noise even when rendered at 4000 spp, and we removed them through manual inspection. On average, it took around 20 minutes to render an image on an Nvidia Titan X Pascal GPU. We set the rendering resolution to ${1920} \times {1080}$ or ${1080} \times {1080}$ to cover most of the scene content. For each image, we provide both the final rendered image and the render layers, which are essential for Monte Carlo rendering $\left\lbrack {2,3,9,{23},{25},{29},{32},{33}}\right\rbrack$ . In total, each image has 33 rendering layers, including albedo, normals, depth, diffuse color, diffuse direct, diffuse indirect, glossy color, and so on.
All images in the BCR dataset can be produced using the render layers as follows [6]: + +$$ +{I}_{HR} = {I}_{\text{Diff }} + {I}_{\text{Gloss }} + {I}_{\text{Sub }} + {I}_{\text{Trans }} + {I}_{\text{Env }} + {I}_{\text{Emit }}, \tag{1} +$$ + +where the diffuse, gloss, subsurface, and trans layers can be generated from their color, direct light, and indirect light layers: + +$$ +{I}_{\text{Diff }} = {I}_{\text{DiffCol }} * \left( {{I}_{\text{DiffDir }} + {I}_{\text{DiffInd }}}\right) , +$$ + +$$ +{I}_{\text{Gloss }} = {I}_{\text{GlossCol }} * \left( {{I}_{\text{GlossDir }} + {I}_{\text{GlossInd }}}\right) , \tag{2} +$$ + +$$ +{I}_{\text{Sub }} = {I}_{\text{SubCol }} * \left( {{I}_{\text{SubDir }} + {I}_{\text{SubInd }}}\right) , +$$ + +$$ +{I}_{\text{Trans }} = {I}_{\text{TransCol }} * \left( {{I}_{\text{TransDir }} + {I}_{\text{TransInd }}}\right) . +$$ + +--- + +${}^{1}$ http://www.blendswap.com + +${}^{2}$ https://blenderartists.org + +${}^{3}$ https://gumroad.com/senad + +--- + +![01963eaa-2d32-7628-bac9-658d51cf5e66_2_155_152_1491_433_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_2_155_152_1491_433_0.jpg) + +Figure 2: Examples from our BCR dataset. + +![01963eaa-2d32-7628-bac9-658d51cf5e66_2_156_667_709_292_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_2_156_667_709_292_0.jpg) + +Figure 3: Pixel value distribution of our BCR dataset. The rendered images use the scene linear color space and the pixel value is represented in float32. We use a logarithmic scale for the $y$ axis. While most pixels are in the range of $\left\lbrack {0,1}\right\rbrack$ , the distribution has a long tail. + +Besides rendering 4000-spp images as ground truth, we rendered each scene at $1 - 8,{12},{16},{32},{64},{128},{250}$ , and 1000 spp as input for Monte Carlo rendering enhancement algorithms, including ours. The rendered images and the auxiliary results in the scene were saved in the scene linear color space, which closely corresponds to natural colors [5].
These images were rendered with a high dynamic range. The pixel values were represented in Float32. As shown in Figure 3, most of the pixel values were in the range of $\left\lbrack {0,1}\right\rbrack$ . However, the pixel value distribution had a long tail. We also noticed that many of the very large values come from firefly rendering artifacts. Therefore, we removed these outliers by clipping at the value 100. An image in the scene linear space can be converted to sRGB space for visualization in this paper as follows: + +$$ +s = \left\{ \begin{array}{ll} 0 & \text{ if }l \leq 0, \\ {12.92} \times l & \text{ if }0 < l \leq {0.0031308}, \\ {1.055} \times {l}^{\frac{1}{2.4}} - {0.055} & \text{ if }{0.0031308} < l < 1, \\ 1 & \text{ if }l \geq 1, \end{array}\right. \tag{3} +$$ + +where $l$ and $s$ indicate the pixel value in the scene linear color space and sRGB space, respectively [4]. + +### 3.3 Low Resolution Image Generation + +A straightforward way to generate low resolution images is to change the output resolution in Cycles. However, directly rendering a low resolution image does not always work [7]. For example, some scenes are modelled using subdivision technology, and changing the rendering resolution will disrupt the inherent relationship between the material and geometry settings in the scene files and thus cause a mismatch between images rendered at different resolutions. Therefore, we generate low resolution images by downsampling the corresponding high resolution rendered images via nearest neighbour degradation. We did not use bilinear or bicubic sampling as the nearest neighbour degradation more accurately simulates a real-world rendering engine. That is, rays for low resolution renderings are sampled on a sparser grid compared with high resolution ones. + +### 3.4 Monte Carlo Rendering Dataset Comparison
| Dataset | Images | Scenes | SPP | Layers |
| --- | --- | --- | --- | --- |
| Kalantari [24] | 500 | 20 | 4, 8, 16, 32, 64, 32000 | 5 |
| KPCN [2] | 600 | - | 32, 128, 1024 | 6 |
| Chaitanya [9] | - | 3 | 1, 4, 8, 16, 32, 256, 2000 | 3 |
| Kuznetsov [29] | 700 | 50 | 1, 2, 3, 4, 1024 | 4 |
| BCR dataset | 2449 | 1463 | 1-8, 12, 16, 32, 64, 128, 250, 1000, 4000 | 33 |
+ +Table 1: Monte Carlo rendering dataset comparison. + +We compare our dataset with those used in recent deep learning-based Monte Carlo rendering denoising algorithms, including $\left\lbrack {2,9,{23},{29}}\right\rbrack$ . As reported in Table 1, our dataset has over $3 \times$ the number of images and over ${25} \times$ the number of scenes of the other datasets. Moreover, all these existing datasets are private, while we will make our dataset public. + +## 4 METHOD + +Our method takes a low-resolution-high-spp image ${I}_{LRHS}$ and its corresponding high-resolution-low-spp image ${I}_{HRLS}$ as input and aims to estimate a corresponding HR image ${I}_{SR}$ . ${I}_{LRHS}$ contains the RGB channels, while ${I}_{HRLS}$ is composed of the RGB channels and extra layers, including the Albedo, Normal, Diffuse, Specular, and Variance layers, as these extra layers can provide high-frequency visual details. + +As shown in Figure 4, we design a two-encoder-one-decoder network to estimate the HR image. Given ${I}_{LRHS}$ and ${I}_{HRLS}$ , our network first extracts the features ${F}_{LRHS}$ and ${F}_{HRLS}$ , respectively. We leverage a downscale module with de-shuffle layers [46] instead of pooling layers to downscale the feature maps, as de-shuffle layers keep the high-frequency information. Compared with upscaling LRHS features, downsampling HRLS features to the same size as ${F}_{LRHS}$ reduces the computational complexity of the network significantly. It also enables the features to fuse in earlier layers of the network. We obtain the fused feature ${F}_{0}$ by combining ${F}_{HRLS}$ with ${F}_{LRHS}$ through a fusion module and feed it to a sequence of residual dense groups (RDG) $\left\lbrack {{54},{55}}\right\rbrack$ . We then combine the feature ${F}_{G}$ from the RDGs with ${F}_{LRHS}$ by element-wise addition. Finally, we upscale the resulting dense feature ${F}_{DF}$ and predict the final $\mathrm{{HR}}$ image ${I}_{SR}$ through a convolutional layer.
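The de-shuffle downscaling and shuffle upscaling in this pipeline are the standard space-to-depth and depth-to-space rearrangements; the following is a minimal NumPy sketch for a single (C, H, W) feature map (helper names are ours, and the array layout is an assumption), showing that no information is lost because the two operations are exact inverses:

```python
import numpy as np

def deshuffle(feat, alpha=2):
    """Space-to-depth: fold each alpha x alpha spatial block into the
    channel dimension, shrinking H and W by alpha without dropping values."""
    c, h, w = feat.shape
    f = feat.reshape(c, h // alpha, alpha, w // alpha, alpha)
    return f.transpose(0, 2, 4, 1, 3).reshape(c * alpha**2, h // alpha, w // alpha)

def shuffle(feat, alpha=2):
    """Depth-to-space: the inverse rearrangement, as in the upscale module."""
    c, h, w = feat.shape
    f = feat.reshape(c // alpha**2, alpha, alpha, h, w)
    return f.transpose(0, 3, 1, 4, 2).reshape(c // alpha**2, h * alpha, w * alpha)
```

Stacking $D$ de-shuffle layers with $\alpha = 2$ downscales a feature map by ${2}^{D}$ while multiplying its channel count by ${4}^{D}$ , which is how the HRLS branch can be brought to the LRHS resolution losslessly.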
Below we describe the network in detail. + +![01963eaa-2d32-7628-bac9-658d51cf5e66_3_156_152_1489_407_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_3_156_152_1489_407_0.jpg) + +Figure 4: The architecture of our network. Our network takes a low-resolution-high-spp rendering (LRHS) and its corresponding high-resolution-low-spp rendering (HRLS) as input and predicts the final high-resolution-high-quality image. + +![01963eaa-2d32-7628-bac9-658d51cf5e66_3_172_638_673_233_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_3_172_638_673_233_0.jpg) + +Figure 5: Deshuffle layer for downscaling feature maps. + +LRHS shallow feature ${F}_{LRHS}$ . Following [35,54,55], we adopt a convolutional layer to get the shallow feature ${F}_{LRHS}$ + +$$ +{F}_{LRHS} = {H}_{lrhs}\left( {I}_{LRHS}\right) , \tag{4} +$$ + +where $H\left( \cdot \right)$ indicates the convolution operation. + +HRLS shallow feature ${F}_{HRLS}$ . We first extract the shallow feature from ${I}_{HRLS}$ with a convolutional layer, + +$$ +{F}_{HRLS}^{0} = {H}_{\text{hrls }}\left( {I}_{HRLS}\right) . \tag{5} +$$ + +Inspired by ESPCN [46], we design a deshuffle layer to downscale the features. As shown in Figure 5, we downscale the feature map with a stride of $\alpha$ . In our network, we set $\alpha = 2$ . To downscale the feature map further, we stack deshuffle layers together. Supposing our network has $D$ deshuffle layers, we get the output ${F}_{HRLS}$ + +$$ +{F}_{HRLS} = {DS}{F}^{D}\left( {{DS}{F}^{D - 1}\left( {\cdots {DS}{F}^{1}\left( {F}_{HRLS}^{0}\right) \cdots }\right) }\right) , \tag{6} +$$ + +where ${DSF}\left( \cdot \right)$ indicates the operation of the deshuffle layer. By downscaling auxiliary features, our network can work at the size of the LRHS image, which significantly reduces the computational complexity of the overall network. + +We concatenate ${F}_{LRHS}$ from the LRHS image and ${F}_{HRLS}$ from the HRLS image into a combined feature map ${F}_{0}$ . + +Residual densely connected block.
We employ densely connected blocks and residual groups to build the backbone of our neural network, as they have been shown to be effective for image super resolution [54, 55]. In our network, we use 4 convolutional layers in each residual densely connected block (RDB). By stacking $B = 5$ RDBs, we build a residual densely connected group (RDG) as follows, + +$$ +{F}_{g} = {RDB}_{B}\left( {{RDB}_{B - 1}\left( {\cdots {RDB}_{1}\left( {F}_{g - 1}\right) \cdots }\right) }\right) + {F}_{g - 1}. \tag{7} +$$ + +We predict the dense feature ${F}_{DF}$ with $G = 3$ RDGs as follows, + +$$ +{F}_{DF} = {RDG}_{G}\left( {{RDG}_{G - 1}\left( {\cdots {RDG}_{1}\left( {F}_{0}\right) \cdots }\right) }\right) + {F}_{0}. \tag{8} +$$ + +Upscale. In our network, we adopt the shuffle layer from ESPCN [46] to upscale the features and estimate the high resolution prediction ${I}_{SR}$ , + +$$ +{I}_{SR} = {H}_{Rec}\left( {{UP}\left( {F}_{DF}\right) }\right) , \tag{9} +$$ + +where ${UP}\left( \cdot \right)$ denotes the upscaling operation [46]. + +Loss function. The BCR dataset is in the scene-linear color space. As shown in Figure 3, the pixel value distribution of this dataset has a long tail. The ${\ell }_{1}$ loss cannot handle this well because it is biased toward the extremely large pixel values. To handle this problem, we adopt the following robust loss, + +$$ +{\ell }_{r} = \frac{1}{N}\mathop{\sum }\limits_{{p \in {I}_{HR}}}\frac{\left| {{I}_{HR}^{p} - {I}_{SR}^{p}}\right| }{\beta + \left| {{I}_{HR}^{p} - {I}_{SR}^{p}}\right| }, \tag{10} +$$ + +where $\beta$ is the robust factor. For small differences, ${\ell }_{r}$ behaves similarly to ${\ell }_{1}$ ; for extremely large differences, ${\ell }_{r}$ approaches but always stays below 1. This prevents our network from being biased toward rare but extremely large pixel values. We set $\beta = {0.1}$ in our experiments. + +Implementation details.
We set the kernel size of all convolutional layers to $3 \times 3$ , except for the fuse convolutional layer, whose kernel size is $1 \times 1$ . Every convolutional layer is followed by a ReLU layer, except for the last convolutional layer. The shallow features, fusion features, and dense features have 64 channels. During each training iteration, we randomly select the spp of ${I}_{HRLS}$ from the set [1-8, 12, 16, 32] and the spp of ${I}_{LRHS}$ from the set [2-8, 12, 16, 32, 64, 128, 250, 1000, 4000], while making sure that the spp of ${I}_{HRLS}$ is smaller than that of ${I}_{LRHS}$ . + +We use PyTorch to implement our network. We use a mini-batch size of 16 and train the network for 500 epochs. Training takes about one week on a single Nvidia Titan Xp. We use the SGD optimizer with a learning rate of ${10}^{-4}$ . We also perform data augmentation on-the-fly by randomly cropping patches. In order to save data loading time, we pre-crop the training HR images into large ${300} \times {300}$ patches; during training, we further crop smaller patches from those large patches. The final HR patch size is set to 96 for $\times 2$ , 192 for $\times 4$ , and 256 for $\times 8$ . We select the model that works best on the validation set. + +## 5 EXPERIMENTS + +We evaluate our method by comparing it with representative state-of-the-art denoising methods for Monte Carlo rendering and with image super resolution algorithms. We also conduct ablation studies to further examine our method. We use two metrics to evaluate our results. First, we adopt RelMSE (relative mean squared error) to report the results in the scene-linear color space, which is defined as + +$$ +\operatorname{RelMSE} = {\lambda }_{1} \cdot \frac{{\left( {I}_{SR} - {I}_{HR}\right) }^{2}}{{I}_{HR}^{2} + {\lambda }_{2}}, \tag{11} +$$ + +averaged over all pixels, where ${\lambda }_{1} = {0.5}$ and ${\lambda }_{2} = {0.01}$ when experimenting on our BCR dataset, following KPCN [2].
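For concreteness, the robust training loss of Eq. (10) and the RelMSE metric of Eq. (11) can be sketched in NumPy as follows (both averaged over all pixels; the helper names are ours):

```python
import numpy as np

def robust_loss(pred, target, beta=0.1):
    """Eq. (10): |d| / (beta + |d|), averaged over pixels.

    Behaves like a scaled l1 loss for small differences but saturates
    below 1 for huge ones, so rare very bright pixels cannot dominate
    the training signal.
    """
    d = np.abs(target - pred)
    return np.mean(d / (beta + d))

def rel_mse(pred, target, lam1=0.5, lam2=0.01):
    """Eq. (11): relative mean squared error in scene-linear space."""
    return np.mean(lam1 * (pred - target) ** 2 / (target ** 2 + lam2))
```

Passing `lam1` and `lam2` explicitly makes it easy to match other evaluation settings.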
For the Gharbi dataset, we use the evaluation code from its authors [14], where ${\lambda }_{1} = 1$ and ${\lambda }_{2} = {10}^{-4}$ . + +We also use PSNR to evaluate the results in the sRGB space. For our BCR dataset, we convert images to sRGB for computing PSNR using
| Method | PSNR (2 spp) | RelMSE (2 spp) | PSNR (4 spp) | RelMSE (4 spp) | PSNR (8 spp) | RelMSE (8 spp) |
| --- | --- | --- | --- | --- | --- | --- |
| Input | 18.12 | 0.2953 | 21.51 | 0.1400 | 24.75 | 0.0646 |
| KPCN [2] | 25.87 | 0.0390 | 27.31 | 0.0299 | 28.11 | 0.0276 |
| KPCN-ft [2] | 31.03 | 0.0078 | 33.69 | 0.0043 | 35.83 | 0.0026 |
| Bitterli [3] | 26.67 | 0.0293 | 27.22 | 0.0252 | 27.45 | 0.0226 |
| Gharbi [14] | 30.73 | 0.0068 | 31.61 | 0.0057 | 32.29 | 0.0050 |
| Ours $\times 2$ (4-1 / 8-2 / 16-4) | 33.27 | 0.0044 | 35.15 | 0.0027 | 36.74 | 0.0019 |
| Ours $\times 4$ (16-1 / 32-2 / 64-4) | 33.94 | 0.0039 | 35.21 | 0.0028 | 36.31 | 0.0022 |
| Ours $\times 8$ (64-1 / 128-2 / 250-4) | 31.37 | 0.0075 | 32.35 | 0.0057 | 33.14 | 0.0049 |
+ +Table 2: Comparison on our BCR dataset. "Ours $\times 2$ " indicates that our method performs $\times 2$ super resolution, and (4-1) indicates that it takes a 4 spp LRHS and a 1 spp HRLS rendering as input, which is effectively 2 spp on average over all pixels.
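The effective sample counts quoted in the caption follow from amortizing the LRHS samples over the $s^2$ output pixels that each low-resolution pixel covers; a small sketch (the helper name is ours):

```python
def average_spp(spp_lrhs, spp_hrls, scale):
    """Effective spp per output pixel for an (LRHS, HRLS) input pair.

    The LRHS samples are shared by scale**2 output pixels, while the
    HRLS samples are taken at the output resolution directly.
    """
    return spp_lrhs / scale ** 2 + spp_hrls

print(average_spp(4, 1, 2))   # -> 2.0, the "Ours x2 (4-1)" configuration
print(average_spp(64, 1, 8))  # -> 2.0, the "Ours x8 (64-1)" configuration
```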
| Method | PSNR (4 spp) | RelMSE (4 spp) | PSNR (8 spp) | RelMSE (8 spp) | PSNR (16 spp) | RelMSE (16 spp) |
| --- | --- | --- | --- | --- | --- | --- |
| Input | 19.58 | 17.5358 | 21.91 | 7.5682 | 24.17 | 11.2189 |
| Sen [45] | 28.23 | 1.0484 | 28.00 | 0.5744 | 27.64 | 0.3396 |
| Rousselle [42] | 30.01 | 1.9407 | 32.32 | 1.9660 | 34.36 | 1.9446 |
| Kalantari [24] | 31.33 | 1.5573 | 33.00 | 1.6635 | 34.43 | 1.8021 |
| Bitterli [3] | 28.98 | 1.1024 | 30.92 | 0.9297 | 32.40 | 0.9640 |
| KPCN [2] | 29.75 | 1.0616 | 30.56 | 7.0774 | 31.00 | 20.2309 |
| KPCN-ft [2] | 29.86 | 0.5004 | 31.66 | 0.8616 | 33.39 | 0.2981 |
| Gharbi [14] | 33.11 | 0.0486 | 34.45 | 0.0385 | 35.36 | 0.0318 |
| Ours $\times 2$ (8-2 / 16-4 / 32-8) | 34.02 | 1.5025 | 35.30 | 1.4902 | 36.43 | 1.4748 |
| Ours $\times 4$ (32-2 / 64-4 / 128-8) | 33.94 | 5.5586 | 35.22 | 5.6781 | 35.97 | 5.7436 |
| Ours $\times 8$ (128-2 / 16-8 / 32-16) | 31.56 | 3.7228 | 32.60 | 4.2300 | 33.22 | 4.5045 |
+ +Table 3: Comparison on the Gharbi dataset [14]. + +Equation 3. For the Gharbi dataset, we convert images to the sRGB space using the code provided by its authors [14] as follows, + +$$ +s = \min \left( {1,\max \left( {0, l}\right) }\right) , \tag{12} +$$ + +where $l$ denotes an image in the scene-linear space and $s$ the corresponding image in the sRGB space. + +### 5.1 Comparison with Denoising Methods + +We compare our method to both state-of-the-art traditional denoising methods, including Sen et al. [45], Rousselle et al. [42], Kalantari et al. [23], and Bitterli et al. [3], and recent representative deep learning-based methods, including KPCN [2] and Gharbi et al. [14]. Unlike the other methods, our method takes both an LRHS rendering and an HRLS rendering as input. Therefore, we compute the average spp of our input as ${spp}_{avg} = {spp}_{LRHS}/{s}^{2} + {spp}_{HRLS}$ , where $s$ indicates the super resolution scale. For instance, in Table 2, "Ours $\times 2$ " indicates that our method performs $\times 2$ super resolution, and (4-1) indicates that our method takes a 4 spp LRHS and a 1 spp HRLS rendering as input, which is effectively 2 spp on average. We conducted the comparisons on both the Gharbi dataset and our BCR dataset. + +Table 2 compares our method to Bitterli et al. [3], KPCN [2], and Gharbi et al. [14]. We used the code / models shared by their authors in these experiments. For KPCN [2], we also provide a version of the results produced by their neural network fine-tuned on our BCR dataset. This experiment shows that our method, especially Ours $\times 2$ and $\times 4$ , outperforms the state-of-the-art methods by a large margin. Specifically, our $\times 4$ method leads by ${2.91}\mathrm{\;{dB}}$ in PSNR and 0.0039 in RelMSE when the spp is 2. When the spp is relatively high, our $\times 2$ method leads by ${0.91}\mathrm{\;{dB}}$ in PSNR and 0.0007 in RelMSE. Figure 7 shows several visual examples on the BCR dataset.
Our results contain fewer artifacts than those of the other methods. + +![01963eaa-2d32-7628-bac9-658d51cf5e66_4_929_150_719_215_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_4_929_150_719_215_0.jpg) + +Figure 6: Error map visualization.
| Method | 4 spp | 8 spp | 16 spp | 32 spp | 64 spp | 128 spp |
| --- | --- | --- | --- | --- | --- | --- |
| Rousselle [42] | – | – | – | – | – | 13.3 |
| Kalantari [24] | – | – | – | – | – | 10.4 |
| Bitterli [3] | – | – | – | – | – | 21.9 |
| KPCN [2] | – | – | – | – | – | 14.6 |
| Sen [45] | 281.2 | 638.1 | 1603.1 | 4847.8 | – | – |
| Gharbi [14] | 6.0 | 10.1 | 18.9 | 35.9 | 67.0 | 156.5 |
| Ours $\times 2$ | – | – | – | – | – | 0.362 |
| Ours $\times 4$ | – | – | – | – | – | 0.118 |
| Ours $\times 8$ | – | – | – | – | – | 0.052 |
+ +Table 4: Comparison of runtime cost (seconds) to denoise a ${1024} \times {1024}$ image. The data is from Gharbi [14]. If the runtime of a method is constant across spp, we report it in the last column. Our $\times 2$ , $\times 4$ , and $\times 8$ methods are at least ${17} \times$ , ${51} \times$ , and ${115} \times$ faster than the state-of-the-art methods, respectively. + +We were not able to compare to additional methods on our BCR dataset, as these methods work with other rendering engines or use very different input formats. We compare to these methods on the Gharbi dataset, as reported in Table 3. We obtained the results of the comparing methods from Gharbi et al. [14]. For our results, we directly used our neural network trained on our BCR dataset without fine-tuning on the Gharbi training dataset. ${}^{4}$ As shown in Table 3, our method outperforms all the other methods in terms of PSNR. + +However, the RelMSE of our results is higher than that of some existing methods, such as KPCN [2] and Gharbi [14]. We looked into the discrepancy between the results measured using PSNR and RelMSE and found that the RelMSE metric is heavily affected by a small number of pixels with abnormally large errors in our results. Figure 6 shows the RelMSE error map of one of our results whose error is much larger than Gharbi's: the errors concentrate in the region around the bright light, with 16 pixels having errors larger than ${10}^{6}$ , which contribute most of the error of the whole image. After excluding these 16 pixels, our error is still larger than Gharbi's, but the difference is much smaller. We would like to point out that our method was trained on our BCR dataset only and was not fine-tuned on the Gharbi dataset, as its training set is not available. Moreover, BCR and the Gharbi dataset were rendered using different engines and thus contain different intermediate layers.
To test on the Gharbi examples, we had to set the variance layer to a constant value, which compromises our results. + +Figure 8 shows visual comparisons between our method and several existing methods. Although our RelMSE is higher than Gharbi's [14], our results look more plausible, which is consistent with our higher PSNR values measured in the sRGB space. In the first example, the seat in our result contains far fewer artifacts. In the second example, the highlight in our result is more accurate than in the others. This shows that our model and the BCR dataset generalize well. + +--- + +${}^{4}$ We removed one image from the Gharbi testing set as its source model is also included in our BCR training set. + +--- + +![01963eaa-2d32-7628-bac9-658d51cf5e66_5_154_169_1527_1907_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_5_154_169_1527_1907_0.jpg) + +Figure 8: Visual comparison on the Gharbi dataset [14].
| Method | PSNR ($\times 2$) | RelMSE ($\times 2$) | PSNR ($\times 4$) | RelMSE ($\times 4$) | PSNR ($\times 8$) | RelMSE ($\times 8$) |
| --- | --- | --- | --- | --- | --- | --- |
| Bicubic | 30.57 | 0.0141 | 25.39 | 0.0858 | 22.36 | 0.2473 |
| EDSR [35] | 32.01 | 0.0079 | 30.70 | 0.0119 | 27.97 | 0.0241 |
| RCAN [54] | 32.03 | 0.0084 | 30.73 | 0.0117 | 27.92 | 0.0253 |
| Ours | 38.40 | 0.0015 | 34.27 | 0.0039 | 31.08 | 0.0079 |
+ +Table 5: Comparison with super resolution algorithms on the BCR dataset.
| LRHS spp \ HRLS spp | PSNR (1 spp) | RelMSE (1 spp) | PSNR (2 spp) | RelMSE (2 spp) | PSNR (4 spp) | RelMSE (4 spp) |
| --- | --- | --- | --- | --- | --- | --- |
| 2 spp | 32.14 | 0.0056 | – | – | – | – |
| 4 spp | 32.94 | 0.0048 | 33.76 | 0.0038 | – | – |
| 8 spp | 33.52 | 0.0042 | 34.41 | 0.0033 | 35.20 | 0.0027 |
| 16 spp | 33.94 | 0.0039 | 34.88 | 0.0030 | 35.71 | 0.0025 |
| 32 spp | 34.22 | 0.0037 | 35.21 | 0.0028 | 36.06 | 0.0023 |
| 64 spp | 34.42 | 0.0035 | 35.44 | 0.0027 | 36.31 | 0.0022 |
| 128 spp | 34.56 | 0.0035 | 35.60 | 0.0026 | 36.49 | 0.0021 |
+ +Table 6: The effect of spp values on the final rendering results. + +Speed. We report the speeds of the above methods in Table 4. We use the same setting as Gharbi [14] and obtain the timing data for the comparing methods from them as well. We report the aggregated spp for our method by combining the samples used to render both the HRLS and LRHS inputs. Since all the methods use the same spp, we only include the time needed for denoising. We report the time for processing one ${1024} \times {1024}$ image on one Nvidia Titan Xp GPU. Our $\times 2$ , $\times 4$ , and $\times 8$ methods are at least ${17} \times$ , ${51} \times$ , and ${115} \times$ faster than the state-of-the-art method Gharbi [14], respectively. + +### 5.2 Comparisons with Super Resolution Methods + +We also compare our method with several baseline methods that use super resolution to upsample the low-resolution-high-spp renderings to the target size. In this experiment, we used the trained models shared by the authors of these super resolution methods [35, 54] and fine-tuned them on our BCR dataset. As reported in Table 5, our method generates significantly better results than these super resolution methods. While this comparison is unfair to the baseline methods, it clearly shows the benefit of taking an extra high-resolution-low-spp rendering as input. As shown in Figure 11, our results contain fine details that are missing from the super resolution results. + +### 5.3 Ablation study + +We now examine several key components of our method. + +Input layers of ${I}_{LRHS}$ and ${I}_{HRLS}$ . We examine how the rendering layers affect the final results. In this experiment, we use 1 spp for ${I}_{HRLS}$ and 4000 spp for ${I}_{LRHS}$ . The upsampling scale is set to $\times 4$ . Our neural network contains two input branches, one for ${I}_{LRHS}$ and the other for ${I}_{HRLS}$ .
In this experiment, we fix the input layers of one branch to RGB while changing the input layers of the other. For the model labeled "None", we remove that branch entirely. As shown in Figure 9, compared with removing the branch, feeding ${I}_{HRLS}$ greatly improves the results. Among the various input layers, RGB improves the results by a large margin, and the result can be further improved if ${I}_{HRLS}$ includes all the rendering layers. We believe that these improvements come from the high-frequency information in ${I}_{HRLS}$ . For ${I}_{LRHS}$ , while the other input layers still help, RGB alone already achieves the best result. We conjecture that since ${I}_{LRHS}$ is rendered with a high spp, its RGB layer is already of a very high quality and the other intermediate layers do not contribute further. On the other hand, the intermediate layers of ${I}_{HRLS}$ provide useful information for denoising, which is consistent with the findings of previous denoising methods [2, 14]. + +![01963eaa-2d32-7628-bac9-658d51cf5e66_6_891_149_751_693_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_6_891_149_751_693_0.jpg) + +Figure 10: Comparison between ${\ell }_{r}$ and ${\ell }_{1}$ . + +Robust loss ${\ell }_{r}$ . We examine the effect of the parameter $\beta$ in our robust loss and also compare it to the standard ${\ell }_{1}$ loss. In this experiment, we use 4000 spp for ${I}_{LRHS}$ and 1 spp for ${I}_{HRLS}$ . The upsampling scale is set to $\times 4$ . The input channels of ${I}_{LRHS}$ and ${I}_{HRLS}$ are set to RGB. Figure 10 shows that the robust loss ${\ell }_{r}$ with $\beta = {0.1}$ outperforms ${\ell }_{1}$ by a large margin, as it avoids the bias toward a very small number of pixels with extremely large values. We also find that using too large or too small a $\beta$ harms the results.
This is because a very large $\beta$ value reduces the robust loss to the ${\ell }_{1}$ loss, while a very small $\beta$ value makes the loss always close to 1 regardless of the error between the output and the ground truth. + +SPP of ${I}_{LRHS}$ and ${I}_{HRLS}$ . We examine how our method works with different spp values used to render ${I}_{LRHS}$ and ${I}_{HRLS}$ . In this experiment, we set the upsampling scale to $\times 4$ . Table 6 shows that rendering at higher spp values consistently leads to better final results. + +![01963eaa-2d32-7628-bac9-658d51cf5e66_7_142_147_1528_1432_0.jpg](images/01963eaa-2d32-7628-bac9-658d51cf5e66_7_142_147_1528_1432_0.jpg) + +## 6 CONCLUSION + +This paper presented a hybrid rendering method to speed up Monte Carlo rendering algorithms. We designed a two-encoder-one-decoder network for this task. Our network takes a low resolution image with a high spp and a high resolution image with a low spp as inputs, and estimates a high resolution, high quality image. We built a large-scale ray-tracing dataset, the Blender Cycles Ray-tracing (BCR) dataset. Our experiments showed that our method is able to generate high quality high resolution images quickly, and that the HRLS input and the robust loss are helpful for generating high quality results. + +## REFERENCES + +[1] N. Ahn, B. Kang, and K.-A. Sohn. Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the European Conference on Computer Vision, pp. 252-268, 2018. + +[2] S. Bako, T. Vogels, B. McWilliams, M. Meyer, J. Novák, A. Harvill, P. Sen, T. Derose, and F. Rousselle. Kernel-predicting convolutional networks for denoising Monte Carlo renderings. ACM Transactions on Graphics (TOG), 36(4):97, 2017. + +[3] B. Bitterli, F. Rousselle, B. Moon, J. A. Iglesias-Guitián, D. Adler, K. Mitchell, W. Jarosz, and J. Novák. Nonlinearly weighted first-order regression for denoising Monte Carlo renderings. In Computer Graphics Forum, vol. 35, pp.
107-117. Wiley Online Library, 2016. + +[4] Blender. Blender color linear to srgb. https://github.com/blender/blender/blob/6c9178b183f5267e@7a6c55497b6d496e468a709/intern/cycles/util/util_color.h#L77. Accessed: 2021-04-01. + +[5] Blender. Blender color management. https://docs.blender.org/manual/en/dev/render/color_management.html, 2020. Accessed: 2020-03-05. + +[6] Blender. Blender passes. https://docs.blender.org/manual/en/latest/render/layers/passes.html, 2020. Accessed: 2020-03-06. + +[7] Blender. Cycles design goals. https://wiki.blender.org/wiki/Source/Render/Cycles/DesignGoals, 2020. Accessed: 2021-04-01. + +[8] M. R. Bolin and G. W. Meyer. A perceptually based adaptive sampling algorithm. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pp. 299-309. ACM, 1998. + +[9] C. R. A. Chaitanya, A. S. Kaplanyan, C. Schied, M. Salvi, A. Lefohn, D. Nowrouzezahrai, and T. Aila. Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder. ACM Transactions on Graphics (TOG), 36(4):98, 2017. + +[10] R. L. Cook, T. Porter, and L. Carpenter. Distributed ray tracing. In ACM SIGGRAPH computer graphics, vol. 18, pp. 137-145, 1984. + +[11] H. Dammertz, D. Sewtz, J. Hanika, and H. Lensch. Edge-avoiding à-trous wavelet transform for fast global illumination filtering. In Proceedings of the Conference on High Performance Graphics, pp. 67-75. Eurographics Association, 2010. + +[12] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In European conference on computer vision, pp. 184-199. Springer, 2014. + +[13] K. Egan, Y.-T. Tseng, N. Holzschuch, F. Durand, and R. Ramamoorthi. Frequency analysis and sheared reconstruction for rendering motion blur. In ACM Transactions on Graphics, vol. 28, p. 93, 2009. + +[14] M. Gharbi, T.-M. Li, M. Aittala, J. Lehtinen, and F. Durand.
Sample-based Monte Carlo denoising using a kernel-splatting network. ACM Transactions on Graphics (TOG), 38(4):1-12, 2019. + +[15] M. Haris, G. Shakhnarovich, and N. Ukita. Recurrent back-projection network for video super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3897-3906, 2019. + +[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. + +[17] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132-7141, 2018. + +[18] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700-4708, 2017. + +[19] Z. Hui, X. Wang, and X. Gao. Fast and accurate single image super-resolution via information distillation network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 723-731, 2018. + +[20] H. W. Jensen. Realistic image synthesis using photon mapping. AK Peters/CRC Press, 2001. + +[21] H. W. Jensen and N. J. Christensen. Optimizing path tracing using noise reduction filters. 1995. + +[22] J. T. Kajiya. The rendering equation. In ACM SIGGRAPH computer graphics, vol. 20, pp. 143-150. ACM, 1986. + +[23] N. K. Kalantari, S. Bako, and P. Sen. A machine learning approach for filtering Monte Carlo noise. ACM Trans. Graph., 34(4):122-1, 2015. + +[24] N. K. Kalantari and P. Sen. Removing the noise in Monte Carlo rendering with general image denoising algorithms. In Computer Graphics Forum, vol. 32, pp. 93-102. Wiley Online Library, 2013. + +[25] S. Kallweit, T. Müller, B. Mcwilliams, M. Gross, and J. Novák. Deep scattering: Rendering atmospheric clouds with radiance-predicting neural networks.
ACM Transactions on Graphics, 36(6):1-11, 2017. + +[26] A. Keller, L. Fascione, M. Fajardo, I. Georgiev, P. H. Christensen, J. Hanika, C. Eisenacher, and G. Nichols. The path tracing revolution in the movie industry. In SIGGRAPH Courses, pp. 24-1, 2015. + +[27] J. Kim, J. Kwon Lee, and K. Mu Lee. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1637-1645, 2016. + +[28] M. Koskela, K. Immonen, M. Mäkitalo, A. Foi, T. Viitanen, P. Jääskeläinen, H. Kultala, and J. Takala. Blockwise multi-order feature regression for real-time path-tracing reconstruction. ACM Transactions on Graphics (TOG), 38(5):138, 2019. + +[29] A. Kuznetsov, N. K. Kalantari, and R. Ramamoorthi. Deep adaptive sampling for low sample count rendering. In Computer Graphics Forum, vol. 37, pp. 35-44, 2018. + +[30] S. Laine, H. Saransaari, J. Kontkanen, J. Lehtinen, and T. Aila. Incremental instant radiosity for real-time indirect illumination. In Proceedings of the 18th Eurographics conference on Rendering Techniques, pp. 277-286. Eurographics Association, 2007. + +[31] M. E. Lee and R. A. Redner. A note on the use of nonlinear filtering in computer graphics. IEEE Computer Graphics and Applications, 10(3):23-29, 1990. + +[32] T. Leimkühler, H.-P. Seidel, and T. Ritschel. Laplacian kernel splatting for efficient depth-of-field and motion blur synthesis or reconstruction. ACM Transactions on Graphics, 37(4), 2018. + +[33] T.-M. Li, Y.-T. Wu, and Y.-Y. Chuang. Sure-based optimization for adaptive sampling and reconstruction. ACM Transactions on Graphics (TOG), 31(6):194, 2012. + +[34] Y. Li, V. Tsiminaki, R. Timofte, M. Pollefeys, and L. V. Gool. 3d appearance super-resolution with deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9671-9680, 2019. + +[35] B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee.
Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 136-144, 2017. + +[36] Z.-S. Liu, L.-W. Wang, C.-T. Li, and W.-C. Siu. Hierarchical back projection network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0-0, 2019. + +[37] M. D. McCool. Anisotropic diffusion for Monte Carlo noise reduction. ACM Transactions on Graphics (TOG), 18(2):171-194, 1999. + +[38] S. U. Mehta, J. Yao, R. Ramamoorthi, and F. Durand. Factored axis-aligned filtering for rendering multiple distribution effects. ACM Transactions on Graphics (TOG), 33(4):57, 2014. + +[39] M. Meyer and J. Anderson. Statistical acceleration for animated global illumination. In ACM Transactions on Graphics (TOG), vol. 25, pp. 1075-1080. ACM, 2006. + +[40] B. Moon, S. McDonagh, K. Mitchell, and M. Gross. Adaptive polynomial rendering. ACM Transactions on Graphics (TOG), 35(4):40, 2016. + +[41] R. S. Overbeck, C. Donner, and R. Ramamoorthi. Adaptive wavelet rendering. ACM Trans. Graph., 28(5):140, 2009. + +[42] F. Rousselle, C. Knaus, and M. Zwicker. Adaptive sampling and reconstruction using greedy error minimization. In ACM Transactions on Graphics (TOG), vol. 30, p. 159. ACM, 2011. + +[43] H. E. Rushmeier and G. J. Ward. Energy preserving non-linear filters. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pp. 131-138. ACM, 1994. + +[44] B. Segovia, J. C. Iehl, R. Mitanchey, and B. Péroche. Non-interleaved deferred shading of interleaved sample patterns. In Graphics Hardware, pp. 53-60, 2006. + +[45] P. Sen and S. Darabi. On filtering the noise from the random parameters in Monte Carlo rendering. ACM Trans. Graph., 31(3):18-1, 2012. + +[46] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang.
Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1874-1883, 2016. + +[47] B. Walter, A. Arbree, K. Bala, and D. P. Greenberg. Multidimensional lightcuts. ACM Transactions on Graphics, 25(3):1081-1088, 2006. + +[48] G. J. Ward, F. M. Rubinstein, and R. D. Clear. A ray tracing solution for diffuse interreflection. ACM SIGGRAPH Computer Graphics, 22(4):85-92, 1988. + +[49] L. Wu, L.-Q. Yan, A. Kuznetsov, and R. Ramamoorthi. Multiple axis-aligned filters for rendering of combined distribution effects. In Computer Graphics Forum, vol. 36, pp. 155-166, 2017. + +[50] R. Xu and S. N. Pattanaik. A novel Monte Carlo noise reduction operator. IEEE Computer Graphics and Applications, 25(2):31-35, 2005. + +[51] X. Xu, Y. Ma, and W. Sun. Towards real scene super-resolution with raw images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1723-1731, 2019. + +[52] L.-Q. Yan, S. U. Mehta, R. Ramamoorthi, and F. Durand. Fast 4d sheared filtering for interactive rendering of distribution effects. ACM Transactions on Graphics (TOG), 35(1):7, 2015. + +[53] K. Zhang, W. Zuo, and L. Zhang. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3262-3271, 2018. + +[54] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 286-301, 2018. + +[55] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2472-2481, 2018. + +[56] Z. Zhang, Z. Wang, Z. Lin, and H. Qi. Image super-resolution by neural texture transfer.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7982-7991, 2019. + +[57] H. Zimmer, F. Rousselle, W. Jakob, O. Wang, D. Adler, W. Jarosz, O. Sorkine-Hornung, and A. Sorkine-Hornung. Path-space motion estimation and decomposition for robust animation filtering. In Computer Graphics Forum, vol. 34, pp. 131-142, 2015. + +[58] M. Zwicker, W. Jarosz, J. Lehtinen, B. Moon, R. Ramamoorthi, F. Rousselle, P. Sen, C. Soler, and S.-E. Yoon. Recent advances in adaptive sampling and reconstruction for Monte Carlo rendering. In Computer Graphics Forum, vol. 34, pp. 667-681, 2015. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/m4WytW0txaS/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/m4WytW0txaS/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..8484f8315a89123ca2894a76a1005fbc72f7893a --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/m4WytW0txaS/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,445 @@ +§ FAST MONTE CARLO RENDERING VIA MULTI-RESOLUTION SAMPLING + +Category: Research + +§ ABSTRACT + +Monte Carlo rendering algorithms are widely used to produce photo-realistic computer graphics images. However, these algorithms need to sample a substantial number of rays per pixel to enable proper global illumination and thus require an immense amount of computation. In this paper, we present a hybrid rendering method to speed up Monte Carlo rendering algorithms.
Our method first generates two versions of a rendering: one at a low resolution with a high sample rate (LRHS) and the other at a high resolution with a low sample rate (HRLS). We then develop a deep convolutional neural network to fuse these two renderings into a high-quality image as if it were rendered at a high resolution with a high sample rate. Specifically, we formulate this fusion task as a super resolution problem that generates a high resolution rendering from a low resolution input (LRHS), assisted by the HRLS rendering. The HRLS rendering provides critical high frequency details that are difficult for any super resolution method to recover from the LRHS input. Our experiments show that our hybrid rendering algorithm is significantly faster than the state-of-the-art Monte Carlo denoising methods while rendering high-quality images when tested on both our own BCR dataset and the Gharbi dataset [14]. + +Index Terms: Computing methodologies-Computer graphics-Ray tracing + +§ 1 INTRODUCTION + +Physically-based image synthesis has attracted considerable attention due to its wide applications in visual effects, video games, design visualization, and simulation [26]. Among these techniques, ray tracing methods have achieved remarkable success as the most practical realistic image synthesis algorithms. For each pixel, they cast numerous rays that are bounced back from the environment to collect photons from light sources and integrate them to compute the color of that pixel. In this way, ray tracing methods are able to generate images with a very high degree of visual realism. However, obtaining visually satisfactory renderings with ray tracing algorithms often requires casting a large number of rays and thus a vast amount of computation.
The extensive computational and memory requirements of ray tracing methods pose a challenge, especially when running these rendering algorithms on resource-constrained platforms, and impede applications that require high resolutions and refresh rates. + +To speed up ray tracing, Monte Carlo rendering algorithms are used to reduce the number of ray samples per pixel (spp) that a ray tracing method needs to cast [10]. For instance, adaptive reconstruction methods control sampling densities according to the reconstruction error estimated from existing ray samples [58]. However, when the ray sample rate is not sufficiently high, the rendering results from a Monte Carlo algorithm are often noisy. Therefore, the ray tracing results are usually post-processed to reduce the noise using algorithms like bilateral filtering and guided image filtering [28, 38, 43, 45, 49, 52, 57]. Recently, deep learning-based denoising approaches have been developed to reduce the noise from Monte Carlo rendering algorithms [2, 9, 23, 29]. These methods achieve high-quality results with impressive time reductions, and some of them have been incorporated into commercial tools, such as VRay Renderer, Corona Renderer, and RenderMan, and open source renderers like Blender. However, real-time ray tracing is still a challenging problem, especially on devices with limited computing resources. + +Our idea to speed up ray tracing is to reduce the number of pixels for which we need to estimate color values. For instance, upsampling by $2 \times 2$ removes the need to ray trace ${75}\%$ of the pixels. There are two main challenges in super-resolving a Monte Carlo rendering. First, it is a fundamentally ill-posed problem to recover the high-frequency visual details that are missing from the low-resolution input. Second, a Monte Carlo rendering is subject to sampling noise, especially when it is produced at a low spp rate.
Upsampling a noisy image will often amplify the noise as well. To address these challenges, we propose to generate two versions of a rendering: a low-resolution rendering at a reasonably high spp rate (LRHS) and a high-resolution rendering at a lower spp rate (HRLS). The LRHS is less noisy, while the noisier HRLS can provide high-frequency visual details that are inherently difficult to recover from the low resolution image.

We accordingly develop a hybrid rendering method dedicated to images rendered by a Monte Carlo rendering algorithm. Our neural network takes both the LRHS and HRLS renderings as input. We use a de-shuffle layer to downsample the HRLS rendering to make it the same size as the LRHS and to reduce the computational cost. Then we concatenate the features from both LRHS and HRLS and feed them to the rest of the network to generate the high-quality high resolution rendering. Our experiments show that, given the hybrid input, our method outperforms the state-of-the-art Monte Carlo rendering algorithms significantly.

As there is no large scale ray-tracing dataset available to train our network, we collected the first large scale Blender Cycles Ray-tracing (BCR) dataset, which contains 2449 high-quality images rendered from 1463 models. The dataset covers various factors that affect the Monte Carlo noise distribution, such as depth of field, motion blur, and reflections. We render the images at a range of spp rates, including 1-8, 12, 16, 32, 64, 128, 250, 1000, and 4000 spp. All the images are rendered at a resolution of 1080p. Each image contains not only the final rendered result but also the intermediate render layers, including albedo, normal, diffuse, glossy, and so on.

This paper contributes to the research on photo-realistic image synthesis by integrating Monte Carlo rendering and image super resolution for efficient high-quality image rendering.
First, we explore super resolution to reduce the number of pixels that need ray tracing. Second, we use multi-resolution sampling to both reduce noise and create visual details. Third, we develop the first large ray-tracing image dataset, which will be made publicly available.

§ 2 RELATED WORK

Monte Carlo rendering is an important technology for photo-realistic rendering. It aims to reduce the number of rays that a ray tracing algorithm needs to cast and integrate while synthesizing a high quality image [10, 22]. Conventional Monte Carlo rendering algorithms investigate various ways to adaptively distribute ray samples [8, 13, 20, 33, 39-42, 47, 48]. When only a small number of rays are cast, the rendered images are often noisy. They are typically filtered using various algorithms [11, 21, 30, 31, 37, 43-45, 50]. Due to the space limit, we refer readers to a recent survey on Monte Carlo rendering [58].

Figure 1: This paper presents a hybrid rendering method to speed up Monte Carlo rendering. Our method takes a low resolution with a high sample rate rendering (LRHS) and a high resolution with a low sample rate rendering (HRLS) as inputs, and produces the high resolution high quality result.

Our research is more related to the recent deep learning approaches to Monte Carlo rendering denoising. Kalantari et al. trained a multilayer perceptron neural network to learn the parameters of filters before applying these filters to the noisy images [23]. Bako et al. extended this method by employing filters with spatially adaptive kernels to denoise Monte Carlo renderings [2]. They developed a convolutional neural network method to estimate spatially adaptive filter kernels. Chaitanya et al. developed an encoder-decoder network with recurrent connections to denoise a Monte Carlo image sequence [9]. Recently, Kuznetsov et al.
[29] developed a deep convolutional neural network approach that combines adaptive sampling and image denoising to optimize rendering performance. Different from the above methods, Gharbi et al. argued that splatting samples onto relevant pixels is more effective for denoising than gathering relevant samples for each pixel. Accordingly, they developed a novel kernel-splatting architecture that estimates the splatting kernel for each sample, which was shown to be particularly effective when only a small number of samples are used [14]. Compared to these methods, our method improves the speed of Monte Carlo rendering by reducing the number of pixels that we need to cast rays for.

Our work also builds upon the success of deep image super resolution methods [1, 12, 15, 19, 27, 34-36, 51, 53, 56]. Dong et al. developed the first deep learning approach to image super resolution [12]. They designed a three-layer fully convolutional neural network and showed that a neural network could be trained end to end for super resolution. Since then, a variety of neural network architectures, such as residual networks [16], densely connected networks [18], and squeeze-and-excitation networks [17], have been introduced to the task of image super resolution. For instance, Kim et al. developed a deep neural network that employs residual architectures and obtained promising results [27]. Lim et al. further improved super resolution results by removing batch norm layers and increasing the depth of networks [35]. Zhang et al. developed a residual densely connected network that is able to exploit intermediate features via local dense connections for better image super resolution [55]. Zhang et al. recently reported that a channel-wise attention network, which learns attention as guidance to model channel-wise features, can more effectively super resolve a low resolution image [54].
While these image super resolution methods have achieved promising results, recovering visual details that do not exist in the input image is necessarily an ill-posed problem. Our method addresses this fundamentally challenging problem by leveraging a high-resolution image rendered at a low ray sample rate. Such an auxiliary rendering can be produced quickly and yet provides visual details that do not exist in the low resolution input rendered at a high sample rate.

§ 3 THE BLENDER CYCLES RAY-TRACING DATASET

To the best of our knowledge, there is no large scale ray-tracing dataset publicly available for training a deep neural network. Therefore, we develop the Blender Cycles Ray-tracing (BCR) dataset, which consists of a large number of high quality scenes together with the ray-tracing images and the intermediate rendering layers. We will share BCR with our community.

§ 3.1 SOURCE SCENES

Blender's Cycles is a popular ray tracing engine that is capable of high-quality production rendering. It has an open and active community where thousands of artists share their work. Using the Blender community assets, we collected over 8000 scenes under Creative Commons licenses${}^{1,2,3}$, which allow us to share our dataset with the research community. We rendered these scenes at 4000 spp and manually checked the rendered images and all the rendering layers. We eliminated scenes with missing materials, a lack of high frequency information, or noticeable rendering noise even at 4000 spp. This culling process reduced the total number of source scenes to 1463. The remaining scenes produced 2449 images by rendering from 1 to 10 viewpoints per scene. We split the dataset into 3 subsets: 2126 images from 1283 scenes as the training set, 193 images from 76 scenes as the validation set, and 130 images from 104 scenes as the test set. No scene appears in more than one subset.
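As a quick sanity check, the split sizes quoted above do sum to the totals reported for the dataset (2449 images from 1463 scenes); this throwaway arithmetic sketch is ours, not part of the paper's pipeline:

```python
# Image and scene counts for the three BCR splits, as quoted in the text.
images = {"train": 2126, "val": 193, "test": 130}
scenes = {"train": 1283, "val": 76, "test": 104}

assert sum(images.values()) == 2449  # total images in the dataset
assert sum(scenes.values()) == 1463  # total source scenes after culling
```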
As shown in Figure 2, our dataset covers various optical phenomena, such as motion blur, depth of field, and complex light transport effects. It covers a variety of scene contents, including indoor scenes, buildings, landscapes, fruits, plants, vehicles, animals, glass, and so on.

§ 3.2 RENDERING SETTINGS

To generate the high-quality "ground-truth" renderings, we rendered each scene at 4000 spp. As described previously, we noticed that the rendered images for some scenes still contained noticeable noise even at 4000 spp, and we removed them through manual inspection. On average, it took around 20 minutes to render an image on an Nvidia Titan X Pascal GPU. We set the rendering resolution to $1920 \times 1080$ or $1080 \times 1080$ to cover most of the scene content. For each image, we provide both the final rendered image and the render layers, which are essential for Monte Carlo rendering [2, 3, 9, 23, 25, 29, 32, 33]. In total, each image has 33 rendering layers, including albedo, normals, depth, diffuse color, diffuse direct, diffuse indirect, glossy color, and so on.
All images in the BCR dataset can be produced from the render layers as follows [6]:

$$
I_{HR} = I_{\text{Diff}} + I_{\text{Gloss}} + I_{\text{Sub}} + I_{\text{Trans}} + I_{\text{Env}} + I_{\text{Emit}}, \tag{1}
$$

where the diffuse, gloss, subsurface, and transmission layers are generated from their color, direct light, and indirect light layers:

$$
I_{\text{Diff}} = I_{\text{DiffCol}} * (I_{\text{DiffDir}} + I_{\text{DiffInd}}),
$$

$$
I_{\text{Gloss}} = I_{\text{GlossCol}} * (I_{\text{GlossDir}} + I_{\text{GlossInd}}), \tag{2}
$$

$$
I_{\text{Sub}} = I_{\text{SubCol}} * (I_{\text{SubDir}} + I_{\text{SubInd}}),
$$

$$
I_{\text{Trans}} = I_{\text{TransCol}} * (I_{\text{TransDir}} + I_{\text{TransInd}}).
$$

${}^{1}$ http://www.blendswap.com

${}^{2}$ https://blenderartists.org

${}^{3}$ https://gumroad.com/senad

Figure 2: Examples from our BCR dataset.

Figure 3: Pixel value distribution of our BCR dataset. The rendered images use the scene linear color space and the pixel value is represented in float32. We use a logarithmic scale for the $y$ axis. While most pixels are in the range of $[0,1]$, the distribution has a long tail.

Besides rendering 4000-spp images as ground truth, we rendered each scene at 1-8, 12, 16, 32, 64, 128, 250, and 1000 spp as input for Monte Carlo rendering enhancement algorithms, including ours. The rendered images and the auxiliary layers were saved in the scene linear color space, which closely corresponds to natural colors [5]. These images were rendered with a high dynamic range, and the pixel values were represented in float32.
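The render-layer composition of Eqs. (1)-(2) can be sketched in a few lines of NumPy; the key names below follow the equations' subscripts and are illustrative (Blender's internal pass names may differ):

```python
import numpy as np

def compose_hr(layers: dict) -> np.ndarray:
    """Compose the final HDR image from render layers per Eqs. (1)-(2).

    `layers` maps names such as 'DiffCol' or 'Env' to float32 arrays of
    shape (H, W, 3).
    """
    img = layers["Env"] + layers["Emit"]
    for p in ("Diff", "Gloss", "Sub", "Trans"):
        # Each component is its color layer times (direct + indirect light).
        img = img + layers[p + "Col"] * (layers[p + "Dir"] + layers[p + "Ind"])
    return img
```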
As shown in Figure 3, most of the pixel values are in the range of $[0,1]$. However, the pixel value distribution has a long tail. We also noticed that many of the very large values come from firefly rendering artifacts. Therefore, we removed these outliers by clipping at a value of 100. An image in the scene linear space can be converted to the sRGB space for visualization in this paper as follows:

$$
s = \left\{ \begin{array}{ll} 0 & \text{ if }l \leq 0, \\ {12.92} \times l & \text{ if }0 < l \leq {0.0031308}, \\ {1.055} \times {l}^{\frac{1}{2.4}} - {0.055} & \text{ if }{0.0031308} < l < 1, \\ 1 & \text{ if }l \geq 1, \end{array}\right. \tag{3}
$$

where $l$ and $s$ indicate the pixel value in the scene linear color space and sRGB, respectively [4].

§ 3.3 LOW RESOLUTION IMAGE GENERATION

A straightforward way to generate low resolution images is to change the output resolution in Cycles. However, directly rendering a low resolution image does not always work [7]. For example, some scenes are modeled using subdivision, and changing the rendering resolution disrupts the inherent relationship among the material and geometry settings in the scene files, causing mismatches between images rendered at different resolutions. Therefore, we generate low resolution images by downsampling the corresponding high resolution rendered images via nearest neighbor degradation. We did not use bilinear or bicubic sampling because nearest neighbor degradation more accurately simulates a real-world rendering engine: rays for low resolution renderings are sampled on a sparser grid than those for high resolution ones.
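The nearest neighbor degradation described above amounts to strided subsampling; a minimal sketch (the function name is ours):

```python
import numpy as np

def nearest_neighbor_downsample(hr: np.ndarray, s: int) -> np.ndarray:
    """Keep every s-th pixel of an (H, W, C) image.

    This mimics casting rays on a sparser pixel grid, unlike bilinear or
    bicubic filtering, which would average samples that a real low
    resolution render never takes.
    """
    return hr[::s, ::s]
```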
§ 3.4 MONTE CARLO RENDERING DATASET COMPARISON

| Dataset | Images | Scenes | SPP | Layers |
| --- | --- | --- | --- | --- |
| Kalantari [24] | 500 | 20 | 4, 8, 16, 32, 64, 32000 | 5 |
| KPCN [2] | 600 | - | 32, 128, 1024 | 6 |
| Chaitanya [9] | - | 3 | 1, 4, 8, 16, 32, 256, 2000 | 3 |
| Kuznetsov [29] | 700 | 50 | 1, 2, 3, 4, 1024 | 4 |
| BCR dataset | 2449 | 1463 | 1-8, 12, 16, 32, 64, 128, 250, 1000, 4000 | 33 |

Table 1: Monte Carlo rendering dataset comparison.

We compare our dataset with those used in recent deep learning-based Monte Carlo rendering denoising algorithms, including [2, 9, 23, 29]. As reported in Table 1, our dataset has over $3 \times$ the number of images and over $25 \times$ the number of scenes of the other datasets. Moreover, all these existing datasets are private, whereas we will make our dataset public.

§ 4 METHOD

Our method takes a low-resolution-high-spp image $I_{LRHS}$ and its corresponding high-resolution-low-spp image $I_{HRLS}$ as input and aims to estimate a corresponding HR image $I_{SR}$. $I_{LRHS}$ contains the RGB channels, while $I_{HRLS}$ is composed of the RGB channels and extra layers, including the albedo, normal, diffuse, specular, and variance layers, as these extra layers can provide high-frequency visual details.

As shown in Figure 4, we design a two-encoder-one-decoder network to estimate the HR image. Given $I_{LRHS}$ and $I_{HRLS}$, our network first extracts the features $F_{LRHS}$ and $F_{HRLS}$, respectively. We leverage a downscale module with de-shuffle layers [46] instead of pooling layers to downscale the feature maps, as de-shuffle layers keep the high-frequency information. Compared with upscaling the LRHS features, downsampling the HRLS features to the same size as $F_{LRHS}$ reduces the computational complexity of the network significantly. It also enables the features to be fused in the earlier layers of the network.
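The de-shuffle downscaling is a pixel-unshuffle rearrangement: each $\alpha \times \alpha$ spatial block moves into the channel dimension, so no information is discarded. The NumPy sketch below is illustrative (the function name and array layout are ours, not the paper's implementation):

```python
import numpy as np

def deshuffle(x: np.ndarray, alpha: int = 2) -> np.ndarray:
    """Rearrange a (C, H, W) map into (alpha^2 * C, H/alpha, W/alpha).

    Unlike pooling, this is lossless: every input value survives as a
    channel entry, so high-frequency detail is preserved.
    """
    c, h, w = x.shape
    assert h % alpha == 0 and w % alpha == 0
    x = x.reshape(c, h // alpha, alpha, w // alpha, alpha)
    x = x.transpose(0, 2, 4, 1, 3)  # (C, alpha, alpha, H', W')
    return x.reshape(c * alpha * alpha, h // alpha, w // alpha)
```

Stacking several such layers downscales by larger factors, and the inverse rearrangement (pixel shuffle) performs the upscaling at the network's output.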
We obtain the fused feature $F_{0}$ by combining $F_{HRLS}$ with $F_{LRHS}$ through a fusion module and feed it to a sequence of residual dense groups (RDGs) [54, 55]. We combine the feature $F_{G}$ from the RDGs with $F_{LRHS}$ by element-wise addition. Finally, we upscale the resulting dense feature $F_{DF}$ and predict the final HR image $I_{SR}$ through a convolutional layer. Below we describe the network in detail.

Figure 4: The architecture of our network. Our network takes a low-resolution-high-spp rendering (LRHS) and its corresponding high-resolution-low-spp rendering (HRLS) as input and predicts the final high-resolution-high-quality image.

Figure 5: Deshuffle layer for downscaling feature maps. A feature map of shape $C \times (\alpha H) \times (\alpha W)$ is rearranged into $(\alpha^{2}C) \times H \times W$.

LRHS shallow feature $F_{LRHS}$. Following [35, 54, 55], we adopt a convolutional layer to obtain the shallow feature $F_{LRHS}$:

$$
F_{LRHS} = H_{lrhs}(I_{LRHS}), \tag{4}
$$

where $H(\cdot)$ indicates the convolution operation.

HRLS shallow feature $F_{HRLS}$. We first extract the shallow feature from $I_{HRLS}$ with a convolutional layer:

$$
F_{HRLS}^{0} = H_{hrls}(I_{HRLS}). \tag{5}
$$

Inspired by ESPCN [46], we design a deshuffle layer to downscale the features. As shown in Figure 5, we downscale the feature map with a stride of $\alpha$.
In our network, we set $\alpha = 2$. To downscale the feature map further, we stack deshuffle layers together. Supposing our network has $D$ deshuffle layers, we obtain the output $F_{HRLS}$ as

$$
F_{HRLS} = DSF^{D}(DSF^{D-1}(\cdots DSF^{1}(F_{HRLS}^{0})\cdots)), \tag{6}
$$

where $DSF(\cdot)$ indicates the operation of the deshuffle layer. By downscaling the auxiliary features, our network can operate at the size of the LRHS image, which significantly reduces the computational complexity of the overall network.

We concatenate $F_{LRHS}$ from the LRHS image and $F_{HRLS}$ from the HRLS image into a combined feature map $F_{0}$.

Residual densely connected block. We employ densely connected networks and residual groups to build the backbone of our neural network, as they have been shown to be effective for image super resolution [54, 55]. In our network, we use 4 convolutional layers in each residual densely connected block (RDB). By stacking $B = 5$ RDBs, we build a residual densely connected group (RDG) as follows:

$$
F_{g} = RDB_{B}(RDB_{B-1}(\cdots RDB_{1}(F_{g-1})\cdots)) + F_{g-1}. \tag{7}
$$

We predict the dense feature $F_{DF}$ with $G = 3$ RDGs as follows:

$$
F_{DF} = RDG_{G}(RDG_{G-1}(\cdots RDG_{1}(F_{0})\cdots)) + F_{0}. \tag{8}
$$

Upscale. In our network, we adopt the shuffle layer from ESPCN [46] to upscale the features and estimate the high resolution prediction $I_{SR}$:

$$
I_{SR} = H_{Rec}(UP(F_{DF})), \tag{9}
$$

where $UP(\cdot)$ indicates the upscaling operation [46].

Loss function. The BCR dataset is in the scene linear color space. As shown in Figure 3, the pixel value distribution of the BCR dataset has a long tail.
The $\ell_{1}$ loss cannot handle this well because it is biased towards the extremely large pixel values. To handle this problem, we adopt the following robust loss:

$$
\ell_{r} = \frac{1}{N}\sum_{p \in I_{HR}}\frac{\left| I_{HR}^{p} - I_{SR}^{p}\right| }{\beta + \left| I_{HR}^{p} - I_{SR}^{p}\right| }, \tag{10}
$$

where $\beta$ is the robust factor. For small differences, $\ell_{r}$ behaves very similarly to $\ell_{1}$. For extremely large differences, $\ell_{r}$ approaches but always stays below 1. This prevents our network from being biased towards rare but extremely large pixel values. We set $\beta = 0.1$ in our experiments.

Implementation details. We set the kernel size of all convolutional layers to $3 \times 3$, except for the fusion convolutional layer, whose kernel size is $1 \times 1$. Every convolutional layer is followed by a ReLU layer, except for the last convolutional layer. The shallow features, fusion features, and dense features have 64 channels. During each training iteration, we randomly select the spp of $I_{HRLS}$ from the set $\{1\text{-}8, 12, 16, 32\}$ and the spp of $I_{LRHS}$ from the set $\{2\text{-}8, 12, 16, 32, 64, 128, 250, 1000, 4000\}$, while making sure that the spp of $I_{HRLS}$ is smaller than that of $I_{LRHS}$.

We use PyTorch to implement our network. We use a mini-batch size of 16 and train the network for 500 epochs, which takes about 1 week on one Nvidia Titan Xp. We use the SGD optimizer with a learning rate of $10^{-4}$. We also perform data augmentation on-the-fly by randomly cropping patches. To save data loading time, we pre-crop the training HR images into $300 \times 300$ large patches. During training, we further crop smaller patches from those large patches. The final HR patch size is set to 96 for $\times 2$, 192 for $\times 4$, and 256 for $\times 8$.
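The robust loss of Eq. (10) is straightforward to implement; a hedged NumPy sketch (ours, not the authors' code):

```python
import numpy as np

def robust_loss(pred: np.ndarray, gt: np.ndarray, beta: float = 0.1) -> float:
    """Per-pixel |d| / (beta + |d|), averaged over all pixels (Eq. 10).

    Each term is bounded above by 1, so a handful of extremely bright
    HDR pixels cannot dominate the average the way they do under L1.
    """
    d = np.abs(gt - pred)
    return float(np.mean(d / (beta + d)))
```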
We select the model that performs best on the validation set.

§ 5 EXPERIMENTS

We evaluate our method by comparing it with representative state-of-the-art denoising methods for Monte Carlo rendering and with image super resolution algorithms. We also conduct ablation studies to further examine our method. We use two metrics to evaluate our results. First, we adopt RelMSE (Relative Mean Square Error) to report results in the scene linear color space, which is defined as

$$
\operatorname{RelMSE} = \lambda_{1} * \frac{(I_{SR} - I_{HR})^{2}}{I_{HR}^{2} + \lambda_{2}}, \tag{11}
$$

where $\lambda_{1} = 0.5$ and $\lambda_{2} = 0.01$ when experimenting on our BCR dataset, following KPCN [2]. For the Gharbi dataset, we use the evaluation code from its authors [14], where $\lambda_{1} = 1$ and $\lambda_{2} = 10^{-4}$.

We also use PSNR to evaluate the results in the sRGB space. For our BCR dataset, we convert images to sRGB to calculate PSNR using

| Method | 2 spp PSNR | 2 spp RelMSE | 4 spp PSNR | 4 spp RelMSE | 8 spp PSNR | 8 spp RelMSE |
| --- | --- | --- | --- | --- | --- | --- |
| Input | 18.12 | 0.2953 | 21.51 | 0.1400 | 24.75 | 0.0646 |
| KPCN [2] | 25.87 | 0.0390 | 27.31 | 0.0299 | 28.11 | 0.0276 |
| KPCN-ft [2] | 31.03 | 0.0078 | 33.69 | 0.0043 | 35.83 | 0.0026 |
| Bitterli [3] | 26.67 | 0.0293 | 27.22 | 0.0252 | 27.45 | 0.0226 |
| Gharbi [14] | 30.73 | 0.0068 | 31.61 | 0.0057 | 32.29 | 0.0050 |
| Ours $\times 2$ (4-1) / (8-2) / (16-4) | 33.27 | 0.0044 | 35.15 | 0.0027 | 36.74 | 0.0019 |
| Ours $\times 4$ (16-1) / (32-2) / (64-4) | 33.94 | 0.0039 | 35.21 | 0.0028 | 36.31 | 0.0022 |
| Ours $\times 8$ (64-1) / (128-2) / (250-4) | 31.37 | 0.0075 | 32.35 | 0.0057 | 33.14 | 0.0049 |

Table 2: Comparison on our BCR dataset. Ours $\times 2$ indicates that our method performs $\times 2$ super resolution, and (4-1) indicates that our method takes a 4 spp LRHS and a 1 spp HRLS as input, which is effectively 2 spp on average over all pixels.
| Method | 4 spp PSNR | 4 spp RelMSE | 8 spp PSNR | 8 spp RelMSE | 16 spp PSNR | 16 spp RelMSE |
| --- | --- | --- | --- | --- | --- | --- |
| Input | 19.58 | 17.5358 | 21.91 | 7.5682 | 24.17 | 11.2189 |
| Sen [45] | 28.23 | 1.0484 | 28.00 | 0.5744 | 27.64 | 0.3396 |
| Rousselle [42] | 30.01 | 1.9407 | 32.32 | 1.9660 | 34.36 | 1.9446 |
| Kalantari [24] | 31.33 | 1.5573 | 33.00 | 1.6635 | 34.43 | 1.8021 |
| Bitterli [3] | 28.98 | 1.1024 | 30.92 | 0.9297 | 32.40 | 0.9640 |
| KPCN [2] | 29.75 | 1.0616 | 30.56 | 7.0774 | 31.00 | 20.2309 |
| KPCN-ft [2] | 29.86 | 0.5004 | 31.66 | 0.8616 | 33.39 | 0.2981 |
| Gharbi [14] | 33.11 | 0.0486 | 34.45 | 0.0385 | 35.36 | 0.0318 |
| Ours $\times 2$ (8-2) / (16-4) / (32-8) | 34.02 | 1.5025 | 35.30 | 1.4902 | 36.43 | 1.4748 |
| Ours $\times 4$ (32-2) / (64-4) / (128-8) | 33.94 | 5.5586 | 35.22 | 5.6781 | 35.97 | 5.7436 |
| Ours $\times 8$ (128-2) / (16-8) / (32-16) | 31.56 | 3.7228 | 32.60 | 4.2300 | 33.22 | 4.5045 |

Table 3: Comparison on the Gharbi dataset [14].

Equation 3. For the Gharbi dataset, we convert images to the sRGB space using code provided by its authors [14] as follows:

$$
s = \min(1, \max(0, l)), \tag{12}
$$

where $l$ indicates an image in the scene linear space and $s$ the image in the sRGB space.

§ 5.1 COMPARISON WITH DENOISING METHODS

We compare our method to both state-of-the-art traditional denoising methods, including Sen et al. [45], Rousselle et al. [42], Kalantari et al. [23], and Bitterli et al. [3], and recent representative deep learning based methods, including KPCN [2] and Gharbi et al. [14]. Unlike the other methods, our method takes both an LRHS rendering and an HRLS rendering as input. Therefore, we compute the average spp for our input as $spp_{avg} = spp_{LRHS}/s^{2} + spp_{HRLS}$, where $s$ indicates the super resolution scale.
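This average-spp accounting follows from amortizing the low-resolution samples over the output pixels; a tiny sketch (the function name is ours):

```python
def spp_avg(spp_lrhs: float, spp_hrls: float, s: int) -> float:
    """Average samples per output pixel for the hybrid input.

    The LRHS image has 1/s^2 as many pixels as the output, so its samples
    are amortized over s^2 output pixels; HRLS samples count directly.
    """
    return spp_lrhs / s ** 2 + spp_hrls
```

For example, the (4-1) configuration at $\times 2$ gives $4/4 + 1 = 2$ spp on average.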
For instance, in Table 2, "Ours $\times 2$" indicates that our method performs $\times 2$ super resolution, and (4-1) indicates that it takes a 4 spp LRHS and a 1 spp HRLS as input, which is effectively 2 spp on average. We conducted the comparisons on both the Gharbi dataset and our BCR dataset.

Table 2 compares our method to Bitterli et al. [3], KPCN [2], and Gharbi et al. [14]. We used the code and models shared by their authors in these experiments. For KPCN [2], we provide another version of the results produced by their neural network but fine-tuned on our BCR dataset. This experiment shows that our method, especially the $\times 2$ and $\times 4$ variants, outperforms the state-of-the-art methods by a large margin. Specifically, our $\times 4$ method wins by 2.91 dB in PSNR and 0.0039 in RelMSE when the spp is 2. When the spp is relatively high, our $\times 2$ method wins by 0.91 dB in PSNR and 0.0007 in RelMSE. Figure 7 shows several visual examples on the BCR dataset. Our results contain fewer artifacts than those of the other methods.

Figure 6: Error map visualization.

| spp | 4 | 8 | 16 | 32 | 64 | 128 |
| --- | --- | --- | --- | --- | --- | --- |
| Rousselle [42] | X | X | X | X | X | 13.3 |
| Kalantari [24] | X | X | X | X | X | 10.4 |
| Bitterli [3] | X | X | X | X | X | 21.9 |
| KPCN [2] | X | X | X | X | X | 14.6 |
| Sen [45] | 281.2 | 638.1 | 1603.1 | 4847.8 | - | - |
| Gharbi [14] | 6.0 | 10.1 | 18.9 | 35.9 | 67.0 | 156.5 |
| Ours $\times 2$ | X | X | X | X | X | 0.362 |
| Ours $\times 4$ | X | X | X | X | X | 0.118 |
| Ours $\times 8$ | X | X | X | X | X | 0.052 |

Table 4: Comparison of runtime cost (seconds) to denoise a $1024 \times 1024$ image. The data is from Gharbi [14]. If the runtime is constant, we report it in the last column. Our $\times 2$, $\times 4$, and $\times 8$ methods are at least $17\times$, $51\times$, and $115\times$ faster than the state-of-the-art methods, respectively.
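For reference, the evaluation metrics used throughout these tables (RelMSE of Eq. (11) with the BCR constants, and PSNR computed after the sRGB transfer of Eq. (3)) can be sketched as follows; this is our own illustrative code, not the official evaluation script:

```python
import numpy as np

def rel_mse(sr, hr, lam1=0.5, lam2=0.01):
    """Relative MSE of Eq. (11), averaged over pixels (BCR constants)."""
    return float(np.mean(lam1 * (sr - hr) ** 2 / (hr ** 2 + lam2)))

def linear_to_srgb(l):
    """Scene-linear to sRGB transfer of Eq. (3), applied element-wise."""
    l = np.clip(l, 0.0, 1.0)
    return np.where(l <= 0.0031308, 12.92 * l, 1.055 * l ** (1.0 / 2.4) - 0.055)

def psnr(a, b):
    """PSNR between two sRGB images with values in [0, 1]."""
    mse = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(1.0 / mse))
```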
We were not able to compare to additional methods on our BCR dataset, as these methods work with other rendering engines or use very different input formats. We compare to these methods on the Gharbi dataset, as reported in Table 3. We obtained the results for the compared methods from Gharbi et al. [14]. For our results, we directly used our neural network trained on our BCR dataset without fine-tuning on the Gharbi training dataset.${}^{4}$ As shown in Table 3, our method outperforms all the other methods in terms of PSNR.

However, the RelMSE of our results is higher than that of some existing methods, such as KPCN [2] and Gharbi [14]. We looked into the discrepancy between the results measured using PSNR and RelMSE and found that the RelMSE metric is heavily affected by a small number of pixels with abnormally large errors in our results. Figure 6 shows the RelMSE error map of one of our results with a much larger error than Gharbi: the errors concentrate in the region around the bright light, with 16 pixels having errors larger than $10^{6}$, which contribute most of the error of the whole image. After excluding these 16 pixels, our error is still larger than Gharbi's, but the difference is much smaller. We would like to point out that our method was trained on our BCR dataset only and was not fine-tuned on the Gharbi dataset, as its training set is not available. Moreover, BCR and the Gharbi dataset were rendered using different engines and thus contain different intermediate layers. To test on the Gharbi examples, we had to set the variance layer to a constant value, which compromises our results.

Figure 8 shows visual comparisons between our method and several existing methods. Although our RelMSE is higher than Gharbi's [14], our results look more plausible, which is consistent with our higher PSNR values measured in the sRGB space.
In the first example, the seat in our result contains far fewer artifacts. In the second example, the highlight in our result is more accurate than in the others. This shows that our model and the BCR dataset have a strong generalization capability.

${}^{4}$ We removed one image from the Gharbi testing set as its source model is also included in our BCR training set.

Figure 7: Visual comparison on the BCR dataset.

Figure 8: Visual comparison on the Gharbi dataset [14].

| Methods | $\times 2$ PSNR | $\times 2$ RelMSE | $\times 4$ PSNR | $\times 4$ RelMSE | $\times 8$ PSNR | $\times 8$ RelMSE |
| --- | --- | --- | --- | --- | --- | --- |
| Bicubic | 30.57 | 0.0141 | 25.39 | 0.0858 | 22.36 | 0.2473 |
| EDSR [35] | 32.01 | 0.0079 | 30.70 | 0.0119 | 27.97 | 0.0241 |
| RCAN [54] | 32.03 | 0.0084 | 30.73 | 0.0117 | 27.92 | 0.0253 |
| Ours | 38.40 | 0.0015 | 34.27 | 0.0039 | 31.08 | 0.0079 |

Table 5: Comparison with super resolution algorithms on the BCR dataset.
| LRHS spp \ HRLS spp | 1 spp PSNR | 1 spp RelMSE | 2 spp PSNR | 2 spp RelMSE | 4 spp PSNR | 4 spp RelMSE |
| --- | --- | --- | --- | --- | --- | --- |
| 2 spp | 32.14 | 0.0056 | - | - | - | - |
| 4 spp | 32.94 | 0.0048 | 33.76 | 0.0038 | - | - |
| 8 spp | 33.52 | 0.0042 | 34.41 | 0.0033 | 35.20 | 0.0027 |
| 16 spp | 33.94 | 0.0039 | 34.88 | 0.0030 | 35.71 | 0.0025 |
| 32 spp | 34.22 | 0.0037 | 35.21 | 0.0028 | 36.06 | 0.0023 |
| 64 spp | 34.42 | 0.0035 | 35.44 | 0.0027 | 36.31 | 0.0022 |
| 128 spp | 34.56 | 0.0035 | 35.60 | 0.0026 | 36.49 | 0.0021 |

Table 6: The effect of spp values on the final rendering results.

Speed. We report the speeds of the above methods in Table 4. We use the same setting as Gharbi [14] and obtain the timing data for the compared methods from them as well. For our method, we report the aggregated spp by combining the samples used to render both the HRLS and the LRHS. Since all the methods use the same spp, we only include the time needed for denoising. We report the time to process one $1024 \times 1024$ image on one Nvidia Titan Xp GPU. Our $\times 2$, $\times 4$, and $\times 8$ methods are at least $17\times$, $51\times$, and $115\times$ faster than the state-of-the-art method Gharbi [14].

§ 5.2 COMPARISONS WITH SUPER RESOLUTION METHODS

We also compare our method with several baseline methods that use super resolution to upsample the low-resolution-high-spp renderings to the target size. In this experiment, we used the trained models shared by the authors of these super resolution methods [35, 54] and fine-tuned them on our BCR dataset. As reported in Table 5, our method generates significantly better results than these super resolution methods. While this comparison is unfair to the baseline methods, it demonstrates the benefit of taking an extra high-resolution-low-spp rendering as input. As shown in Figure 11, our results contain fine details that are missing from the super resolution results.
§ 5.3 ABLATION STUDY

We now examine several key components of our method.

Input layers of $I_{LRHS}$ and $I_{HRLS}$. We examine how the rendering layers affect the final results. In this experiment, we use 1 spp for $I_{HRLS}$ and 4000 spp for $I_{LRHS}$. The upsampling scale is set to $\times 4$. Our neural network contains two input branches, one for $I_{LRHS}$ and the other for $I_{HRLS}$. In this experiment, we fix the input layers of one branch to RGB while changing the input layers of the other. For the model labeled "None", we remove that branch entirely. As shown in Figure 9, compared with no input, $I_{HRLS}$ greatly improves the results. Among the various input layers, RGB improves the results by a large margin, and the result can be further improved if $I_{HRLS}$ takes all rendering layers. We believe that these improvements come from the high frequency information in $I_{HRLS}$. For $I_{LRHS}$, while all the input layers still help, RGB alone achieves the best result. We conjecture that since $I_{LRHS}$ is rendered at a high spp, its RGB layer is already of very high quality and the other intermediate layers do not contribute further. On the other hand, the intermediate layers of $I_{HRLS}$ provide useful information for denoising, which is consistent with the findings of previous denoising methods [2, 14].

Figure 9: The effect of input rendering layers.

Figure 10: Comparison between $\ell_{r}$ and $\ell_{1}$.

Robust loss $\ell_{r}$.
We examine the effect of the parameter $\beta$ in our robust loss, and compare it to the standard ${\ell }_{1}$ loss. In this experiment, we use 4000 spp for ${I}_{LRHS}$ and 1 spp for ${I}_{HRLS}$ . The upsampling scale is set to $\times 4$ . The input channels of ${I}_{LRHS}$ and ${I}_{HRLS}$ are set to RGB. Figure 10 shows that the robust loss ${\ell }_{r}$ with $\beta = {0.1}$ outperforms ${\ell }_{1}$ by a large margin, as it avoids biasing the training towards the very small number of pixels with extremely large values. We also find that too large or too small a $\beta$ harms the results: a very large $\beta$ reduces the robust loss to the ${\ell }_{1}$ loss, while a very small $\beta$ makes the loss always close to 1 regardless of the error between the output and the ground truth. + +SPP of ${I}_{LRHS}$ and ${I}_{HRLS}$ . We examine how our method works with different spp values used to render ${I}_{LRHS}$ and ${I}_{HRLS}$ . In this experiment, we set the upsampling scale to $\times 4$ . Table 6 shows that rendering at higher spp values consistently leads to better final results. + +## 6 CONCLUSION + +This paper presented a hybrid rendering method to speed up Monte Carlo rendering algorithms. We designed a two-encoder-one-decoder network for this task. Our network takes a low-resolution image with a high spp and a high-resolution image with a low spp as inputs, and estimates a high-resolution, high-quality image. We also built a large-scale ray-tracing dataset, the Blender Cycles Ray-tracing (BCR) dataset. Our experiments showed that our method is able to generate high-quality, high-resolution images quickly, and that both the HRLS input and the robust loss contribute to the quality of the results.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/n2FDcSYcGkR/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/n2FDcSYcGkR/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..ed89cf44a6d2b576db2164c1cc609648362843ef --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/n2FDcSYcGkR/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,379 @@ +# Perspective Charts + +Category: Research + +## Abstract + +We introduce three novel data visualizations, called perspective charts, based on the concept of size constancy in linear perspective projection. Bar charts are a popular and commonly used tool for the interpretation of datasets; however, representing datasets with multi-scale variation in a bar chart is challenging due to limitations in viewing space. Each of our designs focuses on the static representation of datasets with large ranges with respect to important variations in the data. Through a user study, we measure the effectiveness of our designs for representing these datasets in comparison to traditional methods, such as a standard bar chart or a broken-axis bar chart, and state-of-the-art methods, such as a scale-stack bar chart. The evaluation reveals that our designs allow pieces of data to be visually compared at a level of accuracy similar to traditional visualizations. Our designs demonstrate advantages when compared to state-of-the-art visualizations designed to represent datasets with large outliers.
+ +Index Terms: Human-centered computing-Visualization-Visualization techniques-Information Visualization; + +## 1 INTRODUCTION + +Today we are faced with large amounts of data with varying complexity [24]. This makes the visualization of large datasets challenging, especially when viewing space is limited. Different tools and charts are suited to different types of data [16]. Bar charts are one of the most commonly used data visualizations as they are simple and easy to interpret. However, datasets with a large range, with important variation at multiple scales, present unique visualization challenges. Examples can commonly be found in population data, as illustrated in Figure 1, which shows population data for several Canadian cities. A vertical limitation of the viewing space may require that a large amount of compression be applied to the data, which makes differences between values less readable. For example, in Figure 1, the largest value is the population of Toronto; the scale of the chart needs to be set to accommodate such large values. Showing Toronto's population in the same chart as smaller cities such as Guelph and Kingston makes it difficult to measure the population of the smaller cities. When the scaling factor increases, or when data becomes more compressed, it becomes more difficult to make comparisons between pieces of data with close values. + +The limitation that we focus on is the readability of charts with multi-scale variation in the dataset. A linear mapping between the range of the data and the height of the viewing space may result in undesirable compression of the charts. One potential solution is to use a non-linear mapping, such as a logarithmic function (see Figure 1, right). However, this type of mapping is difficult to read and understand in comparison to simple linear mappings [9]. + +How do we find a more natural solution to mapping datasets with large outliers onto a small viewing space? 
We propose a new technique for visualizing data with important variation at multiple scales using perspective projection. Humans naturally perceive perspective, and are able to estimate the size of distant objects through a property known as size constancy [2]. Using simple linear perspective, geometric proportions can be used to measure the size and relative differences of objects [4]. + +Our first design, which we call the slanted perspective chart, shows a bar chart that is slanted backwards from the viewing plane, such that it is viewed in perspective (see Figure 1, bottom). As the lower part of the graph appears closer to the reader, small values in the dataset appear larger than in a traditional bar chart. + +![01963ea8-5293-79ae-9332-d7d43c40feee_0_932_353_707_693_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_0_932_353_707_693_0.jpg) + +Figure 1: Canadian cities with a population of more than 150,000, in a traditional bar chart (left), a bar chart with a logarithmic scale (right), and a slanted perspective chart (bottom). + +The main problem with the solution of slanting a traditional bar chart is that larger values in the dataset become compressed due to the perspective projection. This may make large values more difficult to read and compare. + +Our next chart, the stepped perspective chart, is designed to address the issue of scaling large values in our slanted perspective chart, while also improving the readability of small values. In bar charts, space in some parts of the chart is often wasted due to large differences in values or outliers in the dataset. We can reduce the amount of wasted space by visualizing this area in an extreme slant. This puts only the less important range of the data at an extreme angle; each bar's value is still measurable in an area that is perpendicular to the view (see Figure 2, left).
+ +This design is intended to resemble a staircase; we can insert multiple bends in the axis in a single chart to compress multiple areas of the chart and eliminate multiple areas of unused space (see Figure 2, right). Since the tops of the bars are not slanted or foreshortened in the stepped perspective chart, the values are emphasized more strongly than in the slanted perspective chart. + +The stepped perspective chart is conceptually similar to a traditional broken-axis bar chart (see Figure 3), which also addresses the issue of wasted space in areas where there are large gaps in the data. However, since a broken-axis bar chart essentially cuts out a portion of the graph, the ability to visually estimate and compare data is lost, unlike in our stepped perspective chart. + +Both our slanted perspective chart and stepped perspective chart contain some wasted space around the upper corners of the viewing space. To eliminate areas of unused space wherever possible, we introduce a third type of perspective chart, called the circular perspective chart (see Figure 4). Our design for this chart is inspired by the impression of looking up at tall buildings and skyscrapers from a low vantage point. The horizontal axis of the chart is mapped to a circle, with the vertical axis extending away from the reader's view. This chart occupies a consistent viewing space regardless of the scale of the data or the number of entries in the dataset. + +![01963ea8-5293-79ae-9332-d7d43c40feee_1_285_151_1218_548_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_1_285_151_1218_548_0.jpg) + +Figure 2: Left: A stepped perspective chart. Right: A stepped perspective chart with multiple bends in the axis. + +![01963ea8-5293-79ae-9332-d7d43c40feee_1_225_797_578_532_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_1_225_797_578_532_0.jpg) + +Figure 3: A broken-axis bar chart.
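Choosing where to bend the axis amounts to finding the large gaps in the data. The paper does not specify a clustering rule for placing the bends, so the sketch below is only one plausible heuristic (splitting wherever a step between consecutive sorted values greatly exceeds the median step); the function name and the `gap_factor` threshold are our own.

```python
def cluster_for_bends(values, gap_factor=3.0):
    """Split sorted values into clusters wherever a gap between consecutive
    values exceeds gap_factor times the median step -- a heuristic for
    deciding where an axis bend (transition region) could be placed."""
    vs = sorted(values)
    steps = [b - a for a, b in zip(vs, vs[1:])]
    if not steps:
        return [vs]
    median_step = sorted(steps)[len(steps) // 2]
    clusters, current = [], [vs[0]]
    for prev, v in zip(vs, vs[1:]):
        if v - prev > gap_factor * max(median_step, 1e-12):
            clusters.append(current)
            current = []
        current.append(v)
    clusters.append(current)
    return clusters
```

For example, a dataset with three small cities and three large ones splits into two clusters, suggesting a single bend between them.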
+ +The readability challenges that we discuss for datasets with multi-scale variation can also be addressed using dynamic visualization methods, such as focus-plus-context; however, we focus on static methods of addressing these issues. We introduce a new class of charts comparable to traditional static bar charts, and note that commonly used interactive techniques for bar charts can also be used with our perspective charts. + +To evaluate our visualizations, we conducted a user study with twenty-four participants. The study quantitatively measured the speed and accuracy with which users could read data from our charts in comparison to traditional methods, such as a standard bar chart or a broken-axis bar chart, and state-of-the-art methods, such as a scale-stack bar chart. We also performed a qualitative evaluation of our three designs. Participants generally responded favorably to our visualizations, and were able to read data from them as accurately as with the traditional methods in fifteen out of seventeen task types, and performed more strongly than a recent method, scale-stack bar charts [9], in three out of four tasks. + +![01963ea8-5293-79ae-9332-d7d43c40feee_1_952_794_640_603_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_1_952_794_640_603_0.jpg) + +Figure 4: Population of cities in the state of Wisconsin in the United States, in a circular perspective chart. Each line marks an increment of 50,000. + +## 2 BACKGROUND AND RELATED WORK + +Given the increasing size and complexity of available datasets, finding clear and readable methods of visualization is becoming more challenging [12]. Traditional methods such as bar charts are not always a practical choice when visualizing datasets with a large range with respect to important variations in the data [11]. In this section, we first provide a short review of research on visualizing complex datasets using variations of bar charts.
Since we use perspective projection in our charts, we provide a short review of literature on the ways that perspective affects human perception. + +### 2.1 Visualizing Data with Multi-Scale Variation + +When a range of data is mapped to a bar chart, a scaling factor is applied such that all of the data can be represented in the viewing space. In datasets with large outliers, it may not be possible to fit the chart into a limited viewing space without applying scaling that decreases legibility. To address this problem, alternatives to bar charts are used in some applications. Karduni et al. wrap large bars over a certain threshold back over the y-axis in their Du Bois wrapped bar chart [10]; a similar technique is described by Reijner's horizon graphs [19], evaluated by Heer et al. [8] as an effective technique. Hlawatsch et al. compare their scale-stack bar charts with logarithmic and broken bar charts for the visualization of datasets with a large scale [9]. An example of a scale-stack bar chart is shown in Figure 6. + +![01963ea8-5293-79ae-9332-d7d43c40feee_2_243_159_536_518_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_2_243_159_536_518_0.jpg) + +Figure 5: A radial bar chart. + +One traditional alternative to bar charts for this use case is the broken-axis bar chart. Broken-axis bar charts eliminate areas of unused space between values in a bar chart, visualized as a discrete jump in values in the y-axis. However, truncating the y-axis of a chart in this manner has been shown to negatively affect the perception of scale in datasets [3]. We compare broken-axis bar charts to our stepped perspective chart in our evaluation. + +Charts scaled with a logarithmic function are also sometimes used to represent datasets with a large range. However, this type of scale is not typically used in bar charts, as it may be difficult to interpret given that it is non-linear [9]. 
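The compression problem with a linear mapping, and the readability trade-off of a logarithmic one, can be made concrete with a small sketch. The function names and the example populations below are illustrative, not values from the paper's figures.

```python
import math

def linear_height(value, v_max, h_max):
    """Linear mapping: bar height is proportional to the data value."""
    return h_max * value / v_max

def log_height(value, v_max, h_max):
    """Logarithmic mapping: compresses large values so small ones stay visible."""
    return h_max * math.log10(value) / math.log10(v_max)

# Illustrative: a small city of 130,000 against a maximum of 2,700,000,
# drawn in a viewing space 300 units tall.
small_linear = linear_height(130_000, 2_700_000, 300)  # about 14 units
small_log = log_height(130_000, 2_700_000, 300)        # about 239 units
```

Under the linear mapping the small bar is nearly unreadable; the log mapping rescues it at the cost of a non-linear axis, which is exactly the interpretation difficulty noted above [9].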
+ +### 2.2 Variations of Bar Charts + +There exist several proposed solutions to common problems with bar charts. In cases where a guaranteed $1:1$ aspect ratio may be desirable for a visualization, a circular chart such as a radial bar chart may be suitable (see Figure 5). A radial bar chart occupies a fixed viewing space regardless of the scale of its data. Luboschik's work on particle-based map label placement [14] highlights the use of circular charts in geospatial data visualization, where point-based icons are useful. Despite their popularity, circular chart types are generally discouraged by visualization experts, as they tend to be more difficult to read than a traditional bar chart [7]. + +Skau et al. evaluate the impact of visual embellishments in bar charts [21], taking into account human perception and aesthetic factors in their analysis. The results of their evaluation show that simple embellishments like rounded or triangular bars have strong effects on human perception, and in some tasks will negatively affect performance. Their evaluation found that humans rely on strong lines at the ends of the bars to accurately estimate values. In our stepped perspective charts, the tops of the bars remain visible and perpendicular to the view in each cluster of data. + +![01963ea8-5293-79ae-9332-d7d43c40feee_2_955_149_653_593_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_2_955_149_653_593_0.jpg) + +Figure 6: A scale-stack bar chart. The chart represents one dataset at three different scales stacked on top of one another. + +### 2.3 Human Perception + +Perspective, in combination with lighting, distance and angle, contributes to human perception of information [6]. The visual shape and size of an object change as its distance and orientation relative to the viewer change [20]. However, the concept of size constancy explains that the perception of an object's size does not change with the object's distance from the viewer [18].
This is true even for two-dimensional representations of three-dimensional scenes (see Figure 7). This is due to humans' natural ability to account for perspective and the reduction of the projected size of an object when estimating its true size [23]. Size constancy is one of several natural constancies in human perception of the distance and scale of objects. We use this feature of perception in the design of our perspective charts. + +Mackinlay et al. [15] use perspective projection in their technique called the Perspective Wall. This interactive technique addresses data with "wide aspect ratios" by placing the area of focus on a flat plane, with surrounding contextual data placed on planes slanted away from the viewer. Other aspects of human perception are used in the design of various hierarchical data visualizations, such as those described by the Gestalt psychology principles [13] of closure [17] and continuity [5]. + +## 3 METHODOLOGY + +We propose the use of perspective as a mapping of bar charts in three different designs: the slanted, stepped, and circular perspective charts. In this section, we present design rationale and methods for creating our three different perspective charts. + +### 3.1 Slanted Perspective Charts + +Slanted perspective charts, as shown in Figure 1, are similar to traditional bar charts, but have the vertical axis of the chart slanting away from the viewer. This design is inspired by drawings and images that portray one-point perspective, i.e., images with a single vanishing point, as shown in the photograph in Figure 7. We use a simple 3D environment with a predefined camera setup, so that no user input for 3D interaction is required. Slanting the chart brings smaller values closer to the viewer, while moving larger values away.
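Size constancy rests on the pinhole projection model, under which an object's image-space height shrinks linearly with its distance from the viewer. A minimal sketch (the focal length and distances are illustrative, not parameters of the paper's camera setup):

```python
def projected_height(true_height, distance, focal_length=1.0):
    """Pinhole projection: image-space height of an object of true_height
    seen at the given distance from the viewer."""
    return focal_length * true_height / distance

# An object twice as far away projects to half the height -- the relation
# a reader mentally inverts when judging bar lengths in a slanted chart.
near = projected_height(2.0, 4.0)
far = projected_height(2.0, 8.0)
```

This is the effect visible in Figure 7: the farther statue is half the image-space height of the nearer one, yet the two are perceived as equally tall.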
+ +The slant in the vertical axis of the chart can be achieved either by viewing the chart from a lower angle, or by maintaining the same viewpoint and instead slanting the chart plane backwards from the viewer in three-dimensional space. We choose the latter option in order to maintain a consistent viewing space. Slanting the chart moves large values in the chart away from the viewer, and decreases the space between scale lines. + +![01963ea8-5293-79ae-9332-d7d43c40feee_3_175_147_670_583_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_3_175_147_670_583_0.jpg) + +Figure 7: An example of single-point perspective. Due to size constancy, we perceive the height of the statues in the red and blue boxes to be equal. In image space, the statue in the red box is half the height of the statue in the blue box. + +We restrict the foreshortening ratio in order to limit the compression of large values as they move away from the viewer. The foreshortening ratio measures how objects viewed at an angle appear to be shorter than their true measurement. We slant the y-axis of the chart at a fixed angle $\theta$ , which controls the foreshortening ratio ${f}_{r}$ of the slanted line $L$ compared to the viewing plane $V$ : + +$$ +{f}_{r} = \frac{L}{V} = \sec \left( \theta \right) . +$$ + +For example, when $\theta = {60}^{ \circ }$ , the foreshortening ratio ${f}_{r}$ is 2. In general, $1 \leq {f}_{r} < \infty$ where $0 \leq \theta < \frac{\pi }{2}$ . + +We avoid slanting the chart at extreme angles in order to control the foreshortening ratio ${f}_{r}$ and maintain readability of large values. Figure 9 demonstrates how a change in $\theta$ impacts foreshortening. + +### 3.2 Stepped Perspective Charts + +Our stepped perspective chart, as seen in Figure 2, resembles a staircase showing multiple "tiers" of data.
According to these tiers, we divide the range of the data into subranges ${R}_{1},{T}_{1},{R}_{2},{T}_{2},\ldots ,{R}_{n}$ (see Figure 10), where each ${R}_{i}$ is a cluster of the data and the ${T}_{i}$ are transitions. Each of these subranges represents a rectangular region of the chart. To create the stepped perspective chart we use a vertical view plane with a view angle ${\theta }_{v} = {0}^{ \circ }$ for the ${R}_{i}$ , and an extreme slant (${\theta }_{v} = {60}^{ \circ }$ for Figure 10) for the transitive regions ${T}_{i}$ . + +The stepped perspective chart is comparable to traditional broken-axis bar charts, which are also intended to address issues associated with large gaps between values in a dataset. However, broken-axis bar charts have been shown to negatively affect the perception of scale in datasets [3]. In a traditional broken-axis bar chart, without the use of labels, it is impossible to visually compare values on opposing sides of the break in the axis. In the stepped perspective chart, the area within the gap is still visible, as it runs at a different angle rather than being cut out of the chart entirely (see Figure 10). This way, visual estimation is still possible, and the reader is able to perceive the approximate size of the gap. + +The amount of the transitional region of the chart that is visible can be adjusted. Figure 8 shows varying heights from which the chart can be viewed. While the axes are always bent at an angle of ${90}^{ \circ }$ , the height of the camera affects the view angle. A height that is too low results in a high view angle, with scale lines positioned so closely together that they are no longer readable, while a low view angle lessens the impact of the separate regions of the chart. We choose a height that is just high enough to allow the viewer to distinguish between scale lines. The exact appropriate height depends on the resolution and size at which the chart is viewed.
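The relation ${f}_{r} = \sec \left( \theta \right)$ from Section 3.1 is easy to verify numerically; the sketch below simply evaluates it (the function name is our own):

```python
import math

def foreshortening_ratio(theta_deg):
    """f_r = sec(theta): length of the slanted axis relative to its
    projection onto the viewing plane, for a slant angle theta in
    degrees with 0 <= theta < 90."""
    return 1.0 / math.cos(math.radians(theta_deg))
```

As stated in the text, a slant of ${60}^{ \circ }$ doubles the axis length relative to its projection, and a ${0}^{ \circ }$ slant (no tilt) leaves it unchanged.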
+ +### 3.3 Circular Perspective Charts + +As seen in Figure 4, our circular perspective chart is inspired by the perception of tall buildings as viewed from a low vantage point, converging on a single vanishing point. In this chart, the bars are placed in a closed polygon and extend away from the viewer, converging at a vanishing point at the center of the polygon. + +We create our circular perspective chart by bending the horizontal axis to remove areas of unused space and accommodate a larger number of values in a limited viewing space. We can imagine that the entire bar chart is divided into multiple smaller sub-charts that are slanted individually (see Figure 11). The slanted sub-charts are then rotated to form a closed polygon. In the extreme case, each sub-chart is allocated to a single value (see Figure 4). When there is no preferred clustering to create sub-charts, we use this extreme case as the main design of our circular perspective chart. By wrapping the horizontal axis of the chart to form a closed polygon, the chart is contained within a consistent view. + +The vantage point of the viewer is a potential variable for interactive visualization with the circular perspective chart. Figure 12 shows an example of a low and a high viewing height for the chart shown in Figure 4. In the left chart, each scale line represents an increment of 50,000 for a total of twenty-four scale lines. The right chart's scale lines represent an increment of 150,000 for a total of eight scale lines. To make efficient use of space in the circular perspective chart, the scale should be chosen such that bars are compressed as little as possible while avoiding the issue of closely converging scale lines. The occurrence of this issue depends on the size and resolution at which the chart is displayed. + +The use of circular chart types is generally discouraged by visualization professionals [7].
However, they are frequently used in practice due to their known 1:1 aspect ratio, independent of the number of bars represented. This property is also present in our circular perspective chart. Luboschik demonstrates that circular chart types are useful in geospatial data visualization [14]. The use of bar charts in spatial data visualization may present limitations in the available viewing space, hence it is worth exploring a circular chart type that has a consistent aspect ratio. + +## 4 EVALUATION + +We conducted a within-subjects user study to evaluate the readability and novelty of perspective charts. The study quantitatively measured the accuracy and speed with which users answered a series of questions based on data shown in various charts, and collected participants' opinions of the three types of perspective charts in a qualitative study. + +In our user study, we evaluate the following hypotheses: + +H1 For data with important variation at multiple scales, small values are more easily readable in a slanted perspective chart than in a traditional bar chart. + +H2 The stepped perspective chart allows for faster and easier estimation and comparison of values than broken-axis bar charts and scale-stack bar charts, two other axis-breaking methods for visualizing datasets with large outliers. + +![01963ea8-5293-79ae-9332-d7d43c40feee_4_170_145_1458_456_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_4_170_145_1458_456_0.jpg) + +Figure 8: A stepped perspective chart viewed at three different heights, resulting in varying view angles and amounts of viewing space occupied by the transitional region of the chart. Left: ${\theta }_{v} = {80}^{ \circ }$ . Center: ${\theta }_{v} = {60}^{ \circ }$ . Right: ${\theta }_{v} = {40}^{ \circ }$ . + +![01963ea8-5293-79ae-9332-d7d43c40feee_4_215_903_589_254_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_4_215_903_589_254_0.jpg) + +Figure 9: A comparison of methods for slanting a traditional bar chart.
The left chart's vertical axis is slanted away from the viewer at an angle of ${30}^{ \circ }$ , stretching the axis in the process. The chart on the right is slanted at an angle of ${60}^{ \circ }$ . + +![01963ea8-5293-79ae-9332-d7d43c40feee_4_215_1641_585_229_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_4_215_1641_585_229_0.jpg) + +Figure 10: An example of a dataset with two clear clusters ${R}_{1}$ and ${R}_{2}$ , and a transitive range ${T}_{1}$ . + +H3 In a dataset with important variation at multiple scales, small values are more easily readable in a circular perspective chart than in a traditional bar chart, while the chart occupies a consistent viewing space. The ease-of-use of the chart should be comparable to existing fixed-viewing-space visualizations like the radial bar chart. + +We describe one hypothesis per perspective chart design. The hypotheses are designed based on the concept of size constancy. In each block of tasks, smaller values, or values that appear "closer" to the user, should be more easily readable due to the increased scale, without sacrificing readability of larger values, which appear farther away from the user and therefore visually smaller. + +H1 compares a traditional bar chart to the slanted perspective chart, a simple modification of a traditional visualization that introduces three-dimensional perspective. H2 compares the stepped perspective chart to other chart types that use axis-breaking methods for showing important variation at multiple scales. Since the stepped perspective chart uses a bend in the axis to show a large difference in scale between values, we choose to compare this design to existing chart designs that feature breaks in the y-axis. We evaluate whether the ability to visualize the area within the axis break allows values on either side to be more easily compared. H3 compares the circular perspective chart to a traditional bar chart and a radial bar chart.
This is to evaluate the circular perspective chart's performance compared to an existing circular chart type as well as a traditional chart type. + +### 4.1 Study Design + +We performed studies on an individual basis over the course of approximately 60 minutes per participant. Each participant was shown a series of various types of charts and answered a list of questions based on the data in these charts. Each participant performed tasks based on common visualization task taxonomies [1, 22]. Participants answered questions based on a traditional bar chart, a radial bar chart, a broken-axis bar chart, a scale-stack bar chart [9], as shown in Figure 6, and our slanted, stepped, and circular perspective charts. + +#### 4.1.1 Tasks + +We designed six types of tasks based on visualization taxonomies [1, 22]. The task types used are: + +- Retrieve Value - "What is the population of Franklin?" + +- Determine Magnitude Difference - "How much larger is Milwaukee than Oak Creek? (For example, 2x larger, 3.5x larger)" + +![01963ea8-5293-79ae-9332-d7d43c40feee_5_182_155_1429_271_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_5_182_155_1429_271_0.jpg) + +Figure 11: The circular perspective chart is created by grouping sections of a large chart into several smaller sub-charts. In this example we have five sub-charts. Each of the sub-charts is individually slanted, then rotated to form a closed polygon. + +![01963ea8-5293-79ae-9332-d7d43c40feee_5_151_552_717_378_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_5_151_552_717_378_0.jpg) + +Figure 12: A circular perspective chart showing population of cities in Alberta, with two different scaling factors. In the top chart, each line marks an increment of 50,000. In the bottom chart, each line marks an increment of 150,000. + +- Determine Range - "What is the range of the data? (Smallest population to largest population)" + +- Find Extremum - "Which city has the smallest population?"
+ +- Filter - "List all cities with a population of less than 100,000." + +- Sort - "Sort the cities by population from smallest to largest." + +#### 4.1.2 Methodology + +Each participant completed five task blocks (B1-B5), during which they completed a set of tasks using one chart type followed by a matching set of tasks using a second chart type: + +B1: traditional bar chart / slanted perspective chart + +B2: scale-stack bar chart / stepped perspective chart + +B3: broken-axis bar chart / stepped perspective chart + +B4: radial bar chart / circular perspective chart + +B5: traditional bar chart / circular perspective chart + +Within each block, each participant first performed a series of either four or five different tasks (see Section 4.1.1) using a chart of the first type, then completed a matching set of tasks using a chart of the second type that visualized a different dataset. We maintained the same block, task, and dataset order for all participants, but varied the order of the chart types within each block. For example, in B1 half of the participants completed their first five tasks using a traditional bar chart. The other half started by completing the same set of tasks using a slanted perspective chart that visualized the exact same data. + +We chose this blocking scheme for the evaluation in order to maintain a pairing between our designs and existing chart types, and tailor task types within the blocks based on the charts used. + +Each participant completed a total of forty-two tasks - five tasks (determine magnitude difference between a small and a large value, determine magnitude difference between two small values, filter, find extremum, and retrieve small value) for each chart type in B1, and four tasks for each chart type in B2-5.
B2-5 had three tasks of the same type (determine magnitude difference between a small and a large value, determine magnitude difference between two small values, and retrieve small value), and one additional task of a varying type. Participants completed a find extremum task for B2, a retrieve large value task for B3, a sorting task for B4 and a filtering task for B5. + +After completing the main set of tasks, participants answered a short post-study questionnaire evaluating each type of chart used in the study. This was followed by a short verbal interview where we further gathered their opinions on the charts used in the study. + +#### 4.1.3 Participants + +We recruited 24 participants (12 female) using posters distributed across our local campus as well as via word of mouth. We based this cohort size on those used in similar evaluations [8, 9]. Twenty-two participants were students at the time of the study; 16 participants studied in STEM fields, while the remaining participants worked or studied in arts, business or social sciences. + +A post-hoc power analysis was performed on the results of our evaluation for each task type in each block with our sample size of 24 participants, using G*Power 3.1. Among these analyses, the lowest reported power was 0.74. Thus for each task type in our evaluation we have at least a 74% probability of finding true significance, given our sample size of 24. + +#### 4.1.4 Datasets + +Since tasks were divided into five different blocks for each participant, we used two unique datasets in each block for a total of ten unique datasets. We chose real datasets that satisfied our use case of clusters of data across a large range. Six datasets showed population data across various regions, two represented pollution data, and two represented precipitation data. We used data that was likely to be unfamiliar to the participants to reduce potential bias resulting from preexisting knowledge about the data.
We did this by using data from regions that were not geographically close to the location where the study was performed, or by obscuring the names of the locations in the datasets ("City 1" instead of "Vancouver", etc.). + +#### 4.1.5 Test Environment + +Before beginning the study, participants were given an explanation of the consent process, monetary compensation, and risks associated with the study, as approved by the Conjoint Faculties Research Ethics Board of the University of Calgary. After indicating their consent, participants completed a short pre-questionnaire, then proceeded to the main portion of the study. Tasks were completed on paper in a prepared booklet provided to participants. + +![01963ea8-5293-79ae-9332-d7d43c40feee_6_169_155_681_457_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_6_169_155_681_457_0.jpg) + +Figure 13: Results showing the total time for participants to complete all tasks in a block of the evaluation. For this and subsequent charts, the bar in the grey box represents the median. The p-value and effect size (r) of each task type are shown in the table. Each box denotes the 95% confidence interval of the median. Each dot represents the task completion time of an individual participant. + +Each chart presented to participants included scale and axis labels in a consistent font style and size across charts. For the circular perspective chart, the chart's scale was labeled in the top-left corner of the visualization, as in Figure 4. Bars were unshaded to maintain simplicity in the chart designs. The appearance of charts used in the evaluation is comparable to the charts in Figures 1, 5 and 6. + +### 4.2 Results + +We compare task completion time and percentage of error between sets of tasks. The Shapiro-Wilk test indicates that our data does not follow a normal distribution, so we compare methods using a Mann-Whitney U test and examine the median error rate for each task.
For each test we report effect sizes (r) and p-values. All data is shared on the Open Science Framework (https://osf.io/w3fce/?view_only=23ff1dded@b74363a68ec86419c9c373). + +#### 4.2.1 Time + +Timing data was measured as the total time each participant took to complete all the tasks for one block. Participants had the same number and type of tasks for each chart in a pairwise comparison. This data is shown in Figure 13. + +Slanted perspective chart: We did not observe a significant difference in task completion time ($p = 0.687$, $r = 0.06$) between the slanted perspective chart and a traditional bar chart. + +Stepped perspective chart: We observed that participants' task completion time was significantly shorter ($p < 0.001$, $r = 0.53$) using our stepped perspective chart (med = 186.5 s) than a scale-stack bar chart (med = 313.5 s). We did not observe a difference ($p = 0.988$, $r = 0.00$) between our stepped perspective chart and a broken-axis bar chart. + +Circular perspective chart: We did not observe a significant difference in task completion time ($p = 0.140$, $r = 0.21$) between our circular perspective chart and a traditional bar chart. There was also no significant difference ($p = 0.702$, $r = 0.06$) between our circular perspective chart and a radial bar chart. + +#### 4.2.2 Accuracy + +We examine the absolute percentage of error when determining statistical significance of accuracy results; we compute this for each task by comparing each participant's numerical response with the true correct value. + +![01963ea8-5293-79ae-9332-d7d43c40feee_6_941_152_690_463_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_6_941_152_690_463_0.jpg) + +Figure 14: Accuracy results for block one of our evaluation, comparing the slanted perspective chart (orange) to a traditional bar chart (blue).
Results are grouped by task type. Each dot represents the percentage of error of an individual response. For this and subsequent charts, each box denotes the 95% confidence interval of the median. + +![01963ea8-5293-79ae-9332-d7d43c40feee_6_940_825_692_331_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_6_940_825_692_331_0.jpg) + +Figure 15: Accuracy results for block two, evaluating the stepped perspective chart compared to the scale-stack bar chart. + +For sorting tasks and filtering tasks, which had non-numerical responses, error rate was determined by counting the number of mistakes made compared to the true answer, and deducting "points" for each incorrect response. For example, in a filtering task to identify the number of cities with population below 100,000, if ten cities meet this criterion, a response that gives only nine correct cities would result in an error rate of $1/10 = 10\%$. + +We did not observe a significant trend in the directionality of error for any task block. For each block of tasks, results were corrected for multiple comparisons to control the false discovery rate. Median results for each block of the evaluation are shown in Figures 14 to 18. + +Slanted perspective chart: Between this and a traditional bar chart, no significant difference was shown in the median error rate for any task type ($p > 0.4$, $r < 0.3$ for all tasks; see Figure 14). + +Stepped perspective chart: Between this and a broken-axis bar chart, there was no significant difference in accuracy demonstrated for any task type ($p > 0.6$, $r < 0.3$ for all tasks; see Figure 16).
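To make the analysis pipeline concrete, the following sketch is our illustration, not the authors' code: the function names, the normal approximation, and the effect-size convention $r = |Z|/\sqrt{N}$ are all assumptions. It combines a two-sided Mann-Whitney U test, Benjamini-Hochberg correction to control the false discovery rate within a block, and the point-deduction error rate described above for filter tasks.

```python
import math

# Illustrative analysis sketch (assumed pipeline, not the authors' code).

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Returns (p, r), with r = |Z| / sqrt(N) as the assumed effect-size convention."""
    n1, n2 = len(a), len(b)
    # U = number of (a_i, b_j) pairs where a_i wins (+0.5 for ties).
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail probability
    r = abs(z) / math.sqrt(n1 + n2)
    return p, r

def benjamini_hochberg(pvals, q=0.05):
    """Booleans marking which hypotheses survive FDR control at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank                        # largest rank passing its threshold
    passed = [False] * m
    for rank, i in enumerate(order, start=1):
        passed[i] = rank <= k
    return passed

def filter_error_rate(true_items, response_items):
    """One 'point' deducted per missed or spurious item, as in the 1/10 example."""
    mistakes = len(set(true_items) ^ set(response_items))
    return mistakes / len(true_items)
```

For instance, a filter response naming nine of ten correct cities yields a `filter_error_rate` of 0.1, matching the 10% example in the text.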
+ +The stepped perspective chart significantly outperformed the scale-stack bar chart for tasks related to determining magnitude difference between large and small values (stepped med = 11.87, scale-stack med = 48.72, $p = 0.047$, $r = 0.32$), determining magnitude difference between two small values (stepped med = 10.04, scale-stack med = 29.64, $p = 0.035$, $r = 0.34$), and finding extremum (stepped med = 0.53, scale-stack med = 12.12, $p < 0.001$, $r = 0.76$). We did not observe a significant difference in error rate for tasks related to retrieving small values from the chart ($p = 0.263$, $r = 0.17$). Results are shown in Figure 15. + +![01963ea8-5293-79ae-9332-d7d43c40feee_7_156_154_696_330_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_7_156_154_696_330_0.jpg) + +Figure 16: Accuracy results for block three, evaluating the stepped perspective chart compared to a broken-axis bar chart. + +![01963ea8-5293-79ae-9332-d7d43c40feee_7_161_612_695_325_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_7_161_612_695_325_0.jpg) + +Figure 17: Accuracy results for block four, evaluating the circular perspective chart compared to a radial bar chart. + +Circular perspective chart: No significant difference was shown in the median error rate for tasks performed with a traditional bar chart and our circular perspective chart ($p > 0.7$, $r < 0.2$ for all task types; see Figure 18). + +Compared to the circular perspective chart, participants were significantly more accurate using a radial bar chart for tasks related to retrieving values (circular med = 5.56, radial med = 0.00, $p = 0.031$, $r = 0.35$) and sorting (circular med = 8.33, radial med = 0.00, $p = 0.031$, $r = 0.36$). No significant difference was demonstrated for tasks determining magnitude difference, either between small and large values ($p = 0.361$, $r = 0.13$) or between two small values ($p = 0.361$, $r = 0.16$).
Results are shown in Figure 17. + +#### 4.2.3 Readability and Novelty + +We gathered participants' opinions on the different types of visualizations in a post-study questionnaire. Responses are visible in Figure 19. The two most well-known types of visualizations, the traditional bar chart and the broken-axis bar chart, were the most well-received. The other types of visualizations were less familiar to participants and received lower scores, with the circular perspective chart receiving slightly less favourable scores. + +Slanted perspective chart: According to the post-study questionnaire, the slanted perspective chart was the most well-received of our three designs. Two participants described the chart as "straightforward" (P1, P8). P12 stated that they preferred the slanted perspective chart because it allows the user to "see the whole range of data, unchanged, and still read the smaller values." + +Stepped perspective chart: Several participants indicated that they felt the design of the scale-stack bar chart was complicated, which made it more difficult to use. P10 felt it was "hard to go back and forth" between scales when using this chart. + +![01963ea8-5293-79ae-9332-d7d43c40feee_7_939_154_695_326_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_7_939_154_695_326_0.jpg) + +Figure 18: Accuracy results for block five, evaluating the circular perspective chart compared to a traditional bar chart. + +Circular perspective chart: Two participants felt that the circular perspective chart was more suitable as an artistic representation of data. P6 felt it was an effective chart type for "making an impact." + +The circular perspective chart received the most "very hard to use" scores out of all the types of visualizations; nine out of twenty-four participants found it difficult to retrieve large values from the circular perspective chart. P10 noted that they would often "lose count when the perspective got smaller for the higher numbers." 
Nine out of twenty-four participants indicated that the effect of the perspective was too extreme in the circular perspective chart. + +## 5 DISCUSSION + +When compared to traditional chart types, we saw similar results for our three different designs; none of these comparisons showed a significant difference in accuracy rate for any of the evaluated task types. This suggests that, due to size constancy, the use of perspective did not impact participants' ability to visually interpret values. + +Slanted perspective chart: While we initially hypothesized that reading small values would be easier for participants using a slanted perspective chart than with a traditional bar chart, the results did not demonstrate a significant difference in the accuracy or completion time of the two charts. Since our introduced charts are unfamiliar visualizations for the general public, it is unsurprising that participants were more comfortable with traditional methods, as indicated by the post-study evaluation and interview. + +Participants performed similarly with both types of charts. This suggests that, consistent with size constancy, the use of perspective did not hinder their ability to perform tasks. However, there was no evidence that participants retrieved small values more accurately with our designs, as we hypothesized they would. In fact, the median error rate was exactly the same for tasks of this type with both the slanted perspective chart and a traditional bar chart. + +Stepped perspective chart: As with the slanted perspective chart, the stepped perspective chart showed no significant difference in accuracy or timing compared to the traditional comparison method, in this case the broken-axis bar chart. + +Participants were able to complete tasks significantly more quickly and accurately with the stepped perspective chart than the scale-stack bar chart, another unfamiliar visualization.
This reinforces that axis-breaking methods have a negative effect on visual perception, as suggested by Correll et al. [3]. These results also suggest that the use of perspective allowed for a more intuitive understanding of the visualization than the scale-stack bar chart for representing datasets with important variation at different scales. Charts that utilize perspective and size constancy may be a viable alternative to axis-breaking techniques for data visualization. + +Circular perspective chart: The circular perspective chart did not show a difference in accuracy compared to a traditional bar chart. However, the comparison fixed-viewing-space visualization, the radial bar chart, showed significantly higher accuracy than the circular perspective chart for sorting and value-retrieving tasks. + +![01963ea8-5293-79ae-9332-d7d43c40feee_8_184_160_1419_407_0.jpg](images/01963ea8-5293-79ae-9332-d7d43c40feee_8_184_160_1419_407_0.jpg) + +Figure 19: Qualitative results of our user study evaluation. + +A previous evaluation performed by Goldberg and Helfman suggested that task completion was significantly slower in circular chart types than in traditional bar charts, due in part to the placement of labels relative to the chart's data [7]. One of the evaluated charts was a radial area graph, which has similar label placement features to the circular perspective chart. However, we did not observe a significant difference in task completion time between the circular perspective chart and either the radial bar chart or a traditional bar chart. + +Participant feedback about the circular perspective chart was mixed but generally more negative than for the other charts shown in the study; some participants felt that the chart was visually interesting but perhaps not suitable for retrieving data in the same way as the other evaluated charts.
In the interview portion of the evaluation, several participants indicated that for larger values, the bars became too severely compressed in the circular perspective chart. + +Based on error rates and participant feedback, it seems that the circular perspective chart design is often not suitable for accurate reading of values, as some participants stated the design was disorienting or confusing. However, participants also felt that the design was impactful, and it may be appropriate for artistic visualizations. + +While it is promising that participants achieved similar accuracy with our designs and traditional chart types, there were limitations to our evaluation. Our evaluation has demonstrated that size constancy is a viable solution to certain types of visualization challenges, and further studies could evaluate its potential when applied to data visualization. + +## 6 CONCLUSION + +We have introduced three novel chart designs, called perspective charts, to address limitations of traditional bar charts caused by undesirable scaling factors in a fixed viewing space. Our designs can open up new possibilities for visualizing datasets with multi-scale variation using the natural perception of size constancy. We provide design rationale for our three chart designs and evaluate their usability in a user study. + +The evaluation showed no significant difference in performance between our slanted and stepped perspective charts and the corresponding traditional visualizations, a traditional bar chart and a broken-axis bar chart. This suggests that the use of perspective did not affect participants' ability to perform tasks in these types of charts. + +Participants performed tasks significantly more quickly and accurately with the stepped perspective chart than with the scale-stack bar chart, another recent visualization design intended to represent datasets with important data at multiple scales, in three out of four task types.
The circular perspective chart showed less accurate results than a radial bar chart for some tasks. + +Our circular perspective chart was generally not well-received by participants. Some participants raised concerns about the compression applied to large values in the chart, and others expressed that it may be more suitable as an artistic data visualization. + +Results for H1 and H2 indicate that the use of perspective projection in data visualization is worth further examination. The designs of the slanted perspective chart and stepped perspective chart were positively received by participants and performed comparably to traditional methods in our evaluation. The stepped perspective chart in particular performed well compared to an existing visualization design, the scale-stack bar chart, for our use case of visualizing data with important variation at multiple scales. + +Further evaluation could reinforce the results observed here. Based on our evaluation results, the slanted and stepped perspective charts particularly merit more in-depth evaluation as alternatives to existing chart types. + +## 7 ACKNOWLEDGMENTS + +The authors would like to thank Lora Oehlberg for her guidance with the analysis of our evaluation results. + +## REFERENCES + +[1] R. Amar, J. Eagan, and J. Stasko. Low-level components of analytic activity in information visualization. In Proceedings of the 2005 IEEE Symposium on Information Visualization, INFOVIS '05, pp. 15-. IEEE Computer Society, Washington, DC, USA, 2005. doi: 10.1109/INFOVIS.2005.24 + +[2] N. Carlson. Psychology: The Science of Behaviour. Pearson Canada Inc., 4th ed., 2010. + +[3] M. Correll, E. Bertini, and S. L. Franconeri. Truncating the y-axis: Threat or menace? In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20. Association for Computing Machinery, New York, NY, USA, 2020. + +[4] C. J. Erkelens. Computation and measurement of slant specified by linear perspective.
Journal of Vision, 13(13):16-27, Nov. 2013. + +[5] K. Etemad, D. Baur, J. Brosz, S. Carpendale, and F. F. Samavati. Paisleytrees: A size-invariant tree visualization. EAI Endorsed Trans. Creative Technologies, 1(1):e2, 2014. + +[6] J. J. Gibson. Pictures, perspective, and perception. Daedalus, 89(1):216-227, 1960. + +[7] J. Goldberg and J. Helfman. Eye tracking for visualization evaluation: Reading values on linear versus radial graphs. Information Visualization, 10(3):182-195, 2011. doi: 10.1177/1473871611406623 + +[8] J. Heer, N. Kong, and M. Agrawala. Sizing the horizon: The effects of chart size and layering on the graphical perception of time series visualizations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pp. 1303-1312. ACM, New York, NY, USA, 2009. doi: 10.1145/1518701.1518897 + +[9] M. Hlawatsch, F. Sadlo, M. Burch, and D. Weiskopf. Scale-stack bar charts. Computer Graphics Forum, 32(3):181-190, 2013. + +[10] A. Karduni, R. Wesslen, I.-S. Cho, and W. Dou. Du Bois wrapped bar chart: Visualizing categorical data with disproportionate values. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20. Association for Computing Machinery, New York, NY, USA, 2020. + +[11] D. A. Keim, M. C. Hao, U. Dayal, and M. Hsu. Pixel bar charts: a visualization technique for very large multi-attribute data sets. Information Visualization, 1(1):20-34, 2002. + +[12] D. A. Keim, F. Mansmann, J. Schneidewind, and H. Ziegler. Challenges in visual data analysis. In Tenth International Conference on Information Visualisation (IV'06), pp. 9-16. IEEE, 2006. doi: 10.1109/IV.2006.31 + +[13] K. Koffka. Principles of Gestalt psychology. Routledge, 1935. + +[14] M. Luboschik, H. Schumann, and H. Cords. Particle-based labeling: Fast point-feature labeling without obscuring other visual features. IEEE Transactions on Visualization and Computer Graphics, 14(6):1237-1244, Nov. 2008. + +[15] J. D. Mackinlay, G. G. Robertson, and S. K. Card. The perspective wall: Detail and context smoothly integrated.
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '91, pp. 173-176. ACM, New York, NY, USA, 1991. + +[16] T. Munzner. Visualization analysis and design. AK Peters/CRC Press, 2014. + +[17] P. Neumann, M. S. T. Carpendale, and A. Agarawala. Phyllotrees: Phyllotactic patterns for tree layout. In EuroVis, vol. 6, pp. 59-66. Citeseer, 2006. + +[18] J. A. Polack, L. A. Piegl, and M. L. Carter. Perception of images using cylindrical mapping. The Visual Computer, 13(4):155-167, 1997. + +[19] H. Reijner. The development of the horizon graph. In Electronic Proceedings of the VisWeek Workshop From Theory to Practice: Design, Vision and Visualization, 2008. + +[20] A. Shlahova. Problems in the perception of perspective in drawing. Journal of Art & Design Education, 19(1):102-109, 2000. + +[21] D. Skau, L. Harrison, and R. Kosara. An evaluation of the impact of visual embellishments in bar charts. Computer Graphics Forum, 34(3):221-230, 2015. + +[22] J. Talbot, V. Setlur, and A. Anand. Four experiments on the perception of bar charts. IEEE Transactions on Visualization and Computer Graphics, 20:2152-2160, 2014. + +[23] C. Tyler and M. Kubovy. The rise of renaissance perspective. Science and art of perspective, 2004. + +[24] C. Ware. Information visualization: perception for design. Elsevier, 2012.
§ PERSPECTIVE CHARTS + +Category: Research + +§ ABSTRACT + +We introduce three novel data visualizations, called perspective charts, based on the concept of size constancy in linear perspective projection. Bar charts are a popular and commonly used tool for the interpretation of datasets; however, representing datasets with multi-scale variation is challenging in a bar chart due to limitations in viewing space. Each of our designs focuses on the static representation of datasets with large ranges with respect to important variations in the data. Through a user study, we measure the effectiveness of our designs for representing these datasets in comparison to traditional methods, such as a standard bar chart or a broken-axis bar chart, and state-of-the-art methods, such as a scale-stack bar chart. The evaluation reveals that our designs allow pieces of data to be visually compared at a level of accuracy similar to traditional visualizations. Our designs demonstrate advantages when compared to state-of-the-art visualizations designed to represent datasets with large outliers.
+ +Index Terms: Human-centered computing-Visualization-Visualization techniques-Information visualization. + +§ 1 INTRODUCTION + +Today we are faced with large amounts of data with varying complexity [24]. This makes the visualization of large datasets challenging, especially when viewing space is limited. Different tools and charts are suited to different types of data [16]. Bar charts are one of the most commonly used data visualizations as they are simple and easy to interpret. However, datasets with a large range, with important variation at multiple scales, present unique visualization challenges. Examples can commonly be found in population data, as illustrated in Figure 1, which shows the populations of several Canadian cities. A vertical limitation of the viewing space may require that a large amount of compression be applied to the data, which makes differences between values less readable. For example, in Figure 1, the largest value is the population of Toronto; the scale of the chart needs to be set to accommodate such large values. Showing Toronto's population in the same chart as smaller cities such as Guelph and Kingston makes it difficult to measure the population of the smaller cities. When the scaling factor increases, or when data becomes more compressed, it becomes more difficult to make comparisons between pieces of data with close values. + +The limitation that we focus on is the readability of charts with multi-scale variation in the dataset. A linear mapping between the range of the data and the height of the viewing space may result in undesirable compression of the charts. One potential solution is to use a non-linear mapping, such as a logarithmic function (see Figure 1, right). However, this type of mapping is difficult to read and understand in comparison to simple linear mappings [9]. + +How do we find a more natural solution to mapping datasets with large outliers onto a small viewing space?
We propose a new technique for visualizing data with important variation at multiple scales using perspective projection. Humans naturally perceive perspective, and are able to estimate the size of distant objects through a property known as size constancy [2]. Using simple linear perspective, geometric proportions can be used to measure the size and relative differences of objects [4]. + +Our first design, which we call the slanted perspective chart, shows a bar chart that is slanted backwards from the viewing plane, such that it is viewed in perspective (see Figure 1, bottom). As the lower part of the graph appears closer to the reader, small values in the dataset become larger in comparison to a traditional bar chart. + +Figure 1: Canadian cities with a population of more than 150,000, in a traditional bar chart (left), a bar chart with a logarithmic scale (right), and a slanted perspective chart (bottom). + +The main problem with the solution of slanting a traditional bar chart is that larger values in the dataset become compressed due to the perspective projection. This may make large values more difficult to read and compare. + +Our next chart, the stepped perspective chart, is designed to address the issue of scaling large values in our slanted perspective chart, while also improving the readability of small values. In bar charts, space in some parts of the chart is often wasted due to large differences in values or outliers in the dataset. We can reduce the amount of wasted space by visualizing this area in an extreme slant. This puts only the less important range of the data at an extreme angle; each bar's value is still measurable in an area that is perpendicular to the view (see Figure 2, left). + +This design is intended to resemble a staircase; we can insert multiple bends in the axis in a single chart to compress multiple areas of the chart and eliminate multiple areas of unused space (see Figure 2, right).
Since the tops of the bars are not slanted or foreshortened in the stepped perspective chart, the values are emphasized more strongly than in the slanted perspective chart. + +The stepped perspective chart is conceptually similar to a traditional broken-axis bar chart (see Figure 3), which also addresses the issue of wasted space in areas where there are large gaps in the data. However, since a broken-axis bar chart essentially cuts out a portion of the graph, the ability to visually estimate and compare data is lost, unlike in our stepped perspective chart. + +Both our slanted perspective chart and stepped perspective chart contain some wasted space around the upper corners of the viewing space. To eliminate areas of unused space wherever possible, we introduce a third type of perspective chart, called the circular perspective chart (see Figure 4). Our design for this chart is inspired by the impression of looking up at tall buildings and skyscrapers from a low vantage point. The horizontal axis of the chart is mapped to a circle, with the vertical axis extending away from the reader's view. This chart occupies a consistent viewing space regardless of the scale of the data or the number of entries in the dataset. + +Figure 2: Left: A stepped perspective chart. Right: A stepped perspective chart with multiple bends in the axis. + +Figure 3: A broken-axis bar chart. + +The readability challenges that we discuss for datasets with multi-scale variation can be addressed using dynamic visualization methods, such as focus-plus-context; however, we focus on a static method of addressing these issues. We introduce a new class of charts comparable to traditional static bar charts, and note that commonly used interactive techniques for bar charts can also be used with our perspective charts. + +To evaluate our visualizations, we conducted a user study with twenty-four participants.
The study quantitatively measured the speed and accuracy with which users could read data from our charts in comparison to traditional methods, such as a standard bar chart or a broken-axis bar chart, and state-of-the-art methods, such as a scale-stack bar chart. We also performed a qualitative evaluation of our three designs. Participants generally responded favorably to our visualizations; they read data as accurately as with the traditional methods in fifteen out of seventeen task types, and performed more strongly than a recent method, scale-stack bar charts [9], in three out of four tasks. + +Figure 4: Population of cities in the state of Wisconsin in the United States, in a circular perspective chart. Each line marks an increment of 50,000. + +§ 2 BACKGROUND AND RELATED WORK + +Given the increasing size and complexity of available datasets, finding clear and readable methods of visualization is becoming more challenging [12]. Traditional methods such as bar charts are not always a practical choice when visualizing datasets with a large range with respect to important variations in the data [11]. In this section, we first provide a short review of research on visualizing complex datasets using variations of bar charts. Since we use perspective projection in our charts, we then provide a short review of literature on the ways that perspective affects human perception. + +§ 2.1 VISUALIZING DATA WITH MULTI-SCALE VARIATION + +When a range of data is mapped to a bar chart, a scaling factor is applied such that all of the data can be represented in the viewing space. In datasets with large outliers, it may not be possible to fit the chart into a limited viewing space without applying scaling that decreases legibility. To address this problem, alternatives to bar charts are used in some applications. Karduni et al.
wrap bars above a certain threshold back over the y-axis in their Du Bois wrapped bar chart [10]; a similar technique is used in Reijner's horizon graphs [19], which Heer et al. [8] evaluated as effective. Hlawatsch et al. compare their scale-stack bar charts with logarithmic and broken bar charts for the visualization of datasets with a large scale [9]. An example of a scale-stack bar chart is shown in Figure 6. + +Figure 5: A radial bar chart. + +One traditional alternative to bar charts for this use case is the broken-axis bar chart. Broken-axis bar charts eliminate areas of unused space between values in a bar chart, visualized as a discrete jump in values in the y-axis. However, truncating the y-axis of a chart in this manner has been shown to negatively affect the perception of scale in datasets [3]. We compare broken-axis bar charts to our stepped perspective chart in our evaluation. + +Charts scaled with a logarithmic function are also sometimes used to represent datasets with a large range. However, this type of scale is not typically used in bar charts, as it may be difficult to interpret given that it is non-linear [9]. + +§ 2.2 VARIATIONS OF BAR CHARTS + +There exist several proposed solutions to common problems with bar charts. In cases where a guaranteed $1:1$ aspect ratio may be desirable for a visualization, a circular chart such as a radial bar chart may be suitable (see Figure 5). A radial bar chart occupies a fixed viewing space regardless of the scale of its data. Luboschik's work on particle-based map label placement [14] highlights the use of circular charts in geospatial data visualization, where point-based icons are useful. Despite their popularity, circular chart types are generally discouraged by visualization experts, as they tend to be more difficult to read than a traditional bar chart [7]. + +Skau et al.
evaluate the impact of visual embellishments in bar charts [21], taking into account human perception and aesthetic factors in their analysis. The results of their evaluation show that simple embellishments like rounded or triangular bars have strong effects on human perception, and in some tasks will negatively affect performance. Their evaluation found that humans rely on strong lines at the ends of the bars to accurately estimate values. In our stepped perspective charts, the tops of the bars remain visible and perpendicular to the view in each cluster of data. + + < g r a p h i c s > + +Figure 6: A scale-stack bar chart. The chart represents one dataset at three different scales stacked on top of one another. + +§ 2.3 HUMAN PERCEPTION + +Perspective, in combination with lighting, distance and angle, contributes to human perception of information [6]. The visual shape and size of an object change as the object's distance and orientation change relative to the viewer [20]. However, the concept of size constancy explains that the perception of an object's size does not change with the object's distance from the viewer [18]. This is true even for two-dimensional representations of three-dimensional scenes (see Figure 7). This is due to humans' natural ability to account for perspective and the reduction of the projected size of an object when estimating its true size [23]. Size constancy is one of the types of natural constancy in human perception of the distance and scale of objects. We use this feature of perception in the design of our perspective charts. + +Mackinlay et al. [15] use perspective projection in their technique called the Perspective Wall. This interactive technique addresses data with "wide aspect ratios" by placing the area of focus on a flat plane, with surrounding contextual data placed on planes slanted away from the viewer. 
Other aspects of human perception are used in the design of various hierarchical data visualizations, such as those described by Gestalt psychology principles [13] of closure [17] and continuity [5]. + +§ 3 METHODOLOGY + +We propose the use of perspective as a mapping of bar charts in three different designs: the slanted, stepped, and circular perspective charts. In this section, we present design rationale and methods for creating our three different perspective charts. + +§ 3.1 SLANTED PERSPECTIVE CHARTS + +Slanted perspective charts, as shown in Figure 1, are similar to traditional bar charts, but have the vertical axis of the chart slanting away from the viewer. This design is inspired by drawings and images that portray one-point perspective, i.e. images with a single vanishing point, as shown in the photograph in Figure 7. We use a simple 3D environment and set a predefined camera setup to avoid user input for 3D interaction. Slanting the chart brings smaller values closer to the viewer, while moving larger values away. + +The slant in the vertical axis of the chart can be achieved either by viewing the chart from a lower angle, or by maintaining the same viewpoint and instead slanting the chart plane backwards from the viewer in three-dimensional space. We choose the latter option in order to maintain a consistent viewing space. Slanting the chart moves large values in the chart away from the viewer, and decreases the space between scale lines. + + < g r a p h i c s > + +Figure 7: An example of single-point perspective. Due to size constancy, we perceive the height of the statues in the red and blue boxes to be equal. In image space, the statue in the red box is half the height of the statue in the blue box. + +We restrict the foreshortening ratio in order to limit the compression of large values as they move away from the viewer. The foreshortening ratio measures how objects viewed at an angle appear to be shorter than their true measurement. 
We slant the y-axis of the chart at a fixed angle $\theta$, which controls the foreshortening ratio ${f}_{r}$ of the slanted line $L$ compared to the viewing plane $V$ : + +$$ +{f}_{r} = \frac{L}{V} = \sec \left( \theta \right) . +$$ + +For example, when $\theta = {60}^{ \circ }$, the foreshortening ratio ${f}_{r}$ is 2. In general, $1 \leq {f}_{r} < \infty$ where $0 \leq \theta < \frac{\pi }{2}$. + +We avoid slanting the chart at extreme angles in order to control the foreshortening ratio ${f}_{r}$ and maintain readability of large values. Figure 9 demonstrates how a change in $\theta$ impacts foreshortening. + +§ 3.2 STEPPED PERSPECTIVE CHARTS + +Our stepped perspective chart, as seen in Figure 2, resembles a staircase showing multiple "tiers" of data. According to these tiers, we divide the range of the data into subranges ${R}_{1},{T}_{1},{R}_{2},{T}_{2},\ldots ,{R}_{n}$ (see Figure 10) where each ${R}_{i}$ is a cluster of the data and ${T}_{i}$ are transitions. Each of these subranges represents a rectangular region of the chart. To create the stepped perspective chart we use a vertical view plane with a view angle ${\theta }_{v} = {0}^{ \circ }$ for the ${R}_{i}$, and an extreme slant (${\theta }_{v} = {60}^{ \circ }$ for Figure 10) for the transitive regions ${T}_{i}$. + +The stepped perspective chart is comparable to traditional broken-axis bar charts, which are also intended to address issues associated with large gaps between values in a dataset. However, broken-axis bar charts have been shown to negatively affect the perception of scale in datasets [3]. In a traditional broken-axis bar chart, without the use of labels, it is impossible to visually compare values on opposing sides of the break in the axis. In the stepped perspective chart, the area within the gap is still visible, as it runs at a different angle rather than being cut out of the chart entirely (see Figure 10). 
This way, visual estimation is still possible, and the reader is able to perceive the approximate size of the gap. + +The amount of the transitional region of the chart that is visible can be adjusted. Figure 8 shows varying heights from which the chart can be viewed. While the axes are always bent at an angle of ${90}^{ \circ }$ , the height of the camera affects the view angle. A height that is too low results in a high view angle, with scale lines positioned so closely together that they are no longer readable, while a low view angle lessens the impact of the separate regions of the chart. We choose a height that is just high enough to allow the viewer to distinguish between scale lines. The exact appropriate height is dependent on the resolution and size at which the chart is viewed. + +§ 3.3 CIRCULAR PERSPECTIVE CHARTS + +As seen in Figure 4, our circular perspective chart is inspired by the perception of tall buildings as viewed from a low vantage point, converging on a singular vanishing point. In this chart, the bars are placed in a closed polygon and extend away from the viewer, converging at a vanishing point at the center of the polygon. + +We create our circular perspective chart by bending the horizontal axis to remove areas of unused space and accommodate a larger number of values in a limited viewing space. We can imagine that the entire bar chart is divided into multiple smaller sub-charts that are slanted individually (see Figure 11). The slanted sub-charts are then rotated to form a closed polygon. In the extreme case, each sub-chart is allocated to only a single value (see Figure 4). When there is no preferred clustering to create sub-charts, we use this extreme case as the main design of our circular perspective chart. By wrapping the horizontal axis of the chart to form a closed polygon, the chart is contained within a consistent view. 
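As a rough numerical check of the geometry above (a hypothetical sketch: the function names are ours, and the 1.2 M maximum is an assumed example value consistent with the scale-line counts reported for Figure 12):

```python
import math

def foreshortening_ratio(theta_deg):
    """f_r = sec(theta): apparent shortening of an axis slanted at
    theta degrees relative to the viewing plane (Section 3.1)."""
    return 1.0 / math.cos(math.radians(theta_deg))

def num_scale_lines(max_value, increment):
    """Scale lines needed to cover the data range at a given increment."""
    return math.ceil(max_value / increment)

# At theta = 60 degrees the slanted axis is foreshortened by a factor of 2.
print(round(foreshortening_ratio(60), 6))  # 2.0

# Assumed 1.2M maximum value: increments of 50,000 need 24 scale lines,
# increments of 150,000 need 8.
print(num_scale_lines(1_200_000, 50_000), num_scale_lines(1_200_000, 150_000))  # 24 8
```

The ratio grows without bound as the slant angle approaches 90 degrees, which is why we restrict the angle rather than choosing an extreme slant.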
+ +The vantage point of the viewer is a potential variable to use in interactive visualization using the circular perspective chart. Figure 12 shows an example of a low and a high viewing height for the chart shown in Figure 4. In the left chart, each scale line represents an increment of 50,000 for a total of twenty-four scale lines. The right chart’s scale lines represent an increment of 150,000 for a total of eight scale lines. To make efficient use of space in the circular perspective chart, the scale should be chosen such that bars are compressed as little as possible while avoiding the issue of closely converging scale lines. The occurrence of this issue is dependent on the size and resolution at which the chart is displayed. + +The use of circular chart types is generally discouraged by visualization professionals [7]. However, they are frequently used in practice due to their known 1:1 aspect ratio, independent of the number of bars represented. This property is also present in our circular perspective chart. Luboschik demonstrates that circular chart types are useful in geospatial data visualization [14]. The use of bar charts in spatial data visualization may present limitations in the available viewing space, hence it is worth exploring a circular chart type that has a consistent aspect ratio. + +§ 4 EVALUATION + +We conducted a within-subjects user study to evaluate the readability and novelty of perspective charts. The study quantitatively measured the accuracy and speed with which users answered a series of questions based on data shown in various charts, and collected participants' opinions of the three types of perspective charts in a qualitative study. + +In our user study, we evaluate the following hypotheses: + +H1 For data with important variation at multiple scales, small values are more easily readable in a slanted perspective chart than in a traditional bar chart. 
+ +H2 The stepped perspective chart allows for faster and easier estimation and comparison of values than broken-axis bar charts and scale-stack bar charts, other axis-breaking methods for visualizing datasets with large outliers. + + < g r a p h i c s > + +Figure 8: A stepped perspective chart viewed at three different heights, resulting in varying view angles and amounts of viewing space occupied by the transitional region of the chart. Left: ${\theta }_{v} = {80}^{ \circ }$ . Center: ${\theta }_{v} = {60}^{ \circ }$ . Right: ${\theta }_{v} = {40}^{ \circ }$ + + < g r a p h i c s > + +Figure 9: A comparison of methods for slanting a traditional bar chart. The left chart's vertical axis is slanted away from the viewer at an angle of ${30}^{ \circ }$ , stretching the axis in the process. The chart on the right is slanted at an angle of ${60}^{ \circ }$ . + + < g r a p h i c s > + +Figure 10: An example of a dataset with two clear clusters ${R}_{1}$ and ${R}_{2}$ , and a transitive range ${T}_{1}$ . + +H3 In a dataset with important variation at multiple scales, small values are more easily readable in a circular perspective chart than in a traditional bar chart, while occupying a consistent viewing space. The ease-of-use of the chart should be comparable to existing fixed-viewing-space visualizations like the radial bar chart. + +We describe one hypothesis per perspective chart design. The hypotheses are designed based on the concept of size constancy. In each block of tasks, smaller values, or values that appear "closer" to the user should be more easily readable due to the increased scale, without sacrificing readability of larger values, which appear farther away from the user and therefore visually smaller. + +H1 compares a traditional bar chart to the slanted perspective chart, a simple modification of a traditional visualization that introduces three-dimensional perspective. 
H2 compares the stepped perspective chart to other chart types that use axis-breaking methods for showing important variation at multiple scales. Since the stepped perspective chart uses a bend in the axis to show a large difference in scale between values, we choose to compare this design to existing chart designs that feature breaks in the y-axis. We evaluate whether the ability to visualize the area within the axis break allows values on either side to be more easily compared. H3 compares the circular perspective chart to a traditional bar chart and a radial bar chart. This is to evaluate the circular perspective chart's performance compared to an existing circular chart type as well as a traditional chart type. + +§ 4.1 STUDY DESIGN + +We performed studies on an individual basis over the course of approximately 60 minutes per participant. Each participant was shown a series of various types of charts and answered a list of questions based on the data in these charts. Each participant performed tasks based on common visualization task taxonomies [1, 22]. Participants answered questions based on a traditional bar chart, a radial bar chart, a broken-axis bar chart, a scale-stack bar chart [9], as shown in Figure 6, and our slanted, stepped, and circular perspective charts. + +§ 4.1.1 TASKS + +We designed six types of tasks based on visualization taxonomies [1, 22]. The task types used are: + + * Retrieve Value - "What is the population of Franklin?" + + * Determine Magnitude Difference - "How much larger is Milwaukee than Oak Creek? (For example, 2x larger, 3.5x larger)" + + < g r a p h i c s > + +Figure 11: The circular perspective chart is created by grouping sections of a large chart into several smaller sub-charts. In this example we have five sub-charts. Each of the sub-charts is individually slanted, then rotated to form a closed polygon. 
+ + < g r a p h i c s > + +Figure 12: A circular perspective chart showing population of cities in Alberta, with two different scaling factors. In the top chart, each line marks an increment of 50,000 . In the bottom chart, each line marks an increment of 150,000 . + + * Determine Range - "What is the range of the data? (Smallest population to largest population)" + + * Find Extremum - "Which city has the smallest population?" + + * Filter - "List all cities with a population of less than 100,000 ." + + * Sort - "Sort the cities by population from smallest to largest." + +§ 4.1.2 METHODOLOGY + +Each participant completed five task blocks (B1 - B5), during which they completed a set of tasks using one chart type followed by a matching set of tasks using a second chart type: + +B1: traditional bar chart / slanted perspective chart + +B2: scale-stack bar chart / stepped perspective chart + +B3: broken-axis bar chart / stepped perspective chart + +B4: radial bar chart / circular perspective chart + +B5: traditional bar chart / circular perspective chart + +Within each block, each participant first performed a series of either four or five different tasks (see Section 4.1.1) using a chart of the first type, then completed a matching set of tasks using a chart of the second type that visualized a different data set. We maintained the same block, task, and dataset order for all participants, but varied the order of the chart types within each block. For example, in B1 half of the participants completed their first five tasks using a traditional bar chart. The other half started by completing the same set of tasks using a slanted perspective chart that visualized the exact same data. + +We chose this blocking scheme for the evaluation in order to maintain a pairing between our designs and existing chart types, and tailor task types within the blocks based on the charts used. 
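The within-block counterbalancing described above can be sketched as follows (a hypothetical illustration: the block table and function name are ours, based on the pairings B1-B5):

```python
# Chart pairings for the five blocks (existing chart type, perspective chart).
BLOCKS = {
    "B1": ("traditional bar", "slanted perspective"),
    "B2": ("scale-stack bar", "stepped perspective"),
    "B3": ("broken-axis bar", "stepped perspective"),
    "B4": ("radial bar", "circular perspective"),
    "B5": ("traditional bar", "circular perspective"),
}

def chart_order(participant_id, block):
    """Half of the participants see the existing chart type first; the
    other half start with the perspective chart. Block, task, and
    dataset order stay fixed for everyone."""
    first, second = BLOCKS[block]
    if participant_id % 2 == 0:
        return (first, second)
    return (second, first)
```

For example, `chart_order(0, "B1")` starts with the traditional bar chart, while `chart_order(1, "B1")` starts with the slanted perspective chart.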
+ +Each participant completed a total of forty-two tasks - five tasks (determine magnitude difference between a small and a large value, determine magnitude difference between two small values, filter, find extremum, and retrieve small value) for each chart type in B1, and four tasks for each chart type in B2-5. B2-5 had three tasks of the same type (determine magnitude difference between a small and a large value, determine magnitude difference between two small values, and retrieve small value), and one additional task of a varying type. Participants completed a find extremum task for B2, a retrieving large value task for B3, a sorting task for B4 and a filtering task for B5. + +After completing the main set of tasks, participants answered a short post-study questionnaire evaluating each type of chart used in the study. This was followed by a short verbal interview where we further gathered their opinions on the charts used in the study. + +§ 4.1.3 PARTICIPANTS + +We recruited 24 participants (12 female) using posters distributed across our local campus as well as via word of mouth. We based this cohort size on those used in similar evaluations [9] [8]. Twenty-two participants were students at the time of the study; 16 participants studied in STEM fields, while the remaining participants worked or studied in arts, business or social sciences. + +A post-hoc power analysis was performed on the results of our evaluation for each task type in each block with our sample size of 24 participants, using GPower 3.1. Among these analyses, the lowest reported power was 0.74 . Thus for each task type in our evaluation we have at least a 74% probability of finding true significance, given our sample size of 24 . + +§ 4.1.4 DATASETS + +Since tasks were divided into five different blocks for each participant, we used two unique datasets in each block for a total of ten unique datasets. We chose real datasets that satisfied our use case of clusters of data across a large range. 
Six datasets showed population data across various regions, two represented pollution data, and two represented precipitation data. We used data that was likely to be unfamiliar to the participants to reduce potential bias resulting from preexisting knowledge about the data. We did this by using data from regions that were not geographically close to the location where the study was performed, or by obscuring the names of the locations in the datasets ("City 1" instead of "Vancouver", etc.). + +§ 4.1.5 TEST ENVIRONMENT + +Before beginning the study, participants were given an explanation of the consent process, monetary compensation and risks associated with the study, as approved by the Conjoint Faculties Research Ethics Board of the University of Calgary. After indicating their consent, participants completed a short pre-questionnaire, then proceeded to the main portion of the study. Tasks were completed on paper in a prepared booklet provided to participants. + + < g r a p h i c s > + +Figure 13: Results showing the total time for participants to complete all tasks in a block of the evaluation. For this and subsequent charts, the bar in the grey box represents the median error rate. The p-value and effect size (r) of each task type is shown in the table. Each box denotes the 95% confidence interval of the median. Each dot represents the task completion time of an individual participant. + +Each chart presented to participants included scale and axis labels in a consistent font style and size across charts. For the circular perspective chart, the chart's scale was labeled in the top-left corner of the visualization, as in Figure 4. Bars were unshaded to maintain simplicity in the chart designs. The appearance of charts used in the evaluation is comparable to the charts in Figures 1, 5 and 6. + +§ 4.2 RESULTS + +We compare task completion time and percentage of error between sets of tasks. 
The Shapiro-Wilk test indicates that our data does not follow a normal distribution, so we compare methods using a Mann-Whitney U test. As a result, we examine the median error rate for each task. For each test we report effect sizes (r) and p-values. All data is shared on the Open Science Framework (https://osf.io/w3fce/?view_only=23ff1dded0b74363a68ec86419c9c373). + +§ 4.2.1 TIME + +Timing data was measured as the total amount of time each participant took to complete all the tasks for one block. Participants had the same number and type of tasks for each chart in a pairwise comparison. This data is shown in Figure 13. + +Slanted perspective chart: We did not observe a significant difference in task completion time $\left( {p = {0.687},r = {0.06}}\right)$ between the slanted perspective chart and a traditional bar chart. + +Stepped perspective chart: We observed that participants completed tasks significantly faster $\left( {p < {0.001},r = {0.53}}\right)$ using our stepped perspective chart $\left( {\mathrm{{med}} = {186.5}\mathrm{\;s}}\right)$ than a scale-stack bar chart $\left( {\mathrm{{med}} = {313.5}\mathrm{\;s}}\right)$ . We did not observe a difference $(p = {0.988}, r = {0.00})$ between our stepped perspective chart and a broken-axis bar chart. + +Circular perspective chart: We did not observe a significant difference in task completion time $\left( {p = {0.140},r = {0.21}}\right)$ between our circular perspective chart and a traditional bar chart. There was also no significant difference $\left( {p = {0.702},r = {0.06}}\right)$ between our circular perspective chart and a radial bar chart. + +§ 4.2.2 ACCURACY + +We examine the absolute percentage of error when determining statistical significance of accuracy results; we compute this for each task by comparing each participant's numerical response with the true correct value. 
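The accuracy measure above can be written down directly (a minimal sketch; the function name is ours, not from the paper):

```python
def abs_pct_error(response, truth):
    """Absolute percentage of error of a participant's numerical
    response against the true value from the dataset."""
    return abs(response - truth) / abs(truth) * 100.0

# A response of 110,000 against a true value of 100,000 is a 10% error.
print(abs_pct_error(110_000, 100_000))  # 10.0
```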
+ + < g r a p h i c s > + +Figure 14: Accuracy results for block one of our evaluation, comparing the slanted perspective chart (orange) to a traditional bar chart (blue). Results are grouped by task type. Each dot represents the percentage of error of an individual response. For this and subsequent charts, each box denotes the 95% confidence interval of the median. + + < g r a p h i c s > + +Figure 15: Accuracy results for block two, evaluating the stepped perspective chart compared to the scale-stack bar chart. + +For sorting tasks and filtering tasks, which had non-numerical responses, error rate was determined by counting the number of mistakes made compared to the true answer, and deducting "points" for each incorrect response. For example, in a filtering task to identify the number of cities with population below 100,000, if ten cities fulfill this criterion, a response that gives only nine correct cities would result in an error rate of $1/{10} = {10}\%$ . + +We did not observe a significant trend in the directionality of error for any task block. For each block of tasks, results were corrected for multiple comparisons to control the false discovery rate. Median results for each block of the evaluation are shown in Figs. 14 to 18. + +Slanted perspective chart: Between this and a traditional bar chart, no significant difference was shown in the median error rate for any task type ($p > {0.4}, r < {0.3}$ for all tasks, see Figure 14). + +Stepped perspective chart: Between this and a broken-axis bar chart, there was no significant difference in accuracy demonstrated for any task type ($p > {0.6}, r < {0.3}$ for all tasks, see Figure 16). 
+ +The stepped perspective chart significantly outperformed the scale-stack bar chart for tasks related to determining magnitude difference between large and small values (stepped med $= {11.87}$ , scale-stack med $= {48.72},p = {0.047},r = {0.32}$ ), determining magnitude difference between two small values (stepped med $= {10.04}$ , scale-stack med $= {29.64},p = {0.035},r = {0.34}$ ), and finding extremum (stepped med $= {0.53}$ , scale-stack med $= {12.12},p < {0.001}$ , $r = {0.76})$ . We did not observe a significant difference in error rate for tasks related to retrieving small values from the chart $(p = {0.263}$ , $r = {0.17})$ . Results are shown in Figure 15. + + < g r a p h i c s > + +Figure 16: Accuracy results for block three, evaluating the stepped perspective chart compared to a broken-axis bar chart. + + < g r a p h i c s > + +Figure 17: Accuracy results for block four, evaluating the circular perspective chart compared to a radial bar chart. + +Circular perspective chart: No significant difference was shown in the median error rate for tasks performed with a traditional bar chart and our circular perspective chart $(p > {0.7},r < {0.2}$ for all task types) (see Figure 18). + +Compared to the circular perspective chart, participants were significantly more accurate using a radial bar chart for tasks related to retrieving values (circular med $= {5.56}$ , radial med $= {0.00}$ , $p = {0.031},r = {0.35})$ and sorting (circular med $= {8.33}$ , radial ${med} = {0.00},p = {0.031},r = {0.36})$ . No significant difference was demonstrated for tasks determining magnitude difference, either between small and large values $\left( {p = {0.361},r = {0.13}}\right)$ , or two small values $\left( {p = {0.361},r = {0.16}}\right)$ . Results are shown in Figure 17. + +§ 4.2.3 READABILITY AND NOVELTY + +We gathered participants' opinions on the different types of visualizations in a post-study questionnaire. Responses are visible in Figure 19. 
The two most well-known types of visualizations, the traditional bar chart and the broken-axis bar chart, were the most well-received. The other types of visualizations were less familiar to participants and received lower scores, with the circular perspective chart receiving slightly less favourable scores. + +Slanted perspective chart: According to the post-study questionnaire, the slanted perspective chart was the most well-received of our three designs. Two participants described the chart as "straightforward" (P1, P8). P12 stated that they preferred the slanted perspective chart because it allows the user to "see the whole range of data, unchanged, and still read the smaller values." + +Stepped perspective chart: Several participants indicated that they felt the design of the scale-stack bar chart was complicated, which made it more difficult to use. P10 felt it was "hard to go back and forth" between scales when using this chart. + + < g r a p h i c s > + +Figure 18: Accuracy results for block five, evaluating the circular perspective chart compared to a traditional bar chart. + +Circular perspective chart: Two participants felt that the circular perspective chart was more suitable as an artistic representation of data. P6 felt it was an effective chart type for "making an impact." + +The circular perspective chart received the most "very hard to use" scores out of all the types of visualizations; nine out of twenty-four participants found it difficult to retrieve large values from the circular perspective chart. P10 noted that they would often "lose count when the perspective got smaller for the higher numbers." Nine out of twenty-four participants indicated that the effect of the perspective was too extreme in the circular perspective chart. 
+ +§ 5 DISCUSSION + +When compared to traditional chart types, we saw similar results for our three different designs; none of these comparisons showed a significant difference in accuracy rate for any of the evaluated task types. This suggests that due to size constancy, the use of perspective did not impact participants' ability to visually interpret values. + +Slanted perspective chart: While we initially hypothesized that reading small values would be easier for participants using a slanted perspective chart than in a traditional bar chart, the results did not demonstrate a significant difference in the accuracy or completion time of the two charts. Since our introduced charts are unfamiliar visualizations for the general public, it is unsurprising that participants were more comfortable with traditional methods, as indicated by the post-study evaluation and interview. + +Participants performed similarly with both types of charts. This demonstrates that the use of perspective did not hinder their ability to perform tasks, due to size constancy. However, there was no evidence that participants retrieved small values more accurately with our designs as we hypothesized they would. In fact, median error rate was exactly the same for tasks of this type with both the slanted perspective chart and a traditional bar chart. + +Stepped perspective chart: As with the slanted perspective chart, the stepped perspective chart showed no significant difference in accuracy or timing compared to the traditional comparison method, in this case the broken-axis bar chart. + +Participants were able to complete tasks significantly more quickly and accurately with the stepped perspective chart than the scale-stack bar chart, another unfamiliar visualization. This reinforces that axis-breaking methods have a negative effect on visual perception, as suggested by Correll et al. [3]. 
These results also suggest that the use of perspective allowed for a more intuitive understanding of the visualization than the scale-stack bar chart for representing datasets with important variation at different scales. Charts that utilize perspective and size constancy may be a viable alternative to axis-breaking techniques for data visualization. + +Circular perspective chart: The circular perspective chart did not show a difference in accuracy compared to a traditional bar chart. However, the comparison fixed-viewing-space visualization, the radial bar chart, showed significantly higher accuracy than the circular perspective chart for sorting and value-retrieving tasks. + + < g r a p h i c s > + +Figure 19: Qualitative results of our user study evaluation. + +A previous evaluation performed by Goldberg and Helfman suggested that task completion speed was significantly lower in circular chart types than in traditional bar charts, due in part to the placement of labels relative to the chart's data [7]. One of the evaluated charts was a radial area graph, which has similar label placement features to the circular perspective chart. However, we have not observed a significant difference in task completion time between the circular perspective chart and either the radial bar chart or a traditional bar chart. + +Participant feedback about the circular perspective chart was mixed but generally more negative than the other charts shown in the study; some participants felt that the chart was visually interesting but perhaps not suitable for retrieving data in the same way as the other evaluated charts. In the interview portion of the evaluation, several participants indicated that for larger values, the bars became too severely compressed in the circular perspective chart. 
+ +Based on error rates and participant feedback, it seems that the circular perspective chart design is not often suitable for accurate reading of values, as some participants stated the design was disorienting or confusing. However, participants also felt that the design was impactful, and may be appropriate for artistic visualizations. + +While it is promising that participants had similar accuracy between our designs and traditional chart types, there were limitations to our evaluation. Further studies could evaluate the potential of size constancy applied to data visualization. Our evaluation has demonstrated that it is a viable solution to certain types of visualization challenges. + +§ 6 CONCLUSION + +We have introduced three novel chart designs, called perspective charts, to address limitations of traditional bar charts caused by undesirable scaling factors in a fixed viewing space. Our designs can open up new possibilities for visualizing datasets with multi-scale variation using the natural perception of size constancy. We provide design rationale for our three chart designs and evaluate their usability in a user study. + +Evaluation showed no significant difference in performance between traditional visualizations, a traditional bar chart and a broken-axis bar chart, and our slanted and stepped perspective charts. This suggests that the use of perspective did not affect participants' ability to perform tasks in these types of charts. + +Participants performed tasks significantly more quickly and accurately with the stepped perspective chart than with the scale-stack bar chart, another recent visualization design intended to represent datasets with important data at multiple scales, in three out of four task types. The circular perspective chart showed less accurate results than a radial bar chart for some tasks. + +Our circular perspective chart was generally not well-received by participants. 
Some participants raised concerns about the compression applied to large values in the chart, and others expressed that it may be more suitable as an artistic data visualization. + +Results for H1 and H2 indicate that the use of perspective projection in data visualization is worth further examination. The designs of the slanted perspective chart and stepped perspective chart were positively received by participants and performed comparably to traditional methods in our evaluation. The stepped perspective chart in particular performed well compared to an existing visualization design, the scale-stack bar chart, for our use case of visualizing data with important variation at multiple scales. + +Further evaluation could reinforce the results observed here. Based on our evaluation results, the slanted and stepped perspective charts particularly merit more in-depth evaluation as alternatives to existing chart types. + +§ 7 ACKNOWLEDGMENTS + +The authors would like to thank Lora Oehlberg for her guidance with the analysis of our evaluation results.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/o18PAn04GD/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/o18PAn04GD/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..4729aee389e96df8f42ecc3e71a6e36d94837552 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/o18PAn04GD/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,199 @@ +# "I want to recycle batteries, but it's inconvenient": A Study of Non-rechargeable Battery Recycle Practices and Challenges + +Category: Research + +## Abstract + +Non-rechargeable batteries are widely used in electronic devices and can cause environmental issues if not recycled properly. However, little is known about the challenges that people might encounter when they recycle non-rechargeable batteries. We first conducted an online survey with 106 participants to understand their practices and challenges of reusing and recycling non-rechargeable batteries. We then interviewed 12 participants to understand the potential reasons behind their behaviors. Our results show that although it is common to store used batteries temporarily, many eventually do not recycle them for reasons such as the inconvenience of recycling, not knowing how to recycle batteries, and the high perceived effort of recycling. Moreover, we highlight the challenges associated with their common battery reuse and recycle strategies. We present design considerations and potential solutions for both individuals and communities to promote sustainable battery recycle behaviors.
+ +Index Terms: Human-centered computing-Human-computer interaction-Empirical studies in HCI; + +## 1 INTRODUCTION + +Non-rechargeable batteries, also known as primary batteries, are commonly used in portable electronic devices (e.g., remote controls, stereo headsets), and their market is expected to grow at a compound annual growth rate of around 3% to nearly \$19 billion by 2022 [2]. Americans purchase nearly 3 billion primary batteries each year [1]. + +Although recycling batteries is beneficial to the environment [3] and is encouraged by governments (e.g., [1]), only 36% of used batteries were estimated to be collected and 29% recycled in the European Union in 2015 [1]. Collectively, each person in the US discards 8 primary batteries per year [1]. By 2025, approximately 1 million metric tons of spent battery waste will have accumulated [12]. However, little is known about how people recycle used batteries and what the potential challenges are. To fill this gap, we sought to answer the following two research questions (RQs): + +- RQ1: What are the practices and challenges of reusing and recycling used batteries? + +- RQ2: What are the design opportunities to improve reusing and recycling used batteries? + +We first conducted a survey study with 106 participants living in North America to understand their practices and challenges of reusing and recycling used batteries. Informed by the results, we further conducted in-depth interviews to explore people's willingness and barriers to adopting environmentally friendly approaches to dealing with used batteries, as well as the major factors that affect participants' decision-making when reusing and recycling batteries. + +Our results show that although it is common to store used batteries temporarily, many eventually do not recycle them for reasons such as the inconvenience of recycling, lack of information about recycling batteries, and the high perceived effort of recycling batteries.
Moreover, the practices of reusing batteries depend on financial status, the motivation to conduct environmentally friendly behavior, and the availability of tools (e.g., battery testers) and instructions. To our knowledge, this is the first study that provides both a quantitative and qualitative understanding of battery reuse and recycle practices and challenges from users' perspectives. + +## 2 BACKGROUND AND RELATED WORK + +### 2.1 Regulations of Collecting Batteries + +Previous research showed that people should avoid simply discarding household batteries along with municipal solid waste, since collection, separation, and recycling processes are accessible worldwide [4]. For example, Japan, the United States, and European countries have established nationwide official recycling programs with collection centers. End users form the first link of the collection chain by returning spent batteries, while distributors and manufacturers form the second by collecting batteries free of charge. Therefore, we decided to explore individual behaviors of dealing with batteries and how the community or government has helped individuals engage in environmentally friendly activities regarding battery recycling. + +A study [15] found that the environmental impact of collection activities, which is closely tied to transportation, can outweigh the environmental benefit. To minimize the negative impact of transportation, several countries in Europe applied the method of "integrated waste management" to integrate the collection of batteries and other recyclable material. In Sweden, for example, batteries were collected by the same trucks that transported other recyclables, such as paper [15]. A project in the Netherlands would extract old batteries from household waste with magnets [3].
We also aimed to study how, given people's living environments and behaviors, "integrated waste management" could be practiced to reduce the side effects of battery recycling activities. + +### 2.2 Individual Battery Recycle Behavior + +Researchers investigated people's environmental attitudes and opinions and found a positive relationship between general environmental attitudes and recycling actions, though the relationship is fairly tenuous [8, 11, 14, 18]. Several research studies suggested that to increase citizens' participation in recycling, it is useful to educate the public about the significance of recycling and to inform them of how and where to recycle [16, 17]. Previous research suggested that the major reasons people refuse to recycle used batteries are the cost of time and effort and a lack of material reward [13, 19]. To be more specific, consumers are more willing to recycle objects when it is convenient to access and use the recycling equipment [20]. Moreover, when the disposal method is integrated into everyday life, individuals feel encouraged to take actions that they assume are sustainable [19]. According to interviews with families with children in the Netherlands [7], most families expressed dissatisfaction that recycling bins are not always available when they recycle household items, like glass or plastic bottles. Zhang and Wakkary [20] conducted a survey that explored one generation's actions and attitudes toward recycling e-waste, as well as the barriers to recycling. The results show that individuals' practices vary largely. In the analysis process, they classified the recycling actions into five categories, including transferring a product to other users, returning it to the manufacturer, and reusing the object.
Accordingly, we aimed to further investigate the major reasons for and barriers to reusing and recycling batteries in everyday life for North American residents. This would provide design implications for human-computer interaction researchers to best design tools and methods to assist people with reusing and recycling batteries. + +## 3 SURVEY + +### 3.1 Survey Design + +The survey included 14 multiple-choice, Likert-scale, and short-answer questions, organized into themes to elicit data about participants' practices and opinions regarding the devices with non-rechargeable batteries that they use, reusing batteries, and recycling batteries, as well as their knowledge of non-rechargeable battery regulations, which differ from place to place. + +### 3.2 Procedure and Participants + +We distributed the survey via email lists from a university and social media platforms, such as Facebook and Slack, between March and November 2020. We received 107 responses, removed one duplicate response, and performed the analyses on the 106 valid responses. + +79% of the participants (N=84) were from the USA and 21% (N=22) were from other countries. 54 participants were between 18 and 25, 42 were between 26 and 35, 7 were between 36 and 50, and 4 were above 50. + +### 3.3 Findings + +#### 3.3.1 Reusing Batteries + +Participants were presented with a scenario, "the TV remote control uses three single-use batteries, and you find that the batteries cannot provide enough power," and asked to infer the batteries' conditions and describe their potential solutions. + +While about a third (32%) of the participants believed that all the batteries were completely drained, the majority (67%) believed that some of the batteries were only partially drained. Nonetheless, only 52% chose to keep some of the batteries for later reuse, and 41% chose to replace the batteries with new ones all at once.
This highlights a gap between participants' understanding of the used batteries and their potential actions to deal with them. + +One major challenge of reusing batteries is finding out how much power is left in a used battery. However, 74% of the participants reported having no or little experience with testing the remaining power of a used battery. Only 2 participants (less than 2%) reported having such experience. + +#### 3.3.2 Recycling Batteries + +Participants were asked to report whether and how they might recycle used batteries. 77% (N=82) of the participants chose to "store the used batteries temporarily", 32% (N=32) chose to "throw the used batteries into a regular trash can", and only 14% (N=15) chose to "take the used batteries to a recycling center". The most frequently mentioned barriers to visiting a recycling center were as follows: I do not know where to recycle the batteries (N=69), I do not collect many batteries (N=48), it is inconvenient to visit a recycling center (N=43), and I do not have incentives to do so (N=19). + +There were further challenges to recycling batteries. 70% of the participants did not know the regulations and laws of recycling batteries in their local area. Only 22% sought out resources and information about recycling batteries. + +## 4 INTERVIEWS + +To further understand the challenges of reusing and recycling used batteries and identify opportunities to improve reusing and recycling practices, we conducted a semi-structured interview study. + +### 4.1 Interview Design + +The interview consisted of 4 parts. In part 1, we described the difference between non-rechargeable batteries and rechargeable batteries to avoid confusion about the concepts.
In addition, we asked a kick-off question about recent experiences of using batteries to prepare participants for exploring the problems that they encountered in daily life. Part 2 focused on people's practices and knowledge under two typical scenarios to learn their willingness to replace and reuse batteries. Part 3 covered previous experience in recycling batteries; we also asked about experience with other recyclable objects and looked for good and bad examples of making personal recycling convenient. Part 4 unveiled our idea that people can donate old batteries to, or receive them from, others to make full use of the batteries. We sought participants' opinions on this idea and identified elements that affect their decision-making. + +### 4.2 Participants + +We recruited 12 participants from the survey respondents, social media platforms, and word-of-mouth. 4 participants identified as male and 7 as female; 7 participants were 18-24 years old, 4 were 25-35 years old, and 1 was 36-50 years old. 11 participants lived in the US and 1 in Canada. 5 participants had recycling experience in more than 1 country and 1 participant had related experience in 2 states in the US. Each participant was compensated with \$10. + +### 4.3 Procedure + +The study obtained approval from the Institutional Review Board of Rochester Institute of Technology. We conducted the study with participants remotely via an online meeting platform, such as Zoom or Google Meet. Each interview session lasted for about 30 to 40 minutes. All interview sessions were audio-recorded using the Voice Memos application and transcribed using Otter.ai. + +### 4.4 Analysis + +Two authors first performed open coding and discussed disagreements on coding to reach consensus. They then performed affinity diagramming to derive themes emerging from the codes.
+ +### 4.5 Findings + +Our analysis revealed the rationales and challenges associated with the practices of reusing and recycling batteries, as well as potential design opportunities. + +#### 4.5.1 Reusing Batteries + +Battery usage behaviors vary depending on people's tolerance of the perceived interruptions when products run out of power. Participants tend to change all batteries at once for products that would cause high perceived disruptions to their user experience when running out of power, for example, the controller of a video game console. In contrast, they would be more willing to change only one of the batteries for products that would cause low perceived interruptions when running out of power, such as a TV remote controller. + +Our survey results show that when the batteries cannot serve a product, 67% of the respondents believe that the batteries are only partly used. In the interview, we investigated whether they are willing to measure the voltage in the battery. + +Only two out of the 12 participants indicated that they had battery testers to measure the remaining power or voltage in the batteries and decide whether they would reuse the batteries. All other participants showed little interest in knowing the leftover power in the batteries and indicated that they would replace all batteries together. We found several reasons. First, it was perceived to be time-consuming to test with new batteries and replace the old batteries one by one. Second, they did not feel the need to save batteries, in particular when they did not have many devices using single-use batteries. + +#### 4.5.2 Recycling Batteries + +70% of the survey respondents were not confident about their knowledge of battery recycling regulations. The interview study further explored people's willingness to learn the related regulations.
+ +Willingness to learn about regulations: Seven out of the twelve participants indicated that they would like to learn the regulations about recycling batteries; two participants did not care much about the regulations; three would not actively seek to learn the regulations but would learn when encountering them. P10 mentioned that "I don't actively seek it on my own initiative, but if I accidentally see it, I would click in." + +Participants were interested in specific content: 7 of 12 participants wanted to learn where they could discard or recycle batteries; beyond that, they mentioned laws, specific rules and regulations, and knowledge about how batteries are processed. + +8 out of 12 participants were aware that they should recycle non-rechargeable batteries or that they could not discard these batteries in the regular trash. However, only 1 participant knew both where to discard non-rechargeable batteries and the regulations where he/she lived. Only 2 participants had experience in recycling non-rechargeable batteries. 4 participants indicated that they knew where to discard non-rechargeable batteries in other countries or districts, including China, Canada, Taiwan, and Turkey. + +Challenge. In the survey, only 16% of participants had experience in visiting recycling places to recycle batteries. In the interview, we tried to figure out why they did not go to recycling places (willingness) and what they would consider an easy way to recycle (challenge). + +All the interview participants had, to some extent, attempted to dispose of used batteries in the right place at some point. P5 said that "I know that they should be recycled in some way but I don't know where. So I threw it in the trash." However, only P8 had the habit of recycling batteries because his working space had a disposal location.
+ +P4 used to recycle batteries but felt it was hard to recycle batteries after she moved to a new place 6 years ago, and she mentioned her emotions: "Once we (our family) were very distressed about disposing of the battery, but we didn't do deep search on the Internet. I feel this is a very simple thing, but so hard to find one. ... We used to live in a small town. There was a university near where we lived, and there were battery recycling places in it." + +Two participants indicated that it was inconvenient to recycle batteries because they did not use many batteries, so the effort felt disproportionate. The other two participants thought that recycling batteries was part of the state laws and regulations. Although P11 was unfamiliar with the regulations and laws, he felt it was reasonable for residents to recycle batteries because "this is how we move the societies forward by being strict on these environmentally friendly things that aren't too difficult to do." P3 felt that the current law was not strict enough to regulate residents' behavior and that people are less likely to care about it. He mentioned that "People are not just willing to push this into law so they are less likely to care about this. ... I don't care as much because it's not a law." + +#### 4.5.3 Design Opportunities + +Our interviews also revealed three design opportunities. + +Making recycling convenient to people. Locations that participants and their family members visited to recycle batteries included convenience stores, universities, apartment leasing offices, electronic retailer stores, and their workplaces. One common characteristic of these locations was that they were all convenient for participants to visit. In particular, when they had to run an errand near these recycling locations, they would be more willing to visit them.
+ +Participants proposed several places to position battery recycling facilities (e.g., a bin): 1) places near where people live, for example, next to regular trash drop-off locations in a residential community; 2) locations near or in the stores that people visit regularly, for example, grocery stores, wholesale stores, or convenience stores; 3) libraries: P4 and P12 both mentioned that libraries are places that families with children and students often visit; and 4) recreational centers where people do sports and attend recreational classes. + +Learning from practices of recycling other items. We asked participants about other items that they recycled in their daily lives and why they were able to recycle them. The frequently recycled items included paper and newspaper, cardboard, and plastic bottles and cans. The main reason these items were recycled often was that participants could simply put them next to their regular trash bins and wait for waste management to collect them. This finding shows again that convenience is key to recycling. Moreover, small rewards (e.g., store credits) were given by certain grocery stores to encourage people to recycle plastic bottles and cans. However, P5 felt that rewards might not work for recycling batteries because the tedious process of collecting and bringing batteries to certain locations, as well as the hygienic issues associated with used batteries, outweigh the small rewards grocery stores could provide. + +Making information about how and where to recycle batteries easy to access. 65% of the survey participants did not know where they should send batteries. Thus, we investigated the reasons in the interviews. Results show that participants sought out battery recycle regulations and locations primarily from their social circles, such as their parents, spouses, friends, and landlords.
Surprisingly, few participants used searching online (e.g., Google) as a way to find out battery recycle regulations and locations in their local areas. "Because we cannot find it via the internet, at least we tried to google...but we couldn't find it." (P5) + +Participants proposed six approaches to delivering battery recycle information that would be convenient for them to spot: 1) on the packages of the devices that use batteries, which could show information about how to recycle batteries or a QR code that can be easily scanned by a smartphone; 2) on the battery brands' websites, where an important consideration is to make sure such websites would appear at the top of the search results list; 3) local governments, which could inform their residents of relevant information via text messages, emails, news reports, or bulletins; 4) landlords, housing agents, or dorm managers, who could be helpful for people who recently moved to the area and are not familiar with local battery recycle regulations and guidelines; 5) non-profit organizations, which could use public promotion activities and educational videos to show people the alarming consequences of discarding batteries without properly recycling them; and 6) waste management companies, which could also help residents recycle batteries, such as by setting up a hotline. + +The preferred formats to deliver battery recycling regulations and guidelines were infographics, short videos, social media posts, or advertisements. Infographics could be displayed on a battery's packaging or on the packaging of a product that uses batteries. + +## 5 DISCUSSION + +Informed by the findings of both the survey and interview studies, we present design considerations (DCs) for designers and researchers to consider when helping people better reuse and recycle batteries. + +DC1: Help Users Understand the State of Used Batteries and How They Could Reuse Them.
Our studies found two prominent challenges of reusing batteries: 1) it was unclear whether a battery was fully drained or still had some power left; and 2) it was unclear which other products could reuse the batteries. Reasons for these challenges included lacking tools and knowledge for testing the power of a used battery and having no easy access to information about products that could take used batteries. + +To help implement this design consideration, we propose the conceptual designs of a portable battery tester and its companion mobile app to illustrate how to lower the barrier for the general public to test used batteries and find information about how to reuse and recycle them. Figure 1 (a) and (b) show two views of the battery tester, which contains three slots to place three common types of non-rechargeable batteries. The battery tester could connect with a smartphone via Bluetooth and send the test results to be displayed in the companion mobile app. Figure 1 (c) and (d) show the UIs of the app, which display the battery test results and the recommended actions for two batteries in good and bad conditions respectively. Making all of the information needed for reusing and recycling batteries available in one app would save users the effort and time of performing random searches online, which were reported to be less effective by our participants and also prior studies [16, 17]. + +![01963e91-97ad-7403-b15a-62e136a66220_3_148_147_736_937_0.jpg](images/01963e91-97ad-7403-b15a-62e136a66220_3_148_147_736_937_0.jpg) + +Figure 1: Battery Tester Prototype and companion Mobile App: (a) A side view of the prototype; (b) a top view of it; (c) App UI that shows that the tested AAA battery is in a good condition and offers a list of products that it could be reused in; (d) App UI that shows the tested AA battery was ready to be disposed of or recycled and offers information about how it could be properly handled.
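+ +The app's recommendation step described above could, for instance, map a measured voltage to a suggested action. The following is a minimal, hypothetical sketch of such logic; the voltage thresholds, the `recommend_action` function, and the low-drain product list are illustrative assumptions of ours and do not come from the study:

```python
# Hypothetical sketch of the companion app's recommendation logic (DC1).
# Thresholds and product names are illustrative assumptions, not study data.

LOW_DRAIN_PRODUCTS = ["TV remote control", "wall clock", "kitchen timer"]

def recommend_action(measured_voltage: float, nominal_voltage: float = 1.5) -> dict:
    """Map a tested battery's voltage to a suggested action.

    A fresh alkaline cell reads about 1.5 V; a partially drained cell may
    still power low-drain devices even when high-drain devices reject it.
    """
    ratio = measured_voltage / nominal_voltage
    if ratio >= 0.9:
        # Near full charge: keep using the battery as-is.
        return {"state": "good", "action": "keep using", "suggested_products": []}
    if ratio >= 0.73:
        # Partially drained: suggest reuse in low-drain devices.
        return {"state": "partially drained", "action": "reuse",
                "suggested_products": LOW_DRAIN_PRODUCTS}
    # Effectively drained: direct the user to recycling information.
    return {"state": "drained", "action": "recycle", "suggested_products": []}

print(recommend_action(1.48)["action"])  # keep using
print(recommend_action(1.25)["action"])  # reuse
print(recommend_action(0.90)["action"])  # recycle
```

In a real app, the drained branch would also surface local recycling locations and regulations, addressing the information-access barriers our participants reported.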
+ +DC2: Make recycling batteries integrated into people's routine lives. Our studies show that inconvenience was one key barrier to recycling batteries. This finding corroborates prior findings that the cost of time and effort and the lack of material rewards hinder people from recycling batteries [13, 19]. Our participants who did recycle batteries often had relatively easy access to recycling locations, such as their workplace, nearby stores, universities, and community and homeowner's associations. This finding echoes the suggestion of previous research [19]. + +To help implement this design consideration, we recommend HCI designers and researchers consider the successful practices of recycling other materials, such as cardboard, cans, and plastic bottles, and embed battery recycling in people's everyday lives [9]. For example, waste management companies could collect recyclable materials weekly along with the trash, and communities could set aside a separate space next to the regular trash for residents to drop off recyclable materials. Furthermore, some grocery stores have recycling facilities for people to recycle bottles and cans and even offer small rewards. These potential example solutions align with the concept of "integrated waste management" [3] and should be considered when integrating the collection of batteries with other waste streams, to minimize not only individuals' recycling efforts but also the negative impact associated with the transportation of batteries. + +DC3: Build community support to help people recycle batteries. Our studies also found that those who did manage to recycle batteries often received some community support. For example, one interviewee mentioned that her previous community manager would notify the residents to carry used batteries to the community office once or twice a year.
+ +To help implement this design consideration, we propose to build online community platforms for people to share information about and exchange used batteries. A successful example platform for people to exchange used items is Craigslist [6]. There are several challenges to overcome. First, it remains unclear how to motivate people to participate in such platforms. One approach might be to gamify the process to make it fun and rewarding to participate. Second, unlike other used items, used batteries may raise safety concerns if not handled properly during the sharing process. Similar to our proposed design in Figure 1, it is worth designing simple approaches to checking batteries' conditions. + +## 6 LIMITATIONS AND FUTURE WORK + +First, this short paper focused on understanding the practices and challenges of reusing and recycling batteries and deriving design considerations. Although we also offered potential solutions (e.g., Figure 1), they are yet to be fully implemented and evaluated with users. Second, individuals' practices of recycling batteries may vary [7]. In our interviews, we also noticed differences between participants who lived alone and those who lived with their families or roommates. More research is needed to investigate how battery reuse and recycle practices might be affected by social factors, such as the number of people that they live with. Lastly, our analysis highlights the importance of informing individuals of recycling locations and times in their local areas. However, our participants often had difficulty finding such information. Although many online resources (e.g., earth911 [10] and call2recycle [5]) provide such information, future research should further investigate the barriers in users' information searching process and design tools to help users easily find such information.
+ +## 7 CONCLUSION + +We have conducted a survey study and an interview study to understand the practices and challenges of reusing and recycling batteries. Our results found various barriers in the way of reusing and recycling batteries. First, due to the lack of information, people do not know efficient ways to recycle batteries. Even though some may have good intentions of conducting environmentally friendly practices, people are unwilling to invest much time and effort when they cannot easily access the necessary information. Regarding reusing or making full use of batteries, people have difficulties in figuring out the remaining power of batteries efficiently. As a result, many would discard batteries that still have power left, which leads to wasted energy. Our analysis also uncovered opportunities to lower the barriers to reusing and recycling batteries. Finally, we present three design considerations and discuss potential solutions. + +## REFERENCES + +[1] Universal waste, Oct 2020. + +[2] Primary batteries market global opportunities and strategies to 2022, Jan 2021. + +[3] A. Bernardes, D. C. R. Espinosa, and J. S. Tenório. Recycling of batteries: a review of current processes and technologies. Journal of Power Sources, 130(1-2):291-298, 2004. + +[4] A. M. Bernardes, D. C. R. Espinosa, and J. A. S. Tenório. Collection and recycling of portable batteries: a worldwide overview compared to the Brazilian situation. Journal of Power Sources, 124(2):586-592, 2003. + +[5] Call2Recycle. Call2recycle - united states. https://www.call2recycle.org/, 2021. + +[6] craigslist. craigslist: jobs, apartments, for sale, services, community, and events. https://craigslist.org/, 2021. + +[7] P. de Kruyff, A. Steentjes, and S. Shahid. The alkaline arcade: a child-friendly fun machine for battery recycling. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, pp. 1-2, 2011. + +[8] T. Domina and K. Koch.
Convenience and frequency of recycling: implications for including textiles in curbside recycling programs. Environment and Behavior, 34(2):216-238, 2002. + +[9] P. Dourish. HCI and environmental sustainability: the politics of design and the design of politics. In Proceedings of the 8th ACM Conference on Designing Interactive Systems, pp. 1-10, 2010. + +[10] Earth911. Earth911 - more ideas, less waste. https://earth911.com/, 2021. + +[11] R. J. Gamba and S. Oskamp. Factors influencing community residents' participation in commingled curbside recycling programs. Environment and Behavior, 26(5):587-612, 1994. + +[12] A. Garg, L. Wei, A. Goyal, X. Cui, and L. Gao. Evaluation of batteries residual energy for battery pack recycling: Proposition of stack stress-coupled-AI approach. Journal of Energy Storage, 26:101001, 2019. + +[13] R. Hansmann, P. Bernasconi, T. Smieszek, P. Loukopoulos, and R. W. Scholz. Justifications and self-organization as determinants of recycling behavior: The case of used batteries. Resources, Conservation and Recycling, 47(2):133-159, 2006. + +[14] J. Hornik, J. Cherian, M. Madansky, and C. Narayana. Determinants of recycling behavior: A synthesis of research results. The Journal of Socio-Economics, 24(1):105-127, 1995. + +[15] D. Lisbona and T. Snee. A review of hazards associated with primary lithium and lithium-ion batteries. Process Safety and Environmental Protection, 89(6):434-442, 2011. + +[16] N. Mee. A communications strategy for kerbside recycling. Journal of Marketing Communications, 11(4):297-308, 2005. + +[17] P. O. D. Valle, E. Reis, J. Menezes, and E. Rebelo. Behavioral determinants of household recycling participation: the Portuguese case. Environment and Behavior, 36(4):505-540, 2004. + +[18] J. Vining and A. Ebreo. What makes a recycler? A comparison of recyclers and nonrecyclers. Environment and Behavior, 22(1):55-73, 1990. + +[19] L. Wagner. Overview of energy storage technologies. In Future Energy, pp. 613-631. Elsevier, 2014.
+ +[20] X. Zhang and R. Wakkary. Design analysis: understanding e-waste recycling by generation y. In Proceedings of the 2011 Conference on Designing Pleasurable Products and Interfaces, pp. 1-8, 2011. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/o18PAn04GD/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/o18PAn04GD/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..15c6d23b94b1744077df8ca5b6bee86f270fd6d8 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/o18PAn04GD/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,157 @@ +§ "I WANT TO RECYCLE BATTERIES, BUT IT'S INCONVENIENT": A STUDY OF NON-RECHARGEABLE BATTERY RECYCLE PRACTICES AND CHALLENGES + +Category: Research + +§ ABSTRACT + +Non-rechargeable batteries are widely used in electronic devices and can cause environmental issues if not recycled properly. However, little is known about the challenges that people might encounter when they recycle non-rechargeable batteries. We first conducted an online survey with 106 participants to understand their practices and challenges of reusing and recycling non-rechargeable batteries. We then interviewed 12 participants to understand the potential reasons behind their behaviors. Our results show that although it is common to store used batteries temporarily, many eventually do not recycle them for reasons such as the inconvenience of recycling, not knowing how to recycle batteries and high perceived efforts of recycling. Moreover, we highlight the challenges associated with their common battery reuse and recycle strategies. 
We present design considerations and potential solutions for both individuals and communities to promote sustainable battery recycling behaviors. + +Index Terms: Human-centered computing-Human-computer interaction-Empirical studies in HCI; + +§ 1 INTRODUCTION + +Non-rechargeable batteries, also known as primary batteries, are commonly used in portable electronic devices (e.g., remote controls, stereo headsets), and their market is expected to grow at a compound annual growth rate of around 3% to nearly $19 billion by 2022 [2]. Americans purchase nearly 3 billion primary batteries each year [1]. + +Although recycling batteries is beneficial to the environment [3] and is encouraged by governments (e.g., [1]), only 36% of used batteries were estimated to be collected and 29% recycled in the European Union in 2015 [1]. Collectively, each person in the US discards 8 primary batteries per year [1]. By 2025, approximately 1 million metric tons of spent battery waste will have accumulated [12]. However, little is known about how people recycle used batteries and what the potential challenges are. To fill this gap, we sought to answer the following two research questions (RQs): + + * RQ1: What are the practices and challenges of reusing and recycling used batteries? + + * RQ2: What are the design opportunities for improving the reuse and recycling of used batteries? + +We first conducted a survey study with 106 participants living in North America to understand their practices and challenges of reusing and recycling used batteries. Informed by the results, we further conducted in-depth interviews to explore people's willingness and barriers to adopting environmentally friendly approaches to dealing with used batteries, as well as the major factors that affect participants' decision-making about reusing and recycling batteries.
+ +Our results show that although it is common to store used batteries temporarily, many people eventually do not recycle them for reasons such as the inconvenience of recycling, a lack of information about recycling batteries, and the high perceived effort of recycling batteries. Moreover, the practices of reusing batteries depend on financial status, the motivation to engage in environmentally friendly behavior, and the availability of tools (e.g., battery testers) and instructions. To our knowledge, this is the first study that provides both a quantitative and a qualitative understanding of battery reuse and recycling practices and challenges from users' perspectives. + +§ 2 BACKGROUND AND RELATED WORK + +§ 2.1 REGULATIONS OF COLLECTING BATTERIES + +Previous research argued that people should avoid simply discarding household batteries along with municipal solid waste, because collection, separation, and recycling processes are accessible worldwide [4]. For example, Japan, the United States, and European countries have established nationwide official recycling programs with collection centers. End users form the first link of the collection chain and must return spent batteries, while distributors and manufacturers form the second link and are responsible for collecting batteries free of charge. Therefore, we decided to explore individual behaviors of dealing with batteries and how communities or governments help individuals get involved in environmentally friendly activities regarding battery recycling. + +A study [15] found that the environmental impacts of the collection activities, closely tied to transportation, can outweigh the environmental benefits. To minimize the negative impact of transportation, several countries in Europe applied the method of "integrated waste management" to integrate the collection of batteries with that of other recyclable material. In Sweden, for example, the trucks that transport paper also transport batteries [15]. A project in the Netherlands would extract old batteries from household waste with magnets [3]. We also aimed to study how, taking people's living environments and behaviors into account, "integrated waste management" can be practiced to reduce the side effects generated by battery recycling activities. + +§ 2.2 INDIVIDUAL BATTERY RECYCLE BEHAVIOR + +Researchers have investigated people's environmental attitudes and opinions and found a positive, though fairly tenuous, relationship between general environmental attitudes and recycling actions [8, 11, 14, 18]. Several studies suggested that to increase citizens' participation in recycling, it is useful to educate the public about the significance of recycling and to inform them of how and where to recycle [16, 17]. Previous research suggested that the major reasons why people refuse to recycle used batteries are the cost of time and effort and a lack of material reward [13, 19]. More specifically, consumers are more willing to recycle objects when it is convenient to access and use the recycling equipment [20]. Moreover, when the disposal method is integrated into everyday life, individuals feel encouraged to take actions that they believe are sustainable [19]. According to interviews with families with children in the Netherlands [7], most families expressed dissatisfaction that recycling bins are not always available when they recycle household items, such as glass or plastic bottles. Zhang and Wakkary [20] conducted a survey that explored one generation's actions and thoughts about recycling e-waste, as well as the barriers to recycling. The results show that individuals' practices vary widely.
In their analysis, they classified the recycling actions into five categories, including transferring a product to other users, returning it to the manufacturer, and reusing the object. Accordingly, we aimed to further investigate the major reasons for and barriers to reusing and recycling batteries in everyday life for North American residents. This would provide design implications for human-computer interaction researchers on how best to design tools and methods to assist people with reusing and recycling batteries. + +§ 3 SURVEY + +§ 3.1 SURVEY DESIGN + +The survey included 14 multiple-choice, Likert-scale, and short-answer questions, organized into themes to elicit data about participants' practices and opinions concerning the devices with non-rechargeable batteries that they use, reusing batteries, and recycling batteries, as well as their knowledge of non-rechargeable battery regulations, which differ from place to place. + +§ 3.2 PROCEDURE AND PARTICIPANTS + +We distributed the survey via email lists from a university and social media platforms, such as Facebook and Slack, between March and November 2020. We received 107 responses, removed one duplicate response, and performed the analyses on the 106 valid responses. + +79% of the participants (N=84) were from the USA and 21% (N=22) were from other countries. 54 participants were between 18 and 25, 42 were between 26 and 35, 7 were between 36 and 50, and 4 were above 50. + +§ 3.3 FINDINGS + +§ 3.3.1 REUSING BATTERIES + +Participants were presented with a scenario, "the TV remote control uses three single-use batteries, and you find that the batteries cannot provide enough power," and provided their inferences about the batteries' conditions as well as their potential solutions.
+ +While about a third (32%) of the participants believed that all the batteries were completely drained, the majority (67%) believed that some of the batteries were only partially drained. Nonetheless, only 52% chose to keep some of the batteries for later reuse, and 41% chose to replace all the batteries with new ones at once. This highlights a gap between participants' understanding of the used batteries and their potential actions to deal with them. + +One major challenge of reusing batteries is finding out how much power is left in a used battery. However, 74% of the participants reported having no or little experience with testing the remaining power of a used battery. Only 2 participants (less than 2%) reported having such experience. + +§ 3.3.2 RECYCLING BATTERIES + +Participants were asked to report whether and how they might recycle used batteries. 77% (N=82) of the participants chose to "store the used batteries temporarily", 32% (N=32) chose to "throw the used batteries into a regular trash can", and only 14% (N=15) chose to "take the used batteries to a recycling center". The most frequently mentioned barriers to visiting a recycling center were as follows: I do not know where to recycle the batteries (N=69), I do not collect many batteries (N=48), it is inconvenient to visit a recycling center (N=43), and I do not have incentives to do so (N=19). + +Furthermore, there were challenges for recycling batteries. 70% of the participants did not know the regulations and laws about recycling batteries in their local area. Only 22% had sought resources and information about recycling batteries.
+ +§ 4 INTERVIEWS + +To further understand the challenges of reusing and recycling used batteries and identify opportunities to improve reusing and recycling practices, we conducted a semi-structured interview study. + +§ 4.1 INTERVIEW DESIGN + +The interview consists of 4 parts. In part 1, we described the difference between non-rechargeable batteries and rechargeable batteries to avoid confusion about the concepts. In addition, we asked a kick-off question about participants' recent experience of using batteries to prepare them for exploring the problems that they encountered in daily life. Part 2 focused on people's practices and knowledge under two typical scenarios to learn about their practices and willingness regarding replacing and reusing batteries. Part 3 covered previous experience in recycling batteries. We also asked about experience with other recyclable objects and looked for good and bad examples of making personal recycling activities convenient. Part 4 unveiled our idea that people can donate old batteries or receive them from others to make full use of the batteries. We sought participants' opinions on this idea and identified factors that affect their decision-making. + +§ 4.2 PARTICIPANTS + +We recruited 12 participants from the survey respondents, social media platforms, and word-of-mouth. 4 participants identified as male and 7 as female; 7 participants were 18-24 years old, 4 were 25-35 years old, and 1 was 36-50 years old. 11 participants lived in the US and 1 in Canada. 5 participants had recycling experience in more than 1 country, and 1 participant had related experience in 2 states in the US. Each participant was compensated with $10. + +§ 4.3 PROCEDURE + +The study obtained approval from the Institutional Review Board of Rochester Institute of Technology. We conducted the study with participants remotely using an online meeting platform, such as Zoom or Google Meet.
Each interview session lasted about 30 to 40 minutes. All interview sessions were audio-recorded using the Voice Memos application, and the content was transcribed using Otter.ai. + +§ 4.4 ANALYSIS + +Two authors first performed open coding and discussed disagreements on coding to reach consensus. They then performed affinity diagramming to derive themes emerging from the codes. + +§ 4.5 FINDINGS + +Our analysis revealed the rationales and challenges associated with the practices of reusing and recycling batteries, as well as potential design opportunities. + +§ 4.5.1 REUSING BATTERIES + +Battery usage behaviors vary depending on people's tolerance of the perceived interruptions when products run out of power. Participants tend to change all batteries at once for products that would cause high perceived disruptions to their user experience when running out of power, for example, the controller of a video game console. In contrast, they would be more willing to change only one of the batteries for products that would cause low perceived interruptions when running out of power, such as a TV remote controller. + +Our survey results show that when the batteries cannot serve a product, 67% of the respondents believe that the batteries are only partly used. In the interview, we investigated whether they are willing to measure the voltage in the battery. + +Only two out of the 12 participants indicated that they had battery testers to measure the remaining power or voltage in the batteries and decide whether they would reuse the batteries. All other participants showed little interest in knowing the leftover power in the batteries and indicated that they would replace all batteries together. We found the following reasons.
First, it was perceived as time-consuming to test the batteries and replace all the old batteries one by one. Second, they did not feel the need to save batteries, particularly when they did not have many devices using single-use batteries. + +§ 4.5.2 RECYCLING BATTERIES + +70% of the survey respondents were not confident about their knowledge of battery recycling regulations. The interview study further explored people's willingness to learn the related regulations. + +Willingness to learn about regulations: Seven out of the twelve participants indicated that they would like to learn the regulations about recycling batteries; two participants did not care much about the regulations; three would not actively seek to learn the regulations but would learn when encountering them. P10 mentioned that "I don't actively seek it on my own initiative, but if I accidentally see it, I would click in." + +There is certain content that people are interested in: 7 of 12 participants wanted to learn where they could discard or recycle batteries. Beyond that, participants mentioned laws, specific rules and regulations, and knowledge about how batteries are processed. + +8 out of 12 participants were aware that they should recycle non-rechargeable batteries or that they could not discard these batteries in the regular trash. However, only 1 participant knew where to discard non-rechargeable batteries and the regulations where he/she lived. Only 2 participants had experience in recycling non-rechargeable batteries. 4 participants indicated that they knew where to discard non-rechargeable batteries in other countries or districts, including China, Canada, Taiwan, and Turkey. + +Challenge. In the survey, only 16% of participants had experience in visiting recycling locations to recycle batteries. In the interviews, we sought to understand why they did not go to recycling locations (willingness) and what they would consider an easy way to recycle (challenge).
+ +All the participants in the interviews had, now or in the past, attempted to some extent to dispose of batteries in the right place. P5 said that "I know that they should be recycled in some way but I don't know where. So I threw it in the trash." However, only P8 had the habit of recycling batteries, because his working space had a disposal location. + +P4 used to recycle batteries but felt it was hard to recycle batteries after she moved to a new place 6 years ago, and she mentioned her emotions: "Once we (our family) were very distressed about disposing of the battery, but we didn't do deep search on the Internet. I feel this is a very simple thing, but so hard to find one. ... We used to live in a small town. There was a university near where we lived, and there were battery recycling places in it." + +Two participants indicated that it was inconvenient to recycle batteries: because they did not use many batteries, recycling would be too much work. The other two participants thought that recycling batteries was a matter of state laws and regulations. Although P11 was unfamiliar with the regulations and laws, he felt it was reasonable for residents to recycle batteries because "this is how we move the societies forward by being strict on these environmentally friendly things that aren't too difficult to do." P3 felt that the current law was not strict enough to regulate residents' behavior, so people are less likely to care about it. He mentioned that "People are not just willing to push this into law so they are less likely to care about this. ...I don't care as much because it's not a law." + +§ 4.5.3 DESIGN OPPORTUNITIES + +Our interviews also revealed three design opportunities. + +Making recycling convenient to people. The locations that participants and their family members visited to recycle batteries included convenience stores, universities, apartment leasing offices, electronic retailer stores, and their workplaces.
One common characteristic of these locations was that they were all convenient for participants to visit. In particular, when they had to run an errand near these recycling locations, they would be more willing to visit them. + +Participants proposed several places to position battery recycling facilities (e.g., a bin): 1) places near where people live, for example, next to regular trash drop-off locations in a residential community; 2) locations near or in the stores that people visit regularly, for example, grocery stores, wholesale stores, or convenience stores; 3) libraries: P4 and P12 both mentioned that libraries are places that families with children and students often visit; and 4) recreational centers where people do sports and attend recreational classes. + +Learning from practices of recycling other items. We asked participants about other items that they recycled in their daily lives and why they were able to recycle them. The frequently recycled items included paper and newspaper, cardboard, and plastic bottles and cans. The main reason these items were often recycled was that participants could simply put them next to their regular trash bins and wait for waste management to collect them. This finding shows again that convenience is key to recycling. Moreover, small rewards (e.g., store credits) were given by certain grocery stores to encourage people to recycle plastic bottles and cans. However, P5 felt that rewards might not work for recycling batteries, because the tedious process of collecting and bringing batteries to certain locations, as well as the hygienic issues associated with used batteries, outweighs the small rewards grocery stores could provide. + +Making information about how and where to recycle batteries easy to access. 65% of the survey participants did not know where they should send batteries. Thus, we investigated the reasons in the interviews.
Results show that participants sought out battery recycling regulations and locations primarily through their social circles, such as their parents, spouses, friends, and landlords. Surprisingly, few participants searched online (e.g., on Google) to find battery recycling regulations and locations in their local areas. "Because we cannot find it via the internet, at least we tried to google...but we couldn't find it."-P5 + +Participants proposed six approaches to delivering battery recycling information in ways that would be convenient for them to spot: 1) on the packages of the devices that use batteries, which could show information about how to recycle batteries or a QR code that can be easily scanned by a smartphone; 2) on the battery brands' websites, where an important consideration is to make sure such websites would appear at the top of the search results list; 3) local governments, which could inform their residents of relevant information via text messages, emails, news reports, or bulletins; 4) landlords, housing agents, or dorm managers, who could be helpful for people who recently moved to the area and are not familiar with local battery recycling regulations and guidelines; 5) non-profit organizations, which could use public promotion activities and educational videos to show people the alarming consequences of discarding batteries without properly recycling them; and 6) waste management companies, which could also help residents recycle batteries, for example by setting up a hotline. + +The preferred formats for delivering battery recycling regulations and guidelines were info-graphics, short videos, social media posts, or advertisements. Info-graphics could be displayed on a battery's packaging or on the packaging of a product that uses batteries.
+ +§ 5 DISCUSSION + +Informed by the findings of both the survey and interview studies, we present design considerations (DCs) for designers and researchers to consider when helping people better reuse and recycle batteries. + +DC1: Help Users Understand the State of Used Batteries and How They Could Reuse Them. Our studies found two prominent challenges of reusing batteries: 1) it was unknown whether a battery was fully drained or still had some power left; and 2) it was unclear what other products the batteries could be reused for. Reasons for these challenges included lacking tools and knowledge for testing the power of a used battery and having no easy access to information about products that could take used batteries. + +To help implement this design consideration, we propose the conceptual designs of a portable battery tester and its companion mobile app to illustrate how to lower the barrier for the general public to test used batteries and find information about how to reuse and recycle them. Figure 1 (a) and (b) show two views of the battery tester, which contains three slots for three common types of non-rechargeable batteries. The battery tester could connect with a smartphone via Bluetooth and send the test results to be displayed in the mobile companion app. Figure 1 (c) and (d) show the UIs of the app, which display the battery test results and the recommended actions for two batteries in good and bad condition, respectively. Consolidating all of the information needed for reusing and recycling batteries into one app would save users the effort and time of performing scattered searches online, which were reported to be less effective by our participants and also by prior studies [16,17].
+ +Figure 1: Battery Tester Prototype and Companion Mobile App: (a) a side view of the prototype; (b) a top view of it; (c) app UI showing that the tested AAA battery is in good condition and offering a list of products in which it could be reused; (d) app UI showing that the tested AA battery is ready to be disposed of or recycled and offering information about how it could be properly handled. + +DC2: Make recycling batteries integrated into people's routine lives. Our studies show that inconvenience was one key barrier to recycling batteries. This finding corroborates prior results showing that the cost of time and effort and the lack of material rewards hinder people from recycling batteries [13, 19]. Our participants who did recycle batteries often had relatively easy access to recycling locations, such as their workplace, nearby stores, universities, and the community and homeowner's associations. This finding echoes the suggestion of previous research [19]. + +To help implement this design consideration, we recommend HCI designers and researchers consider the successful practices of recycling other materials, such as cardboard, cans, and plastic bottles, and embed battery recycling in people's everyday lives [9]. For example, waste management companies could take the recyclable materials weekly along with the trash, and the community could set aside a separate space next to the regular trash for residents to drop off recyclable materials. Furthermore, some grocery stores have recycling facilities for people to recycle bottles and cans and even offer small rewards. These potential example solutions align with the concept of "integrated waste management" [3] and should be considered when integrating the collection of batteries with other waste streams to minimize not only individuals' recycling efforts but also the negative impact associated with the transportation of batteries.
+ +DC3: Build community support to help people recycle batteries. Our studies also found that those who did manage to recycle batteries often received some community support. For example, one interviewee mentioned that her previous community manager would notify residents to bring used batteries to the community office once or twice a year. + +To help implement this design consideration, we propose building online community platforms for people to share information about and exchange used batteries. A successful example of a platform for exchanging used items is Craigslist [6]. There are several challenges to overcome. First, it remains unclear how to motivate people to participate in such platforms. One approach might be to gamify the process to make participation fun and rewarding. Second, unlike other used items, used batteries may raise safety concerns if not handled properly during the sharing process. Similar to our proposed design in Figure 1, it is worth designing simple approaches to checking batteries' conditions. + +§ 6 LIMITATIONS AND FUTURE WORK + +First, this short paper focused on understanding the practices and challenges of reusing and recycling batteries and deriving design considerations. Although we also offered potential solutions (e.g., Figure 1), they are yet to be fully implemented and evaluated with users. Second, individuals' practices of recycling batteries may vary [7]. In our interviews, we also noticed differences between participants who lived alone and those who lived with their families or roommates. More research is needed to investigate how battery reuse and recycling practices might be affected by social factors, such as the number of people that they live with. Lastly, our analysis highlights the importance of informing individuals of recycling locations and times in their local areas. However, our participants often had difficulty finding such information.
Although many online resources (e.g., earth911 [10] and call2recycle [5]) provide such information, future research should further investigate the barriers in users' information searching process and design tools to help users easily find such information. + +§ 7 CONCLUSION + +We have conducted a survey study and an interview study to understand the practices and challenges of reusing and recycling batteries. Our results identified various barriers to reusing and recycling batteries. First, due to a lack of information, people do not know efficient ways to recycle batteries. Even when they have good intentions of engaging in environmentally friendly practices, people are unwilling to invest much time and effort unless they can easily access the necessary information. Regarding reusing or making full use of batteries, people have difficulty figuring out the remaining power of batteries efficiently. As a result, many discard batteries that still have power left, which wastes energy. Our analysis also uncovered opportunities to lower the barriers to reusing and recycling batteries. Finally, we present three design considerations and discuss potential solutions.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/r6Z8apiZQt/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/r6Z8apiZQt/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..4b6224812350701fd32e40b515d8b705d6ddc123 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/r6Z8apiZQt/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,445 @@ +# BayesGaze: A Bayesian Approach to Eye-Gaze Based Target Selection + +Category: Research + +## Abstract + +Selecting targets accurately and quickly with eye-gaze input remains an open research question. In this paper, we introduce BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in an eye-gaze trajectory as a signal for selecting a target. It then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by the sampling interval to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution. Our investigation shows that BayesGaze improves target selection accuracy and speed over a dwell-based selection method and the Center of Gravity Mapping (CM) method [4]. Our research shows that both accumulating the posterior and incorporating the prior are effective in improving the performance of eye-gaze based target selection.
+ +Index Terms: Human-centered computing-Human computer interaction (HCI); Human-centered computing-Human computer interaction (HCI)-Interaction techniques; Human-centered computing-Human computer interaction (HCI)-HCI design and evaluation methods-User studies; + +## 1 INTRODUCTION + +Selecting a target with the gaze remains a central problem of eye-based interaction. Two factors make this problem challenging [18]. First, gaze input is noisy because of both inadvertent eye movements and inevitable noise in the tracking device [49]. Therefore, it is difficult for a user to move their gaze to a particular position and stabilize it for an extended period of time. Second, unlike using a mouse, where a user can confirm the selection by clicking a button, gaze-based interaction lacks an easy-to-use approach to confirming the selection, adding a layer of difficulty to the design of a selection technique [42]. Although previous research has explored target selection using dwell [15, 17], motion correlation [44], and dynamic user interfaces [26, 29, 39], quickly and accurately selecting a target with gaze input remains an open research question. + +Inspired by the literature showing that Bayes' theorem is a promising principle for handling uncertainty and noise in input signals (e.g., [4, 51]), we investigate how to apply a Bayesian perspective to determining the selected target given a gaze trajectory. Applying Bayes' theorem to gaze-based target selection raises two main challenges. First, it is not clear how to obtain the likelihood function for a gaze trajectory that contains a sequence of input signals (gaze points), i.e., the probability of observing a gaze trajectory given the target. Second, unlike touch or mouse input, which has a clear definition of the terminal moment of the input, e.g.,
lifting the finger from the touch screen or releasing the mouse button, gaze input lacks a clear delimiter of the completion of a selection action. It is therefore necessary to design a method to determine when the selection action is completed. + +To address these challenges, we introduce BayesGaze (Figure 1), a Bayesian approach for determining the selected target given a gaze trajectory. This approach first views each sampling point in a gaze trajectory as a signal for selecting a target, and then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point. The likelihood of a target being selected is based on the distance between the sampling point and the target center, and the prior probability of a target being selected is modeled by a categorical distribution and updated after a selection action. BayesGaze then accumulates the posterior probabilities over all sampling points, weighted by the sampling interval, to determine the selected target. BayesGaze advances the Center of Gravity Mapping (CM) [4] by modeling the prior and incorporating it into the process of determining the selected target. This contribution is key to improving the performance of gaze-based target selection. + +![01963e97-3956-7818-9958-11aa60dc4e7f_0_945_612_680_269_0.jpg](images/01963e97-3956-7818-9958-11aa60dc4e7f_0_945_612_680_269_0.jpg) + +Figure 1: An overview of how BayesGaze works. Given a gaze position ${s}_{i}$ sampled at time $i$ in a gaze trajectory, BayesGaze updates the accumulated interest of selecting target $t$ , denoted by ${I}_{i}\left( t\right)$ , by adding $P\left( {t \mid {s}_{i}}\right)$ weighted by the sampling interval ${\Delta \tau }$ to ${I}_{i - 1}\left( t\right)$ . $P\left( {t \mid {s}_{i}}\right)$ is the posterior probability of selecting $t$ given ${s}_{i}$ , which is calculated based on Bayes' theorem. If the accumulated interest ${I}_{i}\left( t\right)$ exceeds a threshold $\theta$ , the target $t$ is selected.
BayesGaze then updates the prior probability $P\left( t\right)$ accordingly. + +We report on a controlled experiment showing that BayesGaze improves target selection accuracy (from 82.1% to 88.3%) and speed (from 2.49 seconds per selection to 2.23 seconds) over a dwell-based selection method. BayesGaze also outperforms the CM method [4]. Overall, our investigation shows that accumulating the posterior probability and incorporating the prior are effective in improving the performance of gaze-based target selection. + +## 2 RELATED WORK + +BayesGaze builds on previous work on gaze input and on Bayesian approaches. Here we review related work in gaze-based target selection techniques, Bayesian approaches to gaze input, and gaze-tracking technology. + +### 2.1 Gaze Based Target Selection + +Gaze-based target selection is a key technique for supporting a number of gaze interaction technologies such as gaze-based text input [35], gaming [16] or smart device control [36]. Dwell-based target selection (Dwell) $\left\lbrack {{15},{17},{50}}\right\rbrack$ is the most well-known and most widely used target selection method. It requires a user to dwell their gaze on a target for a specific uninterrupted period of time (usually several hundred milliseconds to 1 or 2 seconds) to select it. Such a highly concentrated action often results in eye fatigue [33]. Much work has been devoted to improving the Dwell technique by enabling shorter dwell times, and to finding other gaze-based target selection methods. For example, letting a user adjust the dwell time manually can lead to a shorter dwell time, from 876 ms to 282 ms [27]. Previous research [15] used Fitts' law to model gaze input and suggested selecting the target once the user's gaze fixates on the target. Other works have explored adjusting the dwell time based on how likely the target is to be selected $\left\lbrack {{31},{34}}\right\rbrack$ .
+ +In addition to dwell-based methods, researchers have proposed alternatives to improve gaze-based target selection from two perspectives: handling the noisy gaze input and designing new selection actions $\left\lbrack {{18},{49}}\right\rbrack$ . To accommodate the inaccuracy of eye-gaze input, some works used dynamic expansion/zooming of the display [29,39] or new UIs, e.g., Actigaze [26], which used a set of confirmation buttons to make gaze target selection easier. Other works investigated error-aware gaze target selection, so that the inaccuracy of target selection can be tracked and the system can provide design guidelines for UIs $\left\lbrack {3,{11}}\right\rbrack$ . Gaze target selection actions are also well explored. For example, motion correlation between the target movement and gaze trajectory has been proposed to determine the selected target [44]. Actions such as blinking [7] and gaze gestures [9] have also been explored for target selection. Previous research has also used multimodal input to avoid the dwell action. For example, once the user gazes at the target, a separate device, such as a keyboard [22] or hand-held touchscreen [42], can be employed to perform the selection action. + +### 2.2 Bayesian Approaches to Target Selection + +There is growing interest in applying a Bayesian perspective to handle uncertainty in target selection. Some of this research is related to gaze input. For example, previous research has proposed probabilistic frameworks to deal with uncertainty in the input process, such as handling the uncertainty of touch actions on mobile devices $\left\lbrack {6,{45}}\right\rbrack$ and touchscreens $\left\lbrack {51}\right\rbrack$ , and also handling uncertainty in gaze-based interactions [4,32]. + +Our work is related to the recent work BayesianCommand, which uses Bayes' theorem to handle uncertainty in touch target selection and word-gesture input [51].
The fundamental difference between our work and BayesianCommand is that in our work, gaze input does not have well-defined starting or ending moments, but touch input does (i.e., landing a finger on the screen to start input, and lifting the finger to end it). Therefore, BayesianCommand cannot be applied to gaze input directly. + +Our research is also related to previous work on using a Bayesian perspective to address the gaze-to-object mapping problem, i.e., the Center of Gravity Mapping method (CM) [4]. CM is an improved version of the FM algorithm [47], which performed the best among 9 extant gaze-to-object mapping algorithms [40]. The main difference between our work and CM is that CM neither models nor updates the prior, while our approach incorporates the prior into the process of deciding the selected target, which turns out to be the primary reason why BayesGaze improves target selection accuracy and reduces selection time. Furthermore, BayesGaze is designed for the gaze target selection problem while CM is designed for the gaze-to-object mapping problem. Gaze-based target selection is a different problem from gaze-to-object mapping $\left\lbrack {4,{40}}\right\rbrack$ because the former requires a mechanism to commit the selection while the latter does not. + +### 2.3 Gaze Tracking Technology + +Gaze tracking technology is becoming increasingly mature and available. For example, a number of professional gaze trackers are available, including the Tobii 4C [23], SMI REDn [20] and EyeLink 1000 Plus [37], which cost several hundred to a few thousand dollars. Previous research has also enabled gaze tracking with off-the-shelf cameras by using a fisheye camera [2], the front-facing RGB camera of a tablet [46], or by leveraging the glint of the screen on the user's cornea [14]. Deep learning techniques have also been used to predict gaze position using Convolutional Neural Networks [19, 48].
+ +Unlike the above approaches, we enabled gaze tracking with an off-the-shelf and widely used iPad Pro equipped with a TrueDepth camera and powered by Apple's ARKit. + +## 3 BAYESGAZE: A BAYESIAN PERSPECTIVE ON GAZE TARGET SELECTION + +### 3.1 A Formal Description of the Gaze Based Target Selection Problem + +The gaze-based target selection problem can be formally described as the following research question: given a gaze trajectory, which one is the intended target among a set of candidates denoted by $T = \left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{N}}\right\}$ ? + +As shown in previous research $\left\lbrack {4,{40},{47}}\right\rbrack$ , the existing algorithms for solving the gaze-based target selection problem can be described through an interest accumulation framework: each target candidate (denoted by $t$ ) accumulates a certain amount of "time" or "interest" from gaze input, until one of them reaches a threshold (denoted by $\theta$ ) for being selected. Under this framework, the widely adopted dwell-based target selection method can be expressed as follows. + +Dwell-based Target Selection Method. Assuming that the gaze trajectory is denoted by $S = \left\{ {{s}_{1},{s}_{2},\ldots ,{s}_{K}}\right\}$ where ${s}_{i}$ is a sampling point along the gaze trajectory at time $i$ , the accumulated "interest" for a target candidate $t$ at time $i$ , denoted by ${I}_{i}\left( t\right)$ , is calculated as: + +$$ +{I}_{i}\left( t\right) = \left\{ \begin{array}{ll} {I}_{i - 1}\left( t\right) + {\Delta \tau }, & \text{ if }{s}_{i}\text{ is within the target }t \\ 0, & \text{ otherwise } \end{array}\right. \tag{1} +$$ + +where ${s}_{i}$ is the gaze position at time $i$ , and ${\Delta \tau }$ is the sampling interval. ${I}_{i}\left( t\right)$ represents the duration during which the gaze position has stayed continuously within the target candidate $t$ . If the gaze position moves outside the target, ${I}_{i}\left( t\right)$ is reset to 0.
To select a target, the eye-gaze position needs to stay continuously within a target for a period of $\theta$ . In other words, the selected target is the one (denoted by ${t}^{ * }$ ) whose accumulated selection interest ${I}_{i}\left( {t}^{ * }\right)$ first reaches $\theta$ (i.e., ${I}_{i}\left( {t}^{ * }\right) \geq \theta$ ). + +### 3.2 The BayesGaze Algorithm + +Under the framework of "accumulating selection interest", we propose BayesGaze, a Bayesian perspective on gaze-based target selection. It views each sampling point in a gaze trajectory as a signal for selecting a target, and then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point. BayesGaze then accumulates the posterior probabilities over all sampling points, weighted by the sampling interval, as the accumulated interest of selecting a target. A target candidate will be selected once the accumulated interest reaches a threshold $\theta$ . Formally, the accumulated interest of selecting a target $t$ is calculated as follows, given the sampling point ${s}_{i}$ : + +$$ +{I}_{i}\left( t\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot P\left( {t \mid {s}_{i}}\right) . \tag{2} +$$ + +The posterior $P\left( {t \mid {s}_{i}}\right)$ can be estimated according to Bayes' theorem, assuming there are $N$ target candidates: + +$$ +P\left( {t \mid {s}_{i}}\right) = \frac{P\left( {{s}_{i} \mid t}\right) P\left( t\right) }{P\left( {s}_{i}\right) } = \frac{P\left( {{s}_{i} \mid t}\right) P\left( t\right) }{\mathop{\sum }\limits_{{j = 1}}^{N}P\left( {{s}_{i} \mid {t}_{j}}\right) P\left( {t}_{j}\right) }, \tag{3} +$$ + +where $P\left( t\right)$ is the prior probability of target $t$ being the intended target without observing the current gaze input trajectory, and $P\left( {{s}_{i} \mid t}\right)$ is the probability of ${s}_{i}$ if the intended target is $t$ (the likelihood). + +BayesGaze has the following characteristics.
First, BayesGaze resumes the accumulation of selection interest from where it left off if the gaze trajectory accidentally leaves a target but returns to it later. This addresses a problem of the dwell-based method (Equation 1): if the eye-gaze position moves outside a target, the accumulated interest for selecting that target is reset to 0. Second, it weights the accumulated interest by the distance between the gaze point and the target center, through the likelihood function $P\left( {{s}_{i} \mid t}\right)$ . The closer a gaze point is to the target center, the more "interest" such a point will contribute to the target selection. Third, it updates the prior distribution of targets $\left( {P\left( t\right) }\right)$ and incorporates it into the procedure of deciding the selected target. + +In the following, we introduce how to estimate the prior distribution $P\left( t\right)$ and the likelihood $P\left( {s \mid t}\right)$ , which are key to applying BayesGaze. + +#### 3.2.1 Prior Probability Model + +This section introduces a frequency model to estimate the prior distribution $P\left( t\right)$ based on the observable target selection history. We assume that the user does not select targets randomly and that the target selection follows some distribution, e.g., Zipf's Law. This assumption is made based on the selection patterns in menu selection $\left\lbrack {8,{25},{51}}\right\rbrack$ , smartphone app launching [30], and command triggering [1, 10, 51]. All of these are tasks that gaze target selection can support. + +We model the prior distribution (i.e., a target candidate being selected prior to observing the current gaze trajectory) as a categorical distribution. More specifically, the outcome of a gaze-based selection trial that results in a selected target is viewed as a random variable $x$ whose value is one of $N$ categories (the $N$ target candidates).
The core parameter of this random variable $x$ is the parameter vector $\mathbf{p} = \left( {P\left( {t}_{1}\right) , P\left( {t}_{2}\right) ,\ldots , P\left( {t}_{N}\right) }\right)$ , which describes the probability of each category. As is common practice in Bayesian inference, we also view this parameter vector $\mathbf{p}$ as a random variable and give it a prior distribution, using the Dirichlet distribution. + +According to the properties of Dirichlet distributions, after each target selection trial we can update the expected value of the posterior of $\mathbf{p}$ as follows: + +$$ +P\left( {t}_{i}\right) = \frac{k + {c}_{i}}{k \cdot N + \mathop{\sum }\limits_{{j = 1}}^{N}{c}_{j}}, \tag{4} +$$ + +where $N$ is the number of candidate targets (e.g., the number of menu items), ${c}_{i}$ is the number of times we have observed target ${t}_{i}$ being selected, and $k$ is the pseudocount of the Dirichlet prior, a hyper-parameter of the distribution. The parameter $k$ can also be viewed as the update rate, a positive constant that controls how quickly the $P\left( {t}_{i}\right)$ are updated. Note that the prior updating model (Equation 4) is the same as the model proposed by Zhu et al. [51], although these authors do not describe it under the paradigm of categorical-Dirichlet distributions. We use the expected value of $\mathbf{p}$ (Equation 4) as the prior model in BayesGaze (Equation 3). + +This prior model matches our expectations well. When no target selection has been observed, the probability $P\left( {t}_{i}\right)$ is $\frac{k}{k \cdot N} = \frac{1}{N}$ , which means that all candidate targets have equal probability. When enough target selections have been observed, i.e., ${c}_{i} \gg k$ , we have $P\left( {t}_{i}\right) \approx \frac{{c}_{i}}{\mathop{\sum }\limits_{j}{c}_{j}}$ , which means that $P\left( {t}_{i}\right)$ can be estimated based on the frequency of ${t}_{i}$ having been selected before.
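As a concrete illustration, the Equation 4 update can be sketched in a few lines of Python (a hedged sketch; the function name and interface are ours, not from the paper):

```python
def prior_probability(counts, k):
    """Expected posterior of the categorical parameter vector p under a
    symmetric Dirichlet prior with pseudocount k, given the number of
    times each target has been selected (Equation 4)."""
    n = len(counts)
    total = sum(counts)
    return [(k + c) / (k * n + total) for c in counts]

# With no observed selections, each of the N targets gets prior 1/N:
print(prior_probability([0, 0, 0, 0, 0], k=1))  # [0.2, 0.2, 0.2, 0.2, 0.2]

# With many observations (c_i >> k), the prior approaches the raw
# selection frequencies:
print(prior_probability([80, 10, 5, 3, 2], k=1))
```

The two calls mirror the two limiting cases discussed above: an uninformed uniform prior, and a frequency-based prior once selection history accumulates.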
+ +By setting different $k$ , we can balance $P\left( {t}_{i}\right)$ between two extreme cases: 1) when $k \rightarrow + \infty$ , we have $P\left( {t}_{i}\right) \approx \frac{1}{N}$ , that is, the prior probabilities of all candidate targets are equal; 2) when $k = 0$ , we have $P\left( {t}_{i}\right) = \frac{{c}_{i}}{\mathop{\sum }\limits_{j}{c}_{j}}$ , which means that the prior probability is based only on the historical selection frequency. We later use empirical data to determine an optimal value for $k$ . + +#### 3.2.2 Likelihood Model + +The goal of this step is to estimate $P\left( {{s}_{i} \mid t}\right)$ , the likelihood of observing ${s}_{i}$ if $t$ is the intended target. Since ${s}_{i}$ is a single gaze position, a reasonable assumption is that $P\left( {{s}_{i} \mid t}\right)$ is higher if ${s}_{i}$ is closer to the center of $t$ . We follow Bernard et al. [4] and use a Gaussian density function to describe the likelihood of observing ${s}_{i}$ , a common method for modeling the likelihood for single-point target selection: + +$$ +P\left( {{s}_{i} \mid t}\right) = \frac{1}{\sqrt{{2\pi }{\sigma }^{2}}}\exp \left( {-\frac{{\begin{Vmatrix}{s}_{i} - {c}_{t}\end{Vmatrix}}^{2}}{2{\sigma }^{2}}}\right) , \tag{5} +$$ + +where ${c}_{t}$ is the center of target $t$ , $\begin{Vmatrix}{{s}_{i} - {c}_{t}}\end{Vmatrix}$ is the ${L}^{2}$ (Euclidean) norm of the vector ${s}_{i} - {c}_{t}$ , and $\sigma$ is an empirical parameter defining how concentrated the gaze points should be. The parameter $\sigma$ controls how much interest can be accumulated at a certain distance. If $\sigma$ is too small, a target accumulates high interest only when the gaze point is close to the target center, which could make the target hard to select. On the other hand, if $\sigma$ is too large, the accumulated interest for neighboring targets could become large and cause mis-selections. We estimate an optimal $\sigma$ from real data in the next section.
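The likelihood of Equation 5 is a standard Gaussian density over the gaze-to-center distance; a minimal sketch (function and variable names are ours):

```python
import math

def likelihood(s, center, sigma):
    """Gaussian likelihood P(s | t) of observing gaze point s when the
    intended target t is centered at `center` (Equation 5). Points are
    2D tuples; sigma is in the same units as the coordinates."""
    dist_sq = (s[0] - center[0]) ** 2 + (s[1] - center[1]) ** 2
    return math.exp(-dist_sq / (2.0 * sigma ** 2)) / math.sqrt(2.0 * math.pi * sigma ** 2)

# The closer the gaze point is to the target center, the higher the
# likelihood, and hence the more interest the point contributes:
near = likelihood((0.0, 0.1), (0.0, 0.0), sigma=0.28)
far = likelihood((0.0, 1.0), (0.0, 0.0), sigma=0.28)
assert near > far
```

Note that the normalizing constant $\frac{1}{\sqrt{2\pi\sigma^2}}$ cancels between the numerator and denominator of Equation 3, so only the relative magnitudes of the likelihoods affect the posterior.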
+ +After obtaining both the prior probability and the likelihood, we can use BayesGaze to perform target selection. The BayesGaze algorithm is summarized in Algorithm 1. Note that the algorithm can be run online, i.e., when a gaze point ${s}_{i}$ is sampled by the gaze tracker, the top-level for-loop can be executed to check whether a target is selected. + +Algorithm 1 BayesGaze Algorithm + +--- + +Input: Target set: $T = \left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{N}}\right\}$ , Gaze trajectory: $S = \left\{ {{s}_{1},{s}_{2},\ldots ,{s}_{K}}\right\}$ , Threshold: $\theta$ + +Output: Selected target $t$ , Selection time: ${\tau }_{\text{sel }}$ + + for ${s}_{i}$ in $S$ do + + for ${t}_{j}$ in $T$ do + + Obtain prior probability $P\left( {t}_{j}\right)$ and compute likelihood $P\left( {{s}_{i} \mid {t}_{j}}\right)$ using Equation 5; + + Compute accumulated interest ${I}_{i}\left( {t}_{j}\right)$ from Equation 2; + + if ${I}_{i}\left( {t}_{j}\right) > \theta$ then + + Update prior probability $P\left( {t}_{m}\right)$ for each ${t}_{m} \in T$ given that ${t}_{j}$ is selected, using Equation 4; + + return ${t}_{j}, i \cdot {\Delta \tau }$ + + end if + + end for + + end for + +--- + +#### 3.2.3 BayesGaze without Prior + +If we consider the prior to be a uniform distribution before every trial (i.e., $\forall {t}_{i} \in T, P\left( {t}_{i}\right) = 1/N$ ), BayesGaze is identical to the Center of Gravity Mapping (CM) algorithm [4] (referred to as the CM method hereafter), a previously proposed method for deciding a target in a gaze-to-object mapping task.
Under this special condition, the accumulated interest of the CM method can be calculated by Equation 2 with the prior $P\left( t\right) = 1/N$ , that is: + +$$ +{I}_{i}\left( t\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot P\left( {t \mid {s}_{i}}\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot \frac{P\left( {{s}_{i} \mid t}\right) }{\mathop{\sum }\limits_{{j = 1}}^{N}P\left( {{s}_{i} \mid {t}_{j}}\right) }, \tag{6} +$$ + +where $P\left( {{s}_{i} \mid t}\right)$ is calculated by Equation 5. Therefore, we view BayesGaze as an improvement over the CM method that updates and incorporates the prior in the target selection process. The CM method is also very similar to the previously proposed Fractional Mapping method $\left\lbrack {{40},{47}}\right\rbrack$ . We later compare BayesGaze with the CM method to examine to what degree incorporating the prior can improve gaze target selection performance. + +In order to successfully apply the BayesGaze algorithm, we need to obtain the values of three parameters, denoted as a 3-tuple $\left\lbrack {k,\sigma ,\theta }\right\rbrack$ , where $k$ is part of the prior probability model (Equation 4), $\sigma$ is part of the likelihood model (Equation 5), and $\theta$ is the threshold of the accumulated interest for committing a selection. We carried out a study to collect gaze data for target selection and determined the optimal parameter values from that data. + +## 4 Parameter Determination + +We adopted a data-driven simulation approach to search for the optimal parameter values for the BayesGaze algorithm. The procedure consists of two phases. In Phase 1, we carried out a Wizard-of-Oz study to collect gaze input data for selecting a target. In Phase 2, we fed the collected data to the BayesGaze algorithm to search for the optimal parameter values. We also searched for the optimal parameters for the Dwell method (Equation 1) and the CM method (Equation 6).
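Phase 2 below replays recorded trajectories through each method's accumulation rule. As a reference point, here is a minimal sketch of the BayesGaze rule (Equation 2 with the posterior of Equation 3) for the 1D bar-selection task; the function name, default parameters, and example geometry are ours, chosen only for illustration:

```python
import math

def bayes_gaze_select(trajectory, centers, prior, sigma=0.28, theta=0.9, dt=1.0 / 60.0):
    """Accumulate posterior-weighted interest per target (Equations 2-3)
    over a 1D gaze trajectory. Returns (selected index, selection time),
    or (None, None) if no target reaches the threshold theta. The Gaussian
    normalizing constant is omitted because it cancels in Equation 3."""
    interest = [0.0] * len(centers)
    for i, s in enumerate(trajectory, start=1):
        lik = [math.exp(-((s - c) ** 2) / (2.0 * sigma ** 2)) for c in centers]
        evidence = sum(l * p for l, p in zip(lik, prior))
        for j in range(len(centers)):
            interest[j] += dt * lik[j] * prior[j] / evidence  # Equation 2
            if interest[j] >= theta:
                return j, i * dt  # target j selected at time i * dt
    return None, None

# Five stacked 2 cm bars (centers in cm); a gaze held at the center of
# the first bar accumulates interest until that bar is selected:
centers = [0.0, 2.0, 4.0, 6.0, 8.0]
selected, t_sel = bayes_gaze_select([0.0] * 60, centers, prior=[0.2] * 5)
assert selected == 0
```

With the prior fixed at $1/N$ , as here, this reduces to the CM accumulation of Equation 6; feeding in the updated categorical prior of Equation 4 instead gives the full BayesGaze behavior.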
+ +### 4.1 Phase 1: Collecting Gaze Input Data via a Wizard-of-Oz Study + +We first carried out a Wizard-of-Oz study to collect gaze input data for selecting a target. We focused on a 1-dimensional target selection task, where the target is a horizontal bar and gaze motion is vertical. We picked this task because 1-dimensional pointing is a typical target selection task, and horizontal bars are widely used UI elements on mobile computing devices such as smartphones and tablets. + +#### 4.1.1 Participants + +Twelve users (4 female) between 23 and 31 years old (average ${27.25} \pm {2.22}$ ) participated in the experiment. All of them had normal or corrected-to-normal sight and none of them was color blind. None of them had experience with gaze tracking devices or applications. + +#### 4.1.2 Apparatus + +We used the 11-in iPad Pro for gaze tracking and to run the experiment because it is widely and conveniently accessible. The gaze tracking was implemented based on the iPad's TrueDepth camera and Apple's ARKit library, and the sampling rate was 60 Hz. Specifically, we used the leftEyeTransform and rightEyeTransform provided by the ARKit library and performed a hitTestWithSegment call to obtain the raw gaze position. Based on the recommendation of [11], we used the Outlier Correction filter with a triangle kernel [21] to obtain smooth gaze tracking. The filter contains a saccade/fixation detection module so that it can apply sliding windows of different lengths separately for saccades and fixations. The thresholds for the $x$ and $y$ axes to detect a saccade were both set to ${0.5}^{ \circ }$ (calculated based on the estimated face-screen distance). For fixations, the sliding window size of the filter was set to 40 as suggested by [11]. For saccades, the sliding window size of the filter was set to 10, rather than using the raw position directly, to increase gaze tracking stability.
We also followed the findings of previous works $\left\lbrack {{24},{38},{41}}\right\rbrack$ by allowing head movements to improve target selection performance. To test gaze accuracy, we used a gazing task where the user gazes at 40 different points on the screen, with a cursor showing where the user is looking. The result showed a mean error of ${0.67}^{ \circ }$ with a standard deviation of ${0.85}^{ \circ }$ , which suggests that users can accurately control their gaze to select targets. + +#### 4.1.3 Procedure + +During the experiment the participant sat in front of a desk where an iPad Pro running the experiment was placed on a phone holder. The participant could freely adjust the iPad position, and was instructed to keep the distance between their eyes and the iPad at around 40 cm. + +The study included multiple target selection trials. In each trial, a horizontal bar in blue was displayed on the screen as the target and the participant was instructed to select it via gaze input. Fig. 2 shows the setup. Before each trial, the participant first moved the gaze-controlled cursor into the starting gray bar. After 3 seconds, the starting bar turned green, signaling the start of the trial. The participant was then instructed to move the cursor with their gaze to select a target of width $W$ at a distance $D$ from the starting bar. We collected gaze input data for 5 seconds after a trial started. We assumed that 5 seconds was long enough for the participants to select a target. Each participant took a break after every 15 trials. In total the experiment lasted around 15 minutes per participant.
+ +We adopted a within-participant $3 \times 4 \times 2$ design with three levels of target width $W$ : 2 cm ( ${2.86}^{ \circ }$ , calculated based on a participant-screen distance of 40 cm), 3 cm ( ${4.29}^{ \circ }$ ), and 4 cm ( ${5.76}^{ \circ }$ ); four levels of distance $D$ : 6 cm ( ${8.53}^{ \circ }$ ), 8 cm ( ${11.31}^{ \circ }$ ), 10 cm ( ${14.04}^{ \circ }$ ), and 12 cm ( ${16.70}^{ \circ }$ ); and two levels of gaze motion direction: up or down from the starting bar. We counterbalanced the factors by randomizing the trials in the experiment. + +![01963e97-3956-7818-9958-11aa60dc4e7f_3_1062_147_443_765_0.jpg](images/01963e97-3956-7818-9958-11aa60dc4e7f_3_1062_147_443_765_0.jpg) + +Figure 2: A screenshot of the Wizard-of-Oz study. The green button is the starting bar, and the target is shown as a blue bar. A red cursor indicates where the participant is looking. + +In total, the study resulted in 12 participants $\times 3$ target sizes $\times 4$ distances $\times 2$ directions $\times 2$ repetitions = 576 trials. + +### 4.2 Phase 2: Determining Parameters from the Collected Data + +We created a set of gaze-based target selection tasks, simulated gaze input based on the data collected in Phase 1, and searched for the parameter values for BayesGaze, CM, and Dwell that led to high input accuracy and fast input speed. + +#### 4.2.1 Simulating Eye-Gaze Target Selection Tasks + +We first created a set of target selection tasks in which a user is supposed to control their gaze to select a target among $N$ candidates. These $N$ candidates are stacked together with no gap between them to simulate the common vertical list or vertical menu design of mobile devices (e.g., settings menus in iOS).
We included the same 3 target sizes in the simulation as in the data collection study (2, 3, and 4 cm) and set $N = 5$ . The gaze trajectories for selecting a target are obtained from the collected data, according to the target sizes. Fig. 3 shows examples of simulated gaze trajectories for selecting different targets on the screen. + +Since previous research has shown that the distribution of menu items being selected follows Zipf's distribution $\left\lbrack {1,8,{10},{25},{30},{51}}\right\rbrack$ , we assumed that the frequency of each candidate being the target follows Zipf's Law: + +$$ +f\left( {l;\alpha , N}\right) = \frac{1/{l}^{\alpha }}{\mathop{\sum }\limits_{{n = 1}}^{N}\left( {1/{n}^{\alpha }}\right) }, \tag{7} +$$ + +where $N$ is the number of candidate targets (in the simulation, $N = 5$ ), $l \in \{ 1,2,\ldots , N\}$ is the rank of each target, $n$ sums over the ranks, and $\alpha$ is the value of the exponent characterizing the distribution. We included 4 $\alpha$ values (0.5, 1, 2, and 3) in the simulation. + +![01963e97-3956-7818-9958-11aa60dc4e7f_4_353_174_1090_411_0.jpg](images/01963e97-3956-7818-9958-11aa60dc4e7f_4_353_174_1090_411_0.jpg) + +Figure 3: An example of using the same gaze trajectory to simulate selecting a target (the blue one) at different indices among the five horizontal bars. The red lines show the same gaze trajectory collected in the Wizard-of-Oz experiment, and the red dot indicates the start of the trajectory. A simulated user is selecting the 2nd (a), the 3rd (b), and the 4th (c) target among 5 target candidates, with the same gaze trajectory. + +For each target size, we had 192 collected trajectories. Among the $N$ candidates, we randomly assigned the frequencies.
For example, when $N = 5$ and $\alpha = 1$ , the generated frequencies can be [28, 84, 21, 42, 17], which means that the first target among the 5 candidates will be selected 28 times, the second 84 times, etc. We randomly selected trajectories (without repetition) to simulate selecting targets at different indices given the generated frequencies. + +#### 4.2.2 Searching for the Parameter Values + +Given a particular parameter tuple $\left\lbrack {k,\sigma ,\theta }\right\rbrack$ , we ran the BayesGaze algorithm to determine the selected target in the simulated target selection tasks. We viewed the process of searching for the optimal parameter values as an optimization problem: determining the parameter values that optimize target selection performance, measured in terms of success rate and selection time. + +We performed a grid search for the optimal values of $k$ , $\sigma$ and $\theta$ . In the grid search, $k$ ranges from 0.5 to 5 in steps of 0.5; $\sigma$ ranges from 0.14 cm ( ${0.2}^{ \circ }$ ) to 1.4 cm ( ${2}^{ \circ }$ ) in steps of 0.14 cm; and $\theta$ ranges from 0.2 seconds to 2 seconds in steps of 0.1 seconds. The simulation results showed that different values for $k$ do not influence performance. We chose ${k}^{ * } = 1$ , as in [51]. When $k = 1$ , the Dirichlet prior of the categorical distribution, without observing any selection results, becomes a uniform distribution, i.e., an equally distributed prior. The best values of $\sigma$ ranged from 0.28 cm to 0.56 cm for BayesGaze. We chose ${\sigma }^{ * } = {0.28}$ cm to reduce the chance of mis-selections. + +Because we want to improve two objectives, success rate and selection time, we adopted a Pareto optimization process to find the optimal $\theta$ . The process generates a set of parameter values, called the Pareto-optimal set or Pareto front.
Each parameter setting in the set is Pareto-optimal, which means that neither of the two metrics (success rate or selection time) can be improved without hurting the other. We plot the Pareto front of BayesGaze in Fig. 4a. We followed the exact same optimization process to search for the optimal parameter values for the CM and Dwell methods, and generated the corresponding Pareto fronts in Fig. 4b and 4c. For the CM method, the parameters are a 2-tuple $\left\lbrack {\sigma ,\theta }\right\rbrack$ , as it does not incorporate the prior into the accumulated interest. For the Dwell method, the parameter is $\theta$ , the threshold for deciding whether a target is selected based on the accumulated selection interest. + +To balance success rate and selection time, we assigned them equal weights. We first normalized the success rate and selection time to the range $\left\lbrack {0,1}\right\rbrack$ . We picked the parameter value ${\theta }^{ * }$ that leads to the best overall score $S$ , which is defined as: + +$$ +S = {0.5} \times \text{SuccessRate} - {0.5} \times \text{SelectionTime,} \tag{8} +$$ + +where SuccessRate and SelectionTime are the normalized values between 0 and 1, according to the highest and lowest values displayed in Fig. 4. The coefficient of SelectionTime is -0.5 because the lower the selection time, the higher the selection performance. The optimal parameters for different $\alpha$ values are the same and are summarized in Table 1.
| Target Selection Method | ${k}^{ * }$ | ${\sigma }^{ * }$ | ${\theta }^{ * }$ |
| --- | --- | --- | --- |
| BayesGaze | 1 | 0.28 cm | 0.9 |
| CM | - | 0.28 cm | 0.9 |
| Dwell | - | - | 0.8 |
+ +Table 1: Optimal parameters (the same for different $\alpha$ in Zipf's Law) selected on the Pareto front for the three target selection methods + +## 5 A TARGET SELECTION EXPERIMENT + +To empirically evaluate BayesGaze, we conducted a 1D gaze-based target selection study using the parameters from the simulations. We included CM and Dwell as baselines in our study because (1) Dwell is a widely adopted target selection method and CM is one of the best-performing algorithms in the literature, and (2) CM can be viewed as BayesGaze without the prior. Including these two methods in the comparison allowed us to evaluate whether BayesGaze improved performance over extant algorithms, and to understand how the two components of BayesGaze (the likelihood function and the prior) contribute to the improvement in target selection performance. + +### 5.1 Participants and Apparatus + +Eighteen adults (5 female) between 24 and 31 years old (average ${27.2} \pm {2.1}$ ) participated in the study. All of them had normal or corrected-to-normal sight and none of them reported being color blind. + +The apparatus was the same as that used in the Wizard-of-Oz study (Section 4.1.2), as was the eye-gaze tracking technology: we used an iPad Pro with a TrueDepth camera, and the eye-gaze tracking was implemented with the ARKit library, as previously described. + +![01963e97-3956-7818-9958-11aa60dc4e7f_5_257_194_1277_324_0.jpg](images/01963e97-3956-7818-9958-11aa60dc4e7f_5_257_194_1277_324_0.jpg) + +Figure 4: The Pareto fronts of different parameter combinations for the 3 target selection methods under $\alpha = 1$ in Zipf's Law. The enlarged dots represent the selected parameter settings for the three methods, respectively. These settings have the most balanced performance according to Equation 8. + +### 5.2 Design + +We adopted a $3 \times 2 \times 2$ within-participant design.
The three independent variables were: (1) the target selection method with 3 levels (BayesGaze, CM, Dwell), (2) the target size with 2 levels ($1\mathrm{\;{cm}}$ or ${1.43}^{ \circ }$, and $2\mathrm{\;{cm}}$ or ${2.86}^{ \circ }$), and (3) the $\alpha$ value of the Zipf distribution with 2 levels ($\alpha = 1$ and $\alpha = 2$). The Zipf distribution controls how the intended targets are distributed among the candidates. + +For each selection method $\times$ target size $\times$ Zipf’s law $\alpha$ combination, each participant performed 24 trials. When $\alpha = 1$, the frequencies of the 5 target candidates being the intended targets were 11, 5, 4, 3, 1; when $\alpha = 2$, these frequencies were 16, 4, 2, 1, 1. We included two $\alpha$ values to evaluate whether the skewness of the target distribution affects selection performance. Among a set of 24 trials, the distance between the target and the starting bar was either $4\mathrm{\;{cm}}$ or $5\mathrm{\;{cm}}$ with ${50}\%$ probability for each distance, and the target was either above or below the starting bar, also with ${50}\%$ probability for each option. + +![01963e97-3956-7818-9958-11aa60dc4e7f_5_289_1265_438_765_0.jpg](images/01963e97-3956-7818-9958-11aa60dc4e7f_5_289_1265_438_765_0.jpg) + +Figure 5: The controlled 1D gaze target selection experiment + +### 5.3 Procedure + +For each trial, the participant was instructed to select one of the five adjacent horizontal bars displayed on the iPad screen via eye-gaze. The tracked gaze position was rendered as a cross-hair cursor on the display, as shown in Figure 5. The target to be selected was shown in blue and the other targets in cyan. A starting bar was also displayed, which served as the starting position for the gaze input. Prior to starting a trial, the participant was asked to move the cursor into the starting bar, which was initially displayed in gray. The bar turned green after three seconds, signaling the start of a trial.
The participant then moved the cursor to select the target bar on the screen. The selected target then turned dark. If the participant selected the wrong target, or did not select any target within 5 seconds of the beginning of the trial, the trial was considered a miss. The participant moved to the next trial regardless of the outcome. To alleviate eye fatigue, the participant was allowed to take a break of no longer than 2 minutes every 15 trials. Fig. 5 shows a screenshot of the experiment and a participant performing a trial. + +After each trial, BayesGaze updated the prior probability for each target candidate. We assumed that each condition corresponds to a particular interface, and when the experimental condition changed (e.g., target size, or $\alpha$ value in Zipf’s distribution), we reset all the prior information. + +The participants were instructed to select the target as accurately and quickly as possible. At the end of the study, participants were asked to rate their preference for the three methods on a scale of 1 to 5 (1: dislike, 5: like very much). They also answered a subset of NASA-TLX [12] questions to measure the workload of the gaze target selection task, covering mental and physical demand. The workload ratings ranged from 1 to 10, from least to most demanding. The experiment lasted about 50 minutes. + +To counterbalance the independent variables, the methods were fully balanced based on all 6 possible orders. For half of the participants, $\alpha$ was set to 1 for the first half of the trials and to 2 for the other half; for the other half of the participants, the order was reversed. Other factors were randomized. In total, we collected 18 users $\times 3$ methods $\times 2$ target sizes $\times 2$ $\alpha$ values $\times {24}$ trials $= {5184}$ trials. + +### 5.4 Results + +We evaluate the performance of BayesGaze, CM, and Dwell by success rate and selection time.
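For reference, the Zipf-distributed trial frequencies used in the design (Section 5.2) follow directly from Zipf's law. The sketch below is illustrative, not from the paper: it assumes the frequency of the $k$-th most popular target is proportional to $1/k^{\alpha}$, scaled to the 24 trials per condition and rounded to integers.

```python
# Illustrative sketch (not the authors' code): expected Zipf trial counts.
# Assumption: frequency of the k-th target is proportional to 1 / k**alpha.

def zipf_counts(n_targets: int, alpha: float, n_trials: int) -> list[float]:
    """Expected number of trials per target under Zipf's law."""
    weights = [1.0 / (k ** alpha) for k in range(1, n_targets + 1)]
    total = sum(weights)
    return [n_trials * w / total for w in weights]

# alpha = 2: expected counts are roughly [16.4, 4.1, 1.8, 1.0, 0.7],
# which round to the paper's 16, 4, 2, 1, 1 (sum = 24).
print([round(c) for c in zipf_counts(5, 2, 24)])
```

For $\alpha = 1$, simple rounding gives 11, 5, 4, 3, 2 (sum 25); the paper's 11, 5, 4, 3, 1 presumably adjusts the least frequent target so the counts sum to exactly 24.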
+ +#### 5.4.1 Success Rate + +The success rate measures the ratio of correct selections over the total number of trials. The results (Fig. 6a) show that: 1) BayesGaze always has the highest success rate and Dwell the lowest, which confirms the effectiveness of the Bayesian approach and the benefit of using the prior. 2) Large targets (2 cm) have a higher success rate than small targets (1 cm), because it is much easier to move one's gaze into a large target. + +
| Target Selection Method | $\alpha = 1$: 11 | 5 | 4 | 3 | 1 | $\alpha = 2$: 16 | 4 | 2 | 1 | 1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BayesGaze | 88.1 | 86.1 | 88.9 | 84.3 | 77.8 | 90.6 | 87.5 | 90.3 | 88.9 | 83.3 |
| CM | 85.6 | 82.8 | 84.0 | 86.1 | 86.1 | 85.9 | 87.5 | 93.1 | 88.9 | 88.9 |
| Dwell | 83.1 | 85.6 | **75.7** | 84.3 | 88.9 | 79.9 | 85.4 | 83.3 | 86.1 | 83.3 |
+ +Table 2: The success rate (%) for different target selection frequencies; the first five columns give the target frequencies when $\alpha = 1$ and the last five those when $\alpha = 2$ (the lowest success rate is marked in bold) + +A repeated measures ANOVA on success rate shows two significant main effects: target selection method $\left( {{F}_{2,{34}} = {11.45}, p < {0.001}}\right)$ and target size $\left( {{F}_{1,{17}} = {30.76}, p < {0.001}}\right)$. The test does not show a significant main effect of Zipf’s Law’s $\alpha$ $\left( {{F}_{1,{17}} = {1.722}, p = {0.207}}\right)$. There is no significant interaction effect. Pairwise comparisons with Holm adjustment [13] on the success rate show significant differences between BayesGaze vs. Dwell $\left( {p < {0.01}}\right)$, CM vs. Dwell $\left( {p < {0.05}}\right)$, and BayesGaze vs. CM $\left( {p < {0.05}}\right)$. + +The overall mean $\pm {95}\%$ confidence interval (CI) of success rate across all target sizes and $\alpha$ values is ${88.3}\% \pm {3.6}$ for BayesGaze, ${85.9}\% \pm {4.3}$ for CM, and ${82.1}\% \pm {5.2}$ for Dwell. In total, BayesGaze improves the success rate by ${6.2}\%$ over Dwell, and by ${2.4}\%$ over CM. + +![01963e97-3956-7818-9958-11aa60dc4e7f_6_187_944_645_916_0.jpg](images/01963e97-3956-7818-9958-11aa60dc4e7f_6_187_944_645_916_0.jpg) + +(b) Decomposition of the error rate for target size $\times$ Zipf’s Law’s $\alpha$ + +Figure 6: The average success rate with ${95}\%$ CI and the decomposition of the error rate (Mis-Selection (MS) and Non-Selection (NS)) + +In addition to the success rate, we also examine the error rate, which measures the ratio of cases in which the right target is not selected. There are two types of errors: (1) Mis-Selection (MS), where a wrong target is selected, and (2) Non-Selection (NS), where no target is selected. We examine the rates of these two types of errors separately. Fig. 6b shows the decomposition of the error rate.
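The Holm step-down adjustment [13] used for the pairwise comparisons above is simple to reproduce. A minimal sketch (the `holm_adjust` helper is illustrative, not from the paper):

```python
def holm_adjust(pvals):
    """Holm (1979) step-down adjusted p-values for m pairwise comparisons.

    The smallest raw p-value is multiplied by m, the next by m - 1, and so
    on; the adjusted values are forced to be non-decreasing in that order.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        candidate = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, candidate)     # enforce monotonicity
        adjusted[i] = running_max
    return adjusted

# Three pairwise comparisons, as in the method-vs-method tests above
# (raw p-values are made up for illustration):
print(holm_adjust([0.004, 0.030, 0.012]))
```

Significance is then declared whenever an adjusted p-value falls below the nominal level (here 0.05).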
The major part of the error rate of BayesGaze and CM comes from mis-selection, and the same holds for Dwell when the target size is $2\mathrm{\;{cm}}$. However, when the target size is $1\mathrm{\;{cm}}$, Dwell suffers from not selecting any target. The result implies that using a Bayesian framework can alleviate the problem of not being able to select a target. + +With BayesGaze, a potential side effect of incorporating the prior might be that less frequent targets are more difficult to select. Table 2 shows the success rates by target frequency. Although the success rates for items with a frequency of 1 are lower than for the high-frequency items, they are still near ${80}\%$. A repeated measures ANOVA does not show significant main effects of frequency on success rate for BayesGaze $\left( {{F}_{9,{153}} = {0.776}, p = {0.639}}\right)$, CM $\left( {{F}_{9,{153}} = {1.248}, p = {0.27}}\right)$, or Dwell $\left( {{F}_{9,{153}} = {0.669}, p = {0.736}}\right)$, indicating that this potential side effect is minor. + +#### 5.4.2 Selection Time + +Fig. 7 shows the results for selection time, which measures the time to select the target from the start of the trial. As with the success rate, we observe that: 1) BayesGaze has the lowest selection time, and Dwell the longest; 2) Small targets (1 cm) take longer to select than large ones (2 cm), especially for Dwell. + +![01963e97-3956-7818-9958-11aa60dc4e7f_6_971_1286_636_409_0.jpg](images/01963e97-3956-7818-9958-11aa60dc4e7f_6_971_1286_636_409_0.jpg) + +Figure 7: The average selection time (with ${95}\%$ CI) by target size $\times$ Zipf’s Law’s $\alpha$ + +A repeated measures ANOVA on selection time shows two significant main effects: target selection method $\left( {{F}_{2,{34}} = {21.19}, p < {0.001}}\right)$ and target size $\left( {{F}_{1,{17}} = {116.9}, p < {0.001}}\right)$. The test does not show a significant main effect of Zipf’s Law’s $\alpha$ $\left( {{F}_{1,{17}} = {1.685}, p = {0.212}}\right)$.
The only significant interaction effect is target size $\times$ target selection method $\left( {{F}_{2,{34}} = {31.81}, p < {0.001}}\right)$. Pairwise comparisons with Holm adjustment on selection time show significant differences for BayesGaze vs. Dwell $\left( {p < {0.001}}\right)$ and CM vs. Dwell $\left( {p < {0.01}}\right)$. The pairwise comparisons do not show a significant difference for BayesGaze vs. CM $\left( {p = {0.09}}\right)$. + +The overall mean $\pm {95}\%$ CI selection time across all target sizes and $\alpha$ values is ${2.23} \pm {0.15}$ seconds for BayesGaze, ${2.30} \pm {0.15}$ seconds for CM, and ${2.49} \pm {0.18}$ seconds for Dwell. In total, BayesGaze saves ${10.4}\%$ of the selection time over Dwell, and 3% over CM. + +#### 5.4.3 Subjective Feedback + +The results of the subjective feedback are shown in Fig. 8. For overall preference, the median ratings for BayesGaze, CM and Dwell are 4, 3.5 and 3, respectively. BayesGaze has the highest median rating. For mental and physical demand, the medians are 6.5 and 5.5 for BayesGaze, 6 and 6 for CM, and 7.5 and 7.5 for Dwell. Nonparametric Friedman tests do not show significant main effects of selection method on the three metrics: overall preference $\left( {{X}_{r}^{2}\left( 2\right) = {1.11}, p = {0.57}}\right)$, physical demand $\left( {{X}_{r}^{2}\left( 2\right) = {2.93}, p = {0.085}}\right)$, and mental demand $\left( {{X}_{r}^{2}\left( 2\right) = {5.24}, p = {0.073}}\right)$. The $p$ values for physical and mental demand approach statistical significance. + +![01963e97-3956-7818-9958-11aa60dc4e7f_7_210_718_569_384_0.jpg](images/01963e97-3956-7818-9958-11aa60dc4e7f_7_210_718_569_384_0.jpg) + +Figure 8: The median subjective ratings of overall preference, mental demand and physical demand. For overall preference, higher ratings are better. For mental and physical demand, lower ratings are better. + +### 5.5 Discussion + +Performance.
The experiment results show that BayesGaze outperformed both the Dwell and CM methods in both selection accuracy and speed. BayesGaze improved the success rate over Dwell from 82.1% to 88.3%, i.e., a 6.2% increase, and reduced the selection time from 2.49 seconds to 2.23 seconds, i.e., a 10.4% reduction. BayesGaze also improved the success rate over CM by ${2.4}\%$, and reduced the selection time by $3\%$. Pairwise comparisons with Holm adjustment showed all these differences to be significant $\left( {p < {0.05}}\right)$, except for the selection time between BayesGaze vs. CM $\left( {p = {0.09}}\right)$. + +The promising performance of BayesGaze first shows that incorporating the prior significantly improves target selection performance. Compared with CM, which can be viewed as BayesGaze without the prior, BayesGaze performed better in both accuracy and speed across all conditions. This suggests that incorporating the prior distribution of targets is effective in improving the performance of gaze-based target selection tasks. Second, both BayesGaze and CM outperformed Dwell, indicating that accumulating the interest, which is represented by the posterior in BayesGaze and by the likelihood in CM, is also effective for gaze-based target selection. + +Prior. Incorporating the prior might make less frequent targets more difficult to select, although we did not observe this in our experiment, as shown in Table 2. There are several ways to prevent this potential problem: (1) Set a lower bound on the target frequency so that no target becomes hard to select. (2) In real-world applications, leverage user actions to address the problem. For example, if the previous selection is incorrect (a back/cancel action is performed immediately), reduce the probability of the incorrect target. (3) As we do in this paper, use a small $\sigma$ for the likelihood model in order to decrease interference between neighboring targets. + +Midas-Touch Problem.
Our method can also work with existing approaches to solve the Midas-Touch problem in gaze target selection. For example: (1) We can use methods like [5, 43] to infer whether a user is reading content on the UI or controlling their gaze to select a target. These methods classify gaze positions into a content-reading phase and a target-selection phase. BayesGaze can discard the gaze positions in the content-reading phase, and use only the gaze positions in the target-selection phase to decide the target. (2) We can increase the threshold of accumulated posterior required for selection to mitigate the Midas-Touch problem. Reading content on the UI tends to take a shorter period of time than controlling gaze to select a target. Increasing the threshold could prevent falsely activating a selection, and the actual threshold should be set based on the specific scenario. This approach is also adopted by dwell-based methods (e.g., [28]) to mitigate the Midas-Touch problem. + +Target Dimension. This paper considers 1D targets to show that Bayes' theorem can be adopted to improve the performance of gaze-based target selection. In real applications, there are many linear menus on computers and smartphones where our method can be directly applied. However, the underlying principle (Eq. 2 - 5) is not tied to a specific type of target and works for both 1D and 2D targets. The main difference between 1D and 2D target selection lies in the likelihood function (Eq. 5). For 1D targets, we adopted a 1D Gaussian; for 2D targets it should be replaced by a 2D Gaussian distribution. The rest of the method, including updating priors, accumulating the weighted posterior, and using Pareto optimization to balance accuracy and selection time, remains the same. + +Scalability. BayesGaze uses a gaze position buffer to store the gaze trajectory and empties it after each selection action. Our study (Fig.
7) shows that most selections happen within 3 seconds, so only a small amount of memory is needed to store the gaze data. In real-world applications, we may set a rolling window with a size of 3 seconds to store gaze positions. BayesGaze can then scale up and handle long gaze-based input. + +## 6 CONCLUSION + +In this paper, we introduced BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in a gaze trajectory as a signal for selecting a target, uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by the sampling interval over all sampling points to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution with a Dirichlet prior. Our controlled experiment showed that BayesGaze improves target selection accuracy from 82.1% to 88.3% and reduces selection time from 2.49 seconds per selection to 2.23 seconds compared with the widely adopted dwell-based selection method. It also improves selection accuracy and selection time over the CM method [4] (85.9%, 2.30 seconds per selection), a high-performance gaze target selection algorithm. Overall, our research shows that both incorporating the prior and accumulating the posterior are effective in improving the performance of gaze-based target selection. + +## REFERENCES + +[1] C. Appert and S. Zhai. Using strokes as command shortcuts: cognitive benefits and toolkit support. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2289-2298, 2009. + +[2] M. Ashmore, A. T. Duchowski, and G. Shoemaker. Efficient eye pointing with a fisheye lens. In Proceedings of Graphics Interface 2005, pp. 203-210. Citeseer, 2005. + +[3] M. Barz, F. Daiber, D. Sonntag, and A. Bulling. Error-aware gaze-based interfaces for robust mobile gaze interaction.
In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pp. 1-10, 2018. + +[4] M. Bernhard, E. Stavrakis, M. Hecher, and M. Wimmer. Gaze-to-object mapping during visual search in 3D virtual environments. ACM Transactions on Applied Perception (TAP), 11(3):1-17, 2014. + +[5] P. Biswas and P. Langdon. A new interaction technique involving eye gaze tracker and scanning system. In Proceedings of the 2013 conference on eye tracking South Africa, pp. 67-70, 2013. + +[6] D. Buschek and F. Alt. Touchml: A machine learning toolkit for modelling spatial touch targeting behaviour. In Proceedings of the 20th International Conference on Intelligent User Interfaces, pp. 110-114, 2015. + +[7] I. Chatterjee, R. Xiao, and C. Harrison. Gaze+ gesture: Expressive, precise and targeted free-space interactions. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 131-138, 2015. + +[8] A. Cockburn, C. Gutwin, and S. Greenberg. A predictive model of menu performance. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 627-636, 2007. + +[9] A. De Luca, R. Weiss, and H. Drewes. Evaluation of eye-gaze interaction methods for security enhanced pin-entry. In Proceedings of the 19th Australasian conference on computer-human interaction: Entertaining user interfaces, pp. 199-202, 2007. + +[10] S. R. Ellis and R. J. Hitchcock. The emergence of Zipf's law: Spontaneous encoding optimization by users of a command language. IEEE Transactions on Systems, Man, and Cybernetics, 16(3):423-427, 1986. + +[11] A. M. Feit, S. Williams, A. Toledo, A. Paradiso, H. Kulkarni, S. Kane, and M. R. Morris. Toward everyday gaze input: Accuracy and precision of eye tracking and implications for design. In Proceedings of the 2017 CHI conference on human factors in computing systems, pp. 1118-1130, 2017. + +[12] S. G. Hart and L. E. Staveland.
Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, vol. 52, pp. 139-183. Elsevier, 1988. + +[13] S. Holm. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, pp. 65-70, 1979. + +[14] M. X. Huang, J. Li, G. Ngai, and H. V. Leong. Screenglint: Practical, in-situ gaze estimation on smartphones. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2546-2557, 2017. + +[15] T. Isomoto, T. Ando, B. Shizuki, and S. Takahashi. Dwell time reduction technique using Fitts' law for gaze-based target acquisition. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pp. 1-7, 2018. + +[16] H. Istance, A. Hyrskykari, L. Immonen, S. Mansikkamaa, and S. Vickers. Designing gaze gestures for gaming: an investigation of performance. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, pp. 323-330, 2010. + +[17] R. J. Jacob. The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Transactions on Information Systems (TOIS), 9(2):152-169, 1991. + +[18] R. J. Jacob and K. S. Karn. Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In The Mind's Eye, pp. 573-605. Elsevier, 2003. + +[19] L. Jigang, B. S. L. Francis, and D. Rajan. Free-head appearance-based eye gaze estimation on mobile devices. In 2019 International Conference on Artificial Intelligence in Information and Communication (ICAHC), pp. 232-237. IEEE, 2019. + +[20] C. Kumar, R. Hedeshy, I. S. MacKenzie, and S. Staab. Tagswipe: Touch assisted gaze swipe for text entry. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2020. + +[21] M. Kumar, J. Klingner, R. Puranik, T. Winograd, and A. Paepcke. Improving the accuracy of gaze input for interaction.
In Proceedings of the 2008 symposium on Eye tracking research & applications, pp. 65-68, 2008. + +[22] M. Kumar, A. Paepcke, and T. Winograd. Eyepoint: practical pointing and selection using gaze and keyboard. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 421-430, 2007. + +[23] G. H. Kütt, K. Lee, E. Hardacre, and A. Papoutsaki. Eye-write: Gaze sharing for collaborative writing. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019. + +[24] M. Kytö, B. Ens, T. Piumsomboon, G. A. Lee, and M. Billinghurst. Pinpointing: Precise head- and eye-based target selection for augmented reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2018. + +[25] W. Liu, G. Bailly, and A. Howes. Effects of frequency distribution on linear menu performance. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 1307-1312, 2017. + +[26] C. Lutteroth, M. Penkar, and G. Weber. Gaze vs. mouse: A fast and accurate gaze-only click alternative. In Proceedings of the 28th annual ACM symposium on user interface software & technology, pp. 385-394, 2015. + +[27] P. Majaranta, U.-K. Ahola, and O. Špakov. Fast gaze typing with an adjustable dwell time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 357-360, 2009. + +[28] P. Majaranta and K.-J. Räihä. Text entry by gaze: Utilizing eye-tracking. Text entry systems: Mobility, accessibility, universality, pp. 175-187, 2007. + +[29] D. Miniotas, O. Špakov, and I. S. MacKenzie. Eye gaze interaction with expanding targets. In CHI'04 extended abstracts on Human factors in computing systems, pp. 1255-1258, 2004. + +[30] A. Morrison, X. Xiong, M. Higgs, M. Bell, and M. Chalmers. A large-scale study of iPhone app launch behaviour. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2018. + +[31] M. E. Mott, S. Williams, J. O.
Wobbrock, and M. R. Morris. Improving dwell-based gaze typing with dynamic, cascading dwell times. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2558-2570, 2017. + +[32] A. Nayyar, U. Dwivedi, K. Ahuja, N. Rajput, S. Nagar, and K. Dey. Optidwell: intelligent adjustment of dwell click time. In Proceedings of the 22nd international conference on intelligent user interfaces, pp. 193-204, 2017. + +[33] M. Parisay, C. Poullis, and M. Kersten-Oertel. Felix: Fixation-based eye fatigue load index, a multi-factor measure for gaze-based interactions. In 2020 13th International Conference on Human System Interaction (HSI), pp. 74-81. IEEE, 2020. + +[34] J. Pi and B. E. Shi. Probabilistic adjustment of dwell time for eye typing. In 2017 10th International Conference on Human System Interactions (HSI), pp. 251-257. IEEE, 2017. + +[35] K.-J. Räihä and S. Ovaska. An exploratory study of eye typing fundamentals: dwell time, text entry rate, errors, and workload. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 3001-3010, 2012. + +[36] D. Rozado, T. Moreno, J. San Agustin, F. Rodriguez, and P. Varona. Controlling a smartphone using gaze gestures as the input mechanism. Human-Computer Interaction, 30(1):34-63, 2015. + +[37] I. Schuetz, T. S. Murdison, K. J. MacKenzie, and M. Zannoli. An explanation of Fitts' law-like performance in gaze-based selection tasks using a psychophysics approach. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2019. + +[38] L. Sidenmark and H. Gellersen. Eye&head: Synergetic eye and head movement for gaze pointing and selection. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, pp. 1161-1174, 2019. + +[39] H. Skovsgaard, J. C. Mateo, J. M. Flach, and J. P. Hansen. Small-target selection with gaze alone. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, pp. 145-148, 2010.
+ +[40] O. Špakov. Comparison of gaze-to-objects mapping algorithms. In Proceedings of the 1st Conference on Novel Gaze-Controlled Applications, pp. 1-8, 2011. + +[41] O. Špakov and P. Majaranta. Enhanced gaze interaction using simple head gestures. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pp. 705-710, 2012. + +[42] S. Stellmach and R. Dachselt. Look & touch: gaze-supported target acquisition. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 2981-2990, 2012. + +[43] B. B. Velichkovsky, M. A. Rumyantsev, and M. A. Morozov. New solution to the Midas touch problem: Identification of visual commands via extraction of focal fixations. Procedia Computer Science, 39:75-82, 2014. + +[44] E. Velloso, M. Carter, J. Newn, A. Esteves, C. Clarke, and H. Gellersen. Motion correlation: Selecting objects by matching their movement. ACM Transactions on Computer-Human Interaction (TOCHI), 24(3):1-35, 2017. + +[45] D. Weir, S. Rogers, R. Murray-Smith, and M. Löchtefeld. A user-specific machine learning approach for improving touch accuracy on mobile devices. In Proceedings of the 25th annual ACM symposium on User interface software and technology, pp. 465-476, 2012. + +[46] E. Wood and A. Bulling. Eyetab: Model-based gaze estimation on unmodified tablet computers. In Proceedings of the Symposium on Eye Tracking Research and Applications, pp. 207-210, 2014. + +[47] S. Xu, H. Jiang, and F. C. Lau. Personalized online document, image and video recommendation via commodity eye-tracking. In Proceedings of the 2008 ACM conference on Recommender systems, pp. 83-90, 2008. + +[48] X. Zhang, M. X. Huang, Y. Sugano, and A. Bulling. Training person-specific gaze estimators from user interactions with multiple devices. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2018. + +[49] X. Zhang, X. Ren, and H. Zha. Improving eye cursor's stability for eye pointing tasks.
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 525-534, 2008. + +[50] X. Zhang, X. Ren, and H. Zha. Modeling dwell-based eye pointing target acquisition. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2083-2092, 2010. + +[51] S. Zhu, Y. Kim, J. Zheng, J. Y. Luo, R. Qin, L. Wang, X. Fan, F. Tian, and X. Bi. Using Bayes' theorem for command input: Principle, models, and applications. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-15, 2020. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/r6Z8apiZQt/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/r6Z8apiZQt/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..9be4b5742968b4b081d0e95f913fd2705ce6ba33 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/r6Z8apiZQt/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,358 @@ +§ BAYESGAZE: A BAYESIAN APPROACH TO EYE-GAZE BASED TARGET SELECTION + +Category: Research + +§ ABSTRACT + +Selecting targets accurately and quickly with eye-gaze input remains an open research question. In this paper, we introduce BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in an eye-gaze trajectory as a signal for selecting a target. It then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by the sampling interval to determine the selected target.
The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution. Our investigation shows that BayesGaze improves target selection accuracy and speed over both a dwell-based selection method and the Center of Gravity Mapping (CM) method [4]. Our research shows that both accumulating the posterior and incorporating the prior are effective in improving the performance of eye-gaze based target selection. + +Index Terms: Human-centered computing-Human computer interaction (HCI); Human-centered computing-Human computer interaction (HCI)—Interaction techniques; Human-centered computing-Human computer interaction (HCI)-HCI design and evaluation methods-User studies; + +§ 1 INTRODUCTION + +Selecting a target with the gaze remains a central problem of eye-based interaction. Two factors make this problem challenging [18]. First, gaze input is noisy because of both inadvertent eye movements and inevitable noise in the tracking device [49]. Therefore, it is difficult for a user to move their gaze to a particular position and stabilize it for an extended period of time. Second, unlike using a mouse, where a user can confirm the selection by clicking a button, gaze-based interaction lacks an easy-to-use approach to confirming the selection, adding a layer of difficulty to the design of a selection technique [42]. Although previous research has explored target selection using dwell $\left\lbrack {{15},{17}}\right\rbrack$, motion correlation [44] and dynamic user interfaces $\left\lbrack {{26},{29},{39}}\right\rbrack$, quickly and accurately selecting a target with gaze input remains an open research question. + +Inspired by the literature showing that Bayes' theorem is a promising principle for handling uncertainty and noise in input signals (e.g., $\left\lbrack {4,{51}}\right\rbrack$), we investigate how to apply a Bayesian perspective to determining the selected target given a gaze trajectory.
Applying Bayes' theorem to gaze-based target selection raises two main challenges. First, it is not clear how to obtain the likelihood function for a gaze trajectory that contains a sequence of input signals (gaze points), i.e. the probability of observing a gaze trajectory given the target. Second, unlike touch or mouse input, which has a clear definition of the terminal moment of the input, e.g. lifting the finger from the touchscreen or releasing the mouse button, gaze input lacks a clear delimiter of the completion of a selection action. It is therefore necessary to design a method to determine when the selection action is completed. + +To address these challenges, we introduce BayesGaze (Figure 1), a Bayesian approach for determining the selected target given a gaze trajectory. This approach first views each sampling point in a gaze trajectory as a signal for selecting a target, and then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point. The likelihood of a target being selected is based on the distance between the sampling point and the target center, and the prior probability of a target being selected is modeled by a categorical distribution and updated after a selection action. BayesGaze then accumulates the posterior probabilities over all sampling points, weighted by the sampling interval, to determine the selected target. BayesGaze advances the Center of Gravity Mapping (CM) [4] by modeling the prior and incorporating it into the process of determining the selected target. This contribution is key to improving the performance of gaze-based target selection. + +Figure 1: An overview of how BayesGaze works.
Given a gaze position ${s}_{i}$ sampled at time $i$ in a gaze trajectory, BayesGaze updates the accumulated interest of selecting target $t$, denoted by ${I}_{i}\left( t\right)$, by adding $P\left( {t \mid {s}_{i}}\right)$ weighted by the sampling interval ${\Delta \tau }$ to ${I}_{i - 1}\left( t\right)$. $P\left( {t \mid {s}_{i}}\right)$ is the posterior probability of selecting $t$ given ${s}_{i}$, which is calculated based on Bayes’ theorem. If the accumulated interest ${I}_{i}\left( t\right)$ exceeds a threshold $\theta$, the target $t$ is selected. BayesGaze then updates the prior probability $P\left( t\right)$ accordingly. + +We report on a controlled experiment showing that BayesGaze improves target selection accuracy (from 82.1% to 88.3%) and speed (from 2.49 seconds per selection to 2.23 seconds) over a dwell-based selection method. BayesGaze also outperforms the CM method [4]. Overall, our investigation shows that accumulating the posterior probability and incorporating the prior are effective in improving the performance of gaze-based target selection. + +§ 2 RELATED WORK + +BayesGaze builds on previous work on gaze input and on Bayesian approaches. Here we review related work on gaze-based target selection techniques, Bayesian approaches to gaze input, and gaze-tracking technology. + +§ 2.1 GAZE-BASED TARGET SELECTION + +Gaze-based target selection is a key technique for supporting a number of gaze interaction technologies such as gaze-based text input [35], gaming [16] or smart device control [36]. Dwell-based target selection (Dwell) $\left\lbrack {{15},{17},{50}}\right\rbrack$ is the most well-known and most widely used target selection method. It requires a user to dwell their gaze on a target for a specific uninterrupted period of time (usually several hundred milliseconds to 1 or 2 seconds) to select it. Such a highly concentrated action often results in eye fatigue [33].
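The accumulation scheme summarized in Figure 1 can be sketched in a few lines. The following is an illustrative sketch, not the authors' implementation: it assumes a 1D Gaussian likelihood centered on each target, a categorical prior with add-one (Dirichlet-style) smoothing over past selection counts, and made-up values for $\sigma$, $\theta$, and the sampling interval ${\Delta \tau }$.

```python
import math

def bayes_gaze(gaze_points, centers, counts, sigma=0.28, theta=0.9, dt=0.1):
    """Return the target whose accumulated interest I(t) first exceeds theta.

    gaze_points: 1D gaze samples; centers: 1D target centers;
    counts: past selection counts used to form the categorical prior.
    All parameter values here are illustrative, not from the paper.
    """
    k = len(centers)
    # Add-one smoothed categorical prior P(t) from past selections.
    prior = [(counts[t] + 1) / (sum(counts) + k) for t in range(k)]
    interest = [0.0] * k
    for s in gaze_points:
        # 1D Gaussian likelihood based on distance to each target center.
        lik = [math.exp(-(s - c) ** 2 / (2 * sigma ** 2)) for c in centers]
        z = sum(l * p for l, p in zip(lik, prior))       # evidence P(s_i)
        for t in range(k):
            # I_i(t) = I_{i-1}(t) + dt * P(t | s_i)  (Bayes' theorem)
            interest[t] += dt * lik[t] * prior[t] / z
        best = max(range(k), key=interest.__getitem__)
        if interest[best] >= theta:
            counts[best] += 1    # feed the selection back into the prior
            return best
    return None                  # no target reached the threshold

# Three 1D targets at 0, 1 and 2 cm; gaze hovers around the middle one.
print(bayes_gaze([1.02, 0.97, 1.01] * 5, [0.0, 1.0, 2.0], [0, 0, 0]))
```

With a uniform prior this reduces to CM-style likelihood accumulation; the feedback step on `counts` is what distinguishes the Bayesian variant.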
Many works have been devoted to improving the Dwell technique by enabling shorter dwell times and to finding other gaze-based target selection methods. For example, letting a user adjust the dwell time manually can lead to a shorter dwell time, from 876 ms to 282 ms [27]. Previous research [15] used Fitts' law to model gaze input and suggested selecting the target once the user's gaze fixates on it. Other works have explored adjusting the dwell time based on how likely the target is to be selected [31, 34].

In addition to dwell-based methods, researchers have proposed alternatives to improve gaze-based target selection from two perspectives: handling noisy gaze input and designing new selection actions [18, 49]. To accommodate the inaccuracy of eye-gaze input, some works used dynamic expansion/zooming of the display [29, 39] or new UIs; e.g. Actigaze [26] used a set of confirmation buttons to make gaze target selection easier. Other works investigated error-aware gaze target selection so that the inaccuracy of target selection can be tracked and the system can provide design guidelines for UIs [3, 11]. Gaze target selection actions are also well explored. For example, motion correlation between the target movement and gaze trajectory has been proposed to determine the selected target [44]. Actions such as blinking [7] and gaze gestures [9] have also been explored for target selection. Previous research has also used multimodal input to avoid the dwelling action. For example, once the user gazes at the target, a separate device, such as a keyboard [22] or hand-held touchscreen [42], can be employed to perform the selection action.

§ 2.2 BAYESIAN APPROACHES TO TARGET SELECTION

There is a growing interest in applying a Bayesian perspective to handle uncertainty in target selection. Some of this research is related to gaze input.
For example, previous research has proposed probabilistic frameworks to deal with uncertainty in the input process, such as handling the uncertainty of touch actions on mobile devices [6, 45] and touchscreens [51], and also handling uncertainty in gaze-based interactions [4, 32].

Our work is related to the recent work BayesianCommand, which uses Bayes' theorem to handle uncertainty in touch target selection and word-gesture input [51]. The fundamental difference between our work and BayesianCommand is that in our work, gaze input does not have well-defined starting or ending moments, but touch input does (i.e., landing a finger on the screen to start input, and lifting the finger off to end it). Therefore, BayesianCommand cannot be applied to gaze input directly.

Our research is also related to previous work on using a Bayesian perspective to address the gaze-to-object mapping problem, i.e. the Center of Gravity Mapping method (CM) [4]. CM is an improved version of the FM algorithm [47], which performed the best among 9 extant gaze-to-object mapping algorithms [40]. The main difference between our work and CM is that CM neither models nor updates the prior, while our approach incorporates the prior into the process of deciding the selected target, which turns out to be the primary reason why BayesGaze improves target selection accuracy and reduces selection time. Furthermore, BayesGaze is designed for the gaze target selection problem while CM is designed for the gaze-to-object mapping problem. Gaze-based target selection is a different problem from gaze-to-object mapping [4, 40] because the former requires a mechanism to commit the selection while the latter does not.

§ 2.3 GAZE TRACKING TECHNOLOGY

Gaze tracking technology is becoming increasingly mature and available.
For example, a number of professional gaze trackers are available, including the Tobii 4C [23], SMI REDn [20], and EyeLink 1000 Plus [37], which cost several hundred to a few thousand dollars. Previous research has also enabled gaze tracking with off-the-shelf cameras by using a fisheye camera [2], the front-facing RGB camera of a tablet [46], or by leveraging the glint of the screen on the user's cornea [14]. Deep learning techniques have also been used to predict gaze position using Convolutional Neural Networks [19, 48].

Unlike the above approaches, we enabled gaze tracking with an off-the-shelf and widely used iPad Pro equipped with a TrueDepth camera and powered by Apple's ARKit.

§ 3 BAYESGAZE: A BAYESIAN PERSPECTIVE ON GAZE TARGET SELECTION

§ 3.1 A FORMAL DESCRIPTION OF THE GAZE BASED TARGET SELECTION PROBLEM

The gaze-based target selection problem can be formally described as the following research question: given a gaze trajectory, which one is the intended target among a set of candidates denoted by $T = \{t_1, t_2, \ldots, t_N\}$?

As shown in previous research [4, 40, 47], existing algorithms for solving the gaze-based target selection problem can be described through an interest accumulation framework: each target candidate (denoted by $t$) accumulates a certain amount of "time" or "interest" from gaze input, until one of them reaches a threshold (denoted by $\theta$) for being selected. Under this framework, the widely adopted dwell-based target selection method can be expressed as follows.

Dwell-based Target Selection Method.
Assuming that the gaze trajectory is denoted by $S = \{s_1, s_2, \ldots, s_K\}$, where $s_i$ is a sampling point along the gaze trajectory at time $i$, the accumulated "interest" for a target candidate $t$ at time $i$, denoted by $I_i(t)$, is calculated as:

$$
I_i(t) = \left\{ \begin{array}{ll} I_{i-1}(t) + \Delta\tau, & \text{if } s_i \text{ is within the target } t \\ 0, & \text{otherwise} \end{array}\right. \tag{1}
$$

where $s_i$ is the gaze position at time $i$, and $\Delta\tau$ is the sampling interval. $I_i(t)$ represents the duration during which the gaze position has stayed continuously within the target candidate $t$. If the gaze position moves outside the target, $I_i(t)$ is reset to 0. To select a target, the eye-gaze position needs to stay continuously within a target for a period of $\theta$. In other words, the selected target is the one (denoted by $t^*$) whose accumulated selection interest $I_i(t^*)$ first reaches $\theta$ (i.e., $I_i(t^*) \geq \theta$).

§ 3.2 THE BAYESGAZE ALGORITHM

Under the framework of "accumulating selection interest", we propose BayesGaze, a Bayesian perspective on gaze-based target selection. It views each sampling point in a gaze trajectory as a signal for selecting a target, and then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point. BayesGaze then accumulates the posterior probabilities over all sampling points, weighted by the sampling interval, as the accumulated interest of selecting a target. A target candidate is selected once the accumulated interest reaches a threshold $\theta$.
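
As a point of reference, the dwell-based accumulation of Equation 1 can be sketched in a few lines of Python (the function name and the 60 Hz sampling interval are our assumptions, not part of the paper):

```python
def dwell_update(interest, inside_target, dt):
    # Equation 1: accumulate the sampling interval while the gaze stays
    # inside the target; reset to zero the moment it leaves.
    return interest + dt if inside_target else 0.0

dt = 1 / 60  # assumed 60 Hz gaze sampling interval (Delta tau)
# Gaze enters the target, briefly slips out for one sample, then re-enters.
inside = [True] * 30 + [False] + [True] * 30
interest = 0.0
for flag in inside:
    interest = dwell_update(interest, flag, dt)
# Only the 30 samples after the slip count toward the dwell threshold.
```

The single `False` sample wipes out the first half second of accumulated dwell, which is exactly the brittleness the Bayesian accumulation below is designed to avoid.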
Formally, the accumulated interest of selecting a target $t$ is calculated as follows, given the sampling point $s_i$:

$$
I_i(t) = I_{i-1}(t) + \Delta\tau \cdot P(t \mid s_i). \tag{2}
$$

The posterior $P(t \mid s_i)$ can be estimated according to Bayes' theorem, assuming there are $N$ target candidates:

$$
P(t \mid s_i) = \frac{P(s_i \mid t) P(t)}{P(s_i)} = \frac{P(s_i \mid t) P(t)}{\sum_{j=1}^{N} P(s_i \mid t_j) P(t_j)}, \tag{3}
$$

where $P(t)$ is the prior probability of target $t$ being the intended target without observing the current gaze input trajectory, and $P(s_i \mid t)$ is the probability of $s_i$ if the intended target is $t$ (the likelihood).

BayesGaze has the following characteristics. First, BayesGaze resumes the accumulation of selection interest from where it left off if the gaze trajectory accidentally leaves a target but returns to it later. This addresses a problem of the dwell-based method (Equation 1): if the eye-gaze position moves outside a target, the accumulated interest for selecting that target is reset to 0. Second, it weights the accumulated interest by the distance between the gaze point and the target center, through the likelihood function $P(s_i \mid t)$. The closer a gaze point is to the target center, the more "interest" that point contributes to the target selection. Third, it updates the prior distribution of targets ($P(t)$) and incorporates it into the procedure of deciding the selected target.

In the following parts, we introduce how to estimate the prior distribution $P(t)$ and the likelihood $P(s \mid t)$, which are key to applying BayesGaze.
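
A minimal sketch of one accumulation step (Equations 2 and 3); the likelihood values here are placeholder numbers, not yet the Gaussian model of the next section:

```python
def posterior(likelihoods, priors):
    # Equation 3: Bayes' theorem, normalized over the N candidates.
    joint = [l * p for l, p in zip(likelihoods, priors)]
    z = sum(joint)
    return [j / z for j in joint]

def accumulate(interest, likelihoods, priors, dt):
    # Equation 2: add the posterior, weighted by the sampling interval.
    post = posterior(likelihoods, priors)
    return [I + dt * p for I, p in zip(interest, post)]

# Uniform prior over 3 candidates; the gaze point is nearest candidate 0.
I = accumulate([0.0, 0.0, 0.0], [0.6, 0.3, 0.1], [1 / 3, 1 / 3, 1 / 3], 1 / 60)
```

Because the posterior sums to one, each sample distributes exactly $\Delta\tau$ of interest across the candidates; unlike Equation 1, no interest is ever discarded when the gaze drifts.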

§ 3.2.1 PRIOR PROBABILITY MODEL

This part introduces a frequency model to estimate the prior distribution $P(t)$ based on the observable target selection history. We assume that the user does not select targets randomly and that target selection follows some distribution, e.g. Zipf's Law. This assumption is based on the selection patterns observed in menu selection [8, 25, 51], smartphone app launching [30], and command triggering [1, 10, 51]. All of these are tasks that gaze target selection can support.

We model the prior distribution (i.e., a target candidate being selected prior to observing the current gaze trajectory) as a categorical distribution. More specifically, the outcome of a gaze-based selection trial that results in a selected target is viewed as a random variable $x$ whose value is one of $N$ categories (the $N$ target candidates). The core parameter of this random variable $x$ is the parameter vector $\mathbf{p} = (P(t_1), P(t_2), \ldots, P(t_N))$, which describes the probability of each category. As is common practice in Bayesian inference, we also view this parameter vector $\mathbf{p}$ as a random variable and give it a prior distribution, using the Dirichlet distribution.

According to the properties of Dirichlet distributions, after each target selection trial we can update the expected value of the posterior of $\mathbf{p}$ as follows:

$$
P(t_i) = \frac{k + c_i}{k \cdot N + \sum_{j=1}^{N} c_j}, \tag{4}
$$

where $N$ is the number of candidate targets (e.g., the number of menu items), $c_i$ is the number of times we have observed target $t_i$ being selected, and $k$ is the pseudocount of the Dirichlet prior, a hyper-parameter of the distribution.
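
Equation 4 is a one-liner per candidate; a sketch (the function and variable names are our assumptions):

```python
def prior_probs(counts, k):
    # Equation 4: expected value of p under the Dirichlet posterior,
    # where counts[i] is how often target t_i has been selected so far.
    n = len(counts)
    total = k * n + sum(counts)
    return [(k + c) / total for c in counts]

uniform = prior_probs([0, 0, 0, 0], k=1)    # no history yet: all 1/N
skewed = prior_probs([80, 10, 10, 0], k=1)  # history dominates the pseudocount
```

With no observed selections the prior is uniform; once the counts grow, the pseudocount $k$ matters less and less, matching the two extreme cases discussed below.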
The parameter $k$ can also be viewed as the update rate, a positive constant that controls how quickly the $P(t_i)$ are updated. Note that the prior updating model (Equation 4) is the same as the model proposed by Zhu et al. [51], although these authors do not describe it under the paradigm of categorical-Dirichlet distributions. We use the expected value of $\mathbf{p}$ (Equation 4) as the prior model in BayesGaze (Equation 3).

This prior model matches our expectations well. When no target selection has been observed, the probability $P(t_i)$ is $\frac{k}{k \cdot N} = \frac{1}{N}$, which means that all candidate targets have equal probability. Whereas when enough target selections have been observed, i.e. $c_i \gg k$, we have $P(t_i) \approx \frac{c_i}{\sum_j c_j}$, which means that $P(t_i)$ can be estimated from the frequency with which $t_i$ has been selected before.

By setting different $k$, we can balance $P(t_i)$ between two extreme cases: 1) when $k \rightarrow +\infty$, we have $P(t_i) \approx \frac{1}{N}$, that is, the prior probabilities of all candidate targets are equal; 2) when $k = 0$, we have $P(t_i) = \frac{c_i}{\sum_j c_j}$, which means that the prior probability is based only on the historical selection frequency. We later use empirical data to determine an optimal value for $k$.

§ 3.2.2 LIKELIHOOD MODEL

The goal of this step is to estimate $P(s_i \mid t)$, the likelihood of observing $s_i$ if $t$ is the intended target. Since $s_i$ is a single gaze position, a reasonable assumption is that $P(s_i \mid t)$ is higher if $s_i$ is closer to the center of $t$. We follow Bernard et al.
[4] and use a Gaussian density function to describe the likelihood of observing $s_i$, a common method for modeling the likelihood of a single-point target selection:

$$
P(s_i \mid t) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{\left\| s_i - c_t \right\|^2}{2\sigma^2} \right), \tag{5}
$$

where $c_t$ is the center of target $t$, the term $\left\| s_i - c_t \right\|$ is the $L^2$ (Euclidean) norm of the vector $s_i - c_t$, and $\sigma$ is an empirical parameter defining how concentrated the gaze points should be. The parameter $\sigma$ controls how much interest can be accumulated at a certain distance. If $\sigma$ is too small, a target accumulates high interest only when the gaze point is close to the target center, which could make the target hard to select. On the other hand, if $\sigma$ is too large, the accumulated interests for neighboring targets could become large and cause mis-selections. We estimate an optimal $\sigma$ from real data in the next section.

After obtaining both the prior probability and the likelihood, we can use BayesGaze to perform target selection. The BayesGaze algorithm is summarized in Algorithm 1. Note that the algorithm can be run online, i.e. when a gaze point $s_i$ is sampled by the gaze tracker, the top-level for-loop can be executed to check if a target is selected.
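
Equation 5 in one dimension can be sketched as follows (the names are ours, and the 0.28 cm value merely anticipates the $\sigma^*$ chosen later):

```python
import math

def likelihood(s, center, sigma):
    # Equation 5: Gaussian density of gaze point s around the target center.
    return (math.exp(-((s - center) ** 2) / (2 * sigma ** 2))
            / math.sqrt(2 * math.pi * sigma ** 2))

near = likelihood(0.1, 0.0, sigma=0.28)  # gaze point 0.1 cm from the center
far = likelihood(1.0, 0.0, sigma=0.28)   # gaze point 1.0 cm away
```

Note that the normalization constant $1/\sqrt{2\pi\sigma^2}$ cancels in Equation 3, so only the relative distances to the candidate centers affect the posterior.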

Algorithm 1 BayesGaze Algorithm

Input: Target set $T = \{t_1, t_2, \ldots, t_N\}$; gaze trajectory $S = \{s_1, s_2, \ldots, s_K\}$; threshold $\theta$

Output: Selected target $t$; selection time $\tau_{\text{sel}}$

for $s_i$ in $S$ do

&nbsp;&nbsp;for $t_j$ in $T$ do

&nbsp;&nbsp;&nbsp;&nbsp;Obtain the prior probability $P(t_j)$ and compute the likelihood $P(s_i \mid t_j)$ using Equation 5;

&nbsp;&nbsp;&nbsp;&nbsp;Compute the accumulated interest $I_i(t_j)$ from Equation 2;

&nbsp;&nbsp;&nbsp;&nbsp;if $I_i(t_j) > \theta$ then

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update the prior probability $P(t_m)$ for each $t_m \in T$, given that $t_j$ is selected, using Equation 4;

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return $t_j$, $i \cdot \Delta\tau$

&nbsp;&nbsp;&nbsp;&nbsp;end if

&nbsp;&nbsp;end for

end for

§ 3.2.3 BAYESGAZE WITHOUT PRIOR

If we consider the prior to be a uniform distribution before every trial (i.e. $\forall t_i \in T, P(t_i) = 1/N$), BayesGaze is identical to the Center of Gravity Mapping (CM) algorithm [4] (referred to as the CM method hereafter), a previously proposed method for deciding a target in a gaze-to-object mapping task. Under this special condition, the accumulated interest of the CM method can be calculated by Equation 2 with the prior $P(t) = 1/N$, that is:

$$
I_i(t) = I_{i-1}(t) + \Delta\tau \cdot P(t \mid s_i) = I_{i-1}(t) + \Delta\tau \cdot \frac{P(s_i \mid t)}{\sum_{j=1}^{N} P(s_i \mid t_j)}, \tag{6}
$$

where $P(s_i \mid t)$ is calculated by Equation 5. Therefore, we view BayesGaze as an improvement over the CM method that updates and incorporates the prior in the target selection process. The CM method is also very similar to the previously proposed Fractional Mapping method [40, 47].
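
Putting Equations 2–5 together, a self-contained sketch of Algorithm 1 for 1D targets (the geometry, parameter defaults, and names are our assumptions, not the authors' implementation):

```python
import math

def bayes_gaze(centers, trajectory, theta, sigma=0.28, dt=1 / 60,
               counts=None, k=1):
    # centers: 1D target centers; trajectory: gaze samples; counts: the
    # selection history that feeds the Dirichlet prior (Equation 4).
    n = len(centers)
    counts = [0] * n if counts is None else counts
    total = k * n + sum(counts)
    prior = [(k + c) / total for c in counts]                # Equation 4
    interest = [0.0] * n
    for i, s in enumerate(trajectory, start=1):
        # Equation 5, dropping the constant factor (it cancels in Equation 3).
        like = [math.exp(-((s - c) ** 2) / (2 * sigma ** 2)) for c in centers]
        z = sum(l * p for l, p in zip(like, prior))
        for j in range(n):
            interest[j] += dt * like[j] * prior[j] / z       # Equations 2-3
            if interest[j] > theta:
                counts[j] += 1    # record the selection for the next prior
                return j, i * dt  # selected target index, selection time
    return None, None  # no target crossed the threshold within S

# Five stacked targets (centers 2 cm apart); gaze hovers near the third.
sel, t_sel = bayes_gaze([0, 2, 4, 6, 8], [4.1, 3.9, 4.0] * 20, theta=0.5)
```

With the gaze concentrated on the third target, essentially all of each sample's $\Delta\tau$ flows to index 2, so the 0.5 s threshold is crossed after roughly half a second.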
We later compare BayesGaze with the CM method to examine to what degree incorporating the prior can improve gaze target selection performance.

In order to successfully apply the BayesGaze algorithm, we need to obtain the values of three parameters, denoted as a 3-tuple $[k, \sigma, \theta]$, where $k$ is part of the prior probability model (Equation 4), $\sigma$ is part of the likelihood model (Equation 5), and $\theta$ is the threshold of the accumulated interest for committing a selection. We carried out a study to collect gaze data for target selection and determine the optimal parameter values from that data.

§ 4 PARAMETER DETERMINATION

We adopted a data-driven simulation approach to search for the optimal parameter values for the BayesGaze algorithm. The procedure consists of two phases. In Phase 1, we carried out a Wizard-of-Oz study to collect gaze input data for selecting a target. In Phase 2, we fed the collected data to the BayesGaze algorithm to search for the optimal parameter values. We also searched for the optimal parameters for the Dwell method (Equation 1) and for the CM method (Equation 6).

§ 4.1 PHASE 1: COLLECTING GAZE INPUT DATA VIA A WIZARD-OF-OZ STUDY

We first carried out a Wizard-of-Oz study to collect gaze input data for selecting a target. We focused on a 1-dimensional target selection task, where the target is a horizontal bar and gaze motion is vertical. We picked this task because 1-dimensional pointing is a typical target selection task, and horizontal bars are widely used UI elements on mobile computing devices such as smartphones and tablets.

§ 4.1.1 PARTICIPANTS

Twelve users (4 female) between 23 and 31 years old (average $27.25 \pm 2.22$) participated in the experiment. All of them had normal or corrected-to-normal sight and none of them was color blind. None of them had experience using gaze tracking devices or applications.

§ 4.1.2 APPARATUS

We used the 11-inch iPad Pro for gaze tracking and to run the experiment because it is widely and conveniently accessible. Gaze tracking was implemented based on the iPad's TrueDepth camera and Apple's ARKit library, and the sampling rate was 60 Hz. Specifically, we used the leftEyeTransform and rightEyeTransform provided by the ARKit library and performed a hitTestWithSegment call to obtain the raw gaze position. Based on the recommendation of [11], we used the Outlier Correction filter with a triangle kernel [21] to obtain smooth gaze tracking. The filter contains a saccade/fixation detection module so that it can apply sliding windows of different lengths separately for saccades and fixations. The thresholds for the $x$ and $y$ axes to detect a saccade were both set to $0.5^\circ$ (calculated based on the estimated face-screen distance). For fixations, the sliding window size of the filter was set to 40, as suggested by [11]. For saccades, the sliding window size of the filter was set to 10, rather than using the raw position directly, to increase gaze tracking stability. We also followed the findings of previous works [24, 38, 41] that allowing head movements improves target selection performance. To test gazing accuracy, we used a task in which the user gazes at 40 different points on the screen with a cursor showing where the user is looking. The result showed a mean error of $0.67^\circ$ with a standard deviation of $0.85^\circ$, which suggests that users can accurately control their gaze to select targets.

§ 4.1.3 PROCEDURE

During the experiment the participant sat in front of a desk where an iPad Pro running the experiment was placed on a holder. The participant could freely adjust the iPad position, and was instructed to keep the distance between their eyes and the iPad at around 40 cm.

The study included multiple target selection trials.
In each trial, a horizontal bar in blue was displayed on the screen as the target and the participant was instructed to select it via gaze input. Fig. 2 shows the setup. Before each trial, the participant first moved the gaze-controlled cursor into the starting gray bar. After 3 seconds, the starting bar turned green, signaling the start of the trial. The participant was then instructed to move the cursor with their gaze to select a target of width $W$ at a distance $D$ from the starting bar. We collected gaze input data for 5 seconds after a trial started, assuming that 5 seconds was long enough for the participants to select a target. Each participant took a break after every 15 trials. In total the experiment lasted around 15 minutes per participant.

We adopted a within-participant $3 \times 4 \times 2$ design with three levels of target width $W$: 2 cm ($2.86^\circ$, calculated based on a participant-screen distance of 40 cm), 3 cm ($4.29^\circ$), and 4 cm ($5.76^\circ$); four levels of distance $D$: 6 cm ($8.53^\circ$), 8 cm ($11.31^\circ$), 10 cm ($14.04^\circ$), and 12 cm ($16.70^\circ$); and two levels of gaze motion direction: up or down from the starting bar. We counterbalanced the factors by randomizing the trials in the experiment.

Figure 2: A screenshot of the Wizard-of-Oz study. The green button is the starting bar, and the target is shown as a blue bar. A red cursor indicates where the participant is looking.

In total, the study resulted in 12 participants $\times$ 3 target sizes $\times$ 4 distances $\times$ 2 directions $\times$ 2 repetitions = 576 trials.

§ 4.2 PHASE 2: DETERMINING PARAMETERS FROM THE COLLECTED DATA

We created a set of gaze-based target selection tasks, simulated gaze input based on the data collected in Phase 1, and searched for the parameter values for BayesGaze, CM, and Dwell that led to high input accuracy and fast input speed.

§ 4.2.1 SIMULATING EYE-GAZE TARGET SELECTION TASKS

We first created a set of target selection tasks in which a user is supposed to control their gaze to select a target among $N$ candidates. These $N$ candidates are stacked together with no gap between them to simulate the common vertical list or vertical menu design of mobile devices (e.g., settings menus in iOS). We included the same 3 target sizes in the simulation as in the data collection study (2, 3, and 4 cm) and set $N = 5$. The gaze trajectories for selecting a target were obtained from the collected data, according to the target sizes. Fig. 3 shows examples of simulated gaze trajectories for selecting different targets on the screen.

Since previous research has shown that the distribution of menu items being selected follows Zipf's distribution [1, 8, 10, 25, 30, 51], we assumed that the frequency of each candidate being the target follows Zipf's Law:

$$
f(l; \alpha, N) = \frac{1/l^\alpha}{\sum_{n=1}^{N} \left( 1/n^\alpha \right)}, \tag{7}
$$

where $N$ is the number of candidate targets (in the simulation, $N = 5$), $l \in \{1, 2, \ldots, N\}$ is the rank of each target, and $\alpha$ is the value of the exponent characterizing the distribution. We included 4 $\alpha$ values (0.5, 1, 2, 3) in the simulation.

Figure 3: An example of using the same gaze trajectory to simulate selecting a target (the blue one) at different indices among the five horizontal bars. The three red lines show the same gaze trajectory collected in the Wizard-of-Oz experiment.
The red dot indicates the start of the trajectory. A simulated user is selecting the 2nd (a), the 3rd (b), and the 4th (c) target among 5 target candidates, with the same gaze trajectory.

For each target size, we had 192 collected trajectories. Among the $N$ candidates, we randomly assigned the frequencies. For example, when $N = 5$ and $\alpha = 1$, the generated frequencies can be [28, 84, 21, 42, 17], which means that the first target among the 5 candidates will be selected 28 times, the second 84 times, etc. We randomly selected trajectories (without repetition) to simulate selecting targets at different indices given the generated frequencies.

§ 4.2.2 SEARCHING FOR THE PARAMETER VALUES

Given a particular parameter tuple $[k, \sigma, \theta]$, we ran the BayesGaze algorithm to determine the selected target in the simulated target selection tasks. We viewed the process of searching for the optimal parameter values as an optimization problem: determining the parameter values that optimize target selection performance, measured in terms of success rate and selection time.

We performed a grid search for the optimal values of $k$, $\sigma$, and $\theta$. In the grid search, $k$ ranged from 0.5 to 5 in steps of 0.5, $\sigma$ ranged from 0.14 cm ($0.2^\circ$) to 1.4 cm ($2^\circ$) in steps of 0.14 cm, and $\theta$ ranged from 0.2 seconds to 2 seconds in steps of 0.1 seconds. The simulation results showed that different values of $k$ do not influence performance. We chose $k^* = 1$, as in [51]. When $k = 1$, the Dirichlet prior of the categorical distribution, without observing any selection results, becomes a uniform distribution, i.e. an equally distributed prior.
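
For concreteness, Equation 7 turned into integer trial counts reproduces the example frequencies quoted above (rounding to the nearest integer is our assumption):

```python
def zipf_counts(num_targets, alpha, total_trials):
    # Equation 7: the probability of rank l is (1/l^alpha) normalized over
    # all ranks, then scaled to the number of simulated trials.
    weights = [1 / (l ** alpha) for l in range(1, num_targets + 1)]
    z = sum(weights)
    return [round(total_trials * w / z) for w in weights]

counts = zipf_counts(5, alpha=1, total_trials=192)  # one count per rank
```

With $N = 5$, $\alpha = 1$, and 192 trajectories this yields [84, 42, 28, 21, 17], the same multiset as the example frequencies above before they are randomly assigned to target indices.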
The best parameters for $\sigma$ ranged from 0.28 cm to 0.56 cm for BayesGaze. We chose $\sigma^* = 0.28$ cm to reduce the chance of mis-selections.

Because we want to improve two objectives, success rate and selection time, we adopted a Pareto optimization process to find the optimal $\theta$. The process generates a set of parameter values, called the Pareto-optimal set or Pareto front. Each parameter setting in the set is Pareto-optimal, meaning that neither of the two metrics (success rate or selection time) can be improved without hurting the other. We plot the Pareto front of BayesGaze in Fig. 4a. We followed the exact same optimization process to search for the optimal parameter values for the CM and Dwell methods, and generated the corresponding Pareto fronts in Fig. 4b and 4c. For the CM method, the parameters are a 2-tuple $[\sigma, \theta]$, as it does not incorporate the prior into the accumulated interest. For the Dwell method, the parameter is $\theta$, the threshold for deciding whether a target is selected based on the accumulated selection interest.

To balance success rate and selection time, we assigned equal weights to both. We first normalized the success rate and selection time to the range $[0, 1]$. We then picked the parameter value $\theta^*$ that leads to the best overall score $S$, which is defined as:

$$
S = 0.5 \times \text{SuccessRate} - 0.5 \times \text{SelectionTime}, \tag{8}
$$

where SuccessRate and SelectionTime are the normalized values between 0 and 1, according to the highest and lowest values displayed in Fig. 4. The coefficient of SelectionTime is $-0.5$ because the lower the selection time, the higher the selection performance. The optimal parameters for the different $\alpha$ values are the same and are summarized in Table 1.
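
A sketch of the Pareto filtering and the Equation 8 scoring on hypothetical (success rate, selection time) pairs (the data points and function names are ours):

```python
def pareto_front(points):
    # Keep the points not dominated by any other point: a higher success
    # rate and a lower selection time are both better.
    return [(sr, t) for sr, t in points
            if not any(sr2 >= sr and t2 <= t and (sr2, t2) != (sr, t)
                       for sr2, t2 in points)]

def best_by_score(front):
    # Equation 8 on min-max normalized metrics:
    # S = 0.5 * SuccessRate - 0.5 * SelectionTime.
    srs, ts = [p[0] for p in front], [p[1] for p in front]
    def norm(v, lo, hi):
        return 0.0 if hi == lo else (v - lo) / (hi - lo)
    return max(front,
               key=lambda p: 0.5 * norm(p[0], min(srs), max(srs))
                             - 0.5 * norm(p[1], min(ts), max(ts)))

# Hypothetical (success rate, selection time in seconds) parameter settings.
points = [(0.80, 0.8), (0.85, 0.9), (0.90, 1.4), (0.83, 1.0), (0.90, 1.6)]
front = pareto_front(points)
choice = best_by_score(front)
```

The extremes of the front normalize to a score of 0, so the balanced scoring favors the knee of the curve rather than either endpoint.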

| Target Selection Method | $k^*$ | $\sigma^*$ | $\theta^*$ |
| --- | --- | --- | --- |
| BayesGaze | 1 | 0.28 cm | 0.9 |
| CM | - | 0.28 cm | 0.9 |
| Dwell | - | - | 0.8 |

Table 1: Optimal parameters (the same for the different $\alpha$ values in Zipf's Law) selected on the Pareto front for the three target selection methods.

§ 5 A TARGET SELECTION EXPERIMENT

To empirically evaluate BayesGaze, we conducted a 1D gaze-based target selection study using the parameters from the simulations. We included CM and Dwell as baselines in our study because (1) Dwell is a widely adopted target selection method and CM was one of the best-performing algorithms from the literature, and (2) CM can be viewed as BayesGaze without the prior. Including these two methods in the comparison allowed us to evaluate whether BayesGaze improved performance over extant algorithms, and to understand how the two components of BayesGaze (the likelihood function and the prior) contribute to the improvement in target selection performance.

§ 5.1 PARTICIPANTS AND APPARATUS

Eighteen adults (5 female) between 24 and 31 years old (average $27.2 \pm 2.1$) participated in the study. All of them had normal or corrected-to-normal sight and none of them reported being color blind.

The apparatus was the same as that used in the Wizard-of-Oz study (Section 4.1.2), as was the eye-gaze tracking technology: we used an iPad Pro with a TrueDepth camera, and the eye-gaze tracking was implemented with the ARKit library, as previously described.

Figure 4: The Pareto fronts of different parameter combinations for the 3 target selection methods under $\alpha = 1$ in Zipf's Law. The enlarged dots represent the selected parameter settings for the three methods; these settings have the most balanced performance according to Equation 8.

§ 5.2 DESIGN

We adopted a $3 \times 2 \times 2$ within-participant design.
The three independent variables were: (1) the target selection method with 3 levels (BayesGaze, CM, Dwell), (2) the target size with 2 levels (1 cm or $1.43^\circ$, and 2 cm or $2.86^\circ$), and (3) the $\alpha$ value of the Zipf's distribution with 2 levels ($\alpha = 1$ and $\alpha = 2$). The Zipf's distribution controls the distribution of the intended targets among the candidates.

For each selection method $\times$ target size $\times$ Zipf's law $\alpha$ combination, each participant performed 24 trials. When $\alpha = 1$, the frequencies of the 5 target candidates being the intended targets were 11, 5, 4, 3, 1; when $\alpha = 2$, these frequencies were 16, 4, 2, 1, 1. We included two $\alpha$ values to evaluate whether the skewness of the target distribution affects selection performance. Among a set of 24 trials, the distance between the target and the starting bar was either 4 cm or 5 cm with 50% probability for each distance, and the target was either above or below the starting bar, also with 50% probability for each option.

Figure 5: The controlled 1D gaze target selection experiment.

§ 5.3 PROCEDURE

For each trial, the participant was instructed to select one of the five adjacent horizontal bars displayed on the iPad screen via eye-gaze. The tracked gaze position was rendered as a cross-hair cursor on the display, as shown in Figure 5. The target to be selected was shown in blue and the other bars in cyan. A starting bar was also displayed, which served as the starting position for the gaze input. Prior to starting a trial, the participant was asked to move the cursor into the starting bar, which was initially displayed in gray. The bar turned green after three seconds, signaling the start of a trial. The participant then moved the cursor to select the target bar on the screen. The selected target then turned dark.
If the participant selected the wrong target, or did not select any target within 5 seconds of the beginning of the trial, it was considered a miss. The participant moved on to the next trial regardless of the outcome. To alleviate eye fatigue, the participant was allowed to take a break of up to 2 minutes every 15 trials. Fig. 5 shows a screenshot of the experiment and a participant performing a trial.

After each trial, BayesGaze updated the prior probability of each target candidate. We assumed that each condition corresponds to a particular interface, and when the experimental condition changed (e.g., target size, or the $\alpha$ value of the Zipf's distribution), we reset all the prior information.

The participants were instructed to select the target as accurately and as quickly as possible. At the end of the study, participants were asked to rate their preference for the three methods on a scale of 1 to 5 (1: dislike, 5: like very much). They also answered a subset of NASA-TLX [12] questions to measure the workload of the gaze target selection task, including questions about mental and physical demand. Workload ratings ranged from 1 to 10, from least to most demanding. The experiment lasted about 50 minutes.

To counterbalance the independent variables, the order of the methods was fully counterbalanced across all 6 possible orders. For half of the participants, $\alpha$ was set to 1 for the first half of the trials and to 2 for the other half; for the other half of the participants, the order was reversed. Other factors were randomized. In total, we collected 18 users $\times$ 3 methods $\times$ 2 target sizes $\times$ 2 $\alpha$ values $\times$ 24 trials $=$ 5184 trials.

§ 5.4 RESULTS

We evaluate the performance of BayesGaze, CM, and Dwell in terms of success rate and selection time.

§ 5.4.1 SUCCESS RATE

The success rate measures the ratio of correct selections over the total number of trials. The results (Fig.
6a) show that: 1) BayesGaze always has the highest success rate and Dwell the lowest, which confirms the effectiveness of the Bayesian approach and the benefit of using the prior; 2) large targets (2 cm) have a higher success rate than small targets (1 cm), because it is much easier to move one's gaze into a large target.

| Target Selection Method | 11 | 5 | 4 | 3 | 1 | 16 | 4 | 2 | 1 | 1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BayesGaze | 88.1 | 86.1 | 88.9 | 84.3 | **77.8** | 90.6 | 87.5 | 90.3 | 88.9 | 83.3 |
| CM | 85.6 | **82.8** | 84.0 | 86.1 | 86.1 | 85.9 | 87.5 | 93.1 | 88.9 | 88.9 |
| Dwell | 83.1 | 85.6 | **75.7** | 84.3 | 88.9 | 79.9 | 85.4 | 83.3 | 86.1 | 83.3 |

Table 2: The success rate (%) for different target selection frequencies; the first five columns are the frequencies when $\alpha = 1$ and the last five the frequencies when $\alpha = 2$ (the lowest success rate for each method is marked in bold)

A repeated measures ANOVA on success rate shows two significant main effects: target selection method ($F_{2,34} = 11.45$, $p < 0.001$) and target size ($F_{1,17} = 30.76$, $p < 0.001$). The test does not show a significant main effect of Zipf's Law's $\alpha$ ($F_{1,17} = 1.722$, $p = 0.207$). There is no significant interaction effect. Pairwise comparisons with Holm adjustment [13] on the success rate show significant differences between BayesGaze vs. Dwell ($p < 0.01$), CM vs. Dwell ($p < 0.05$), and BayesGaze vs. CM ($p < 0.05$).

The overall mean $\pm$ 95% confidence interval (CI) of the success rate across all target sizes and $\alpha$ values is $88.3\% \pm 3.6$ for BayesGaze, $85.9\% \pm 4.3$ for CM, and $82.1\% \pm 5.2$ for Dwell. In total, BayesGaze improves the success rate by 6.2% over Dwell, and by 2.4% over CM.
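The Holm adjustment used in the pairwise comparisons above is a generic step-down procedure; a minimal sketch (the p-values in the usage note are illustrative, not the study's raw values):

```python
def holm_adjust(p_values):
    """Holm step-down adjustment for multiple comparisons.

    Sort the m raw p-values ascending, multiply the i-th smallest
    (0-indexed) by (m - i), enforce monotonicity over the sorted order,
    and cap at 1. Returns adjusted p-values in the original order.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0, (m - rank) * p_values[i])
        running_max = max(running_max, adj)  # keep adjusted values monotone
        adjusted[i] = running_max
    return adjusted
```

For example, three raw p-values `[0.01, 0.04, 0.03]` become `[0.03, 0.06, 0.06]` after adjustment, so only the first comparison survives at the 0.05 level.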
(b) Decomposition of the error rate for target size $\times$ Zipf's Law's $\alpha$

Figure 6: The average success rate with 95% CI and the decomposition of the error rate (Mis-Selection (MS) and Non-Selection (NS))

In addition to the success rate, we also look into the error rate, which measures the ratio of trials in which the correct target is not selected. There are two types of errors: (1) Mis-Selection (MS), where a wrong target is selected, and (2) Non-Selection (NS), where no target is selected. We examine these two types of errors separately. Fig. 6b shows the decomposition of the error rate. The major part of the error rate of BayesGaze and CM comes from mis-selection, and the same holds for Dwell when the target size is 2 cm. However, when the target size is 1 cm, Dwell suffers from failing to select any target. This result implies that using a Bayesian framework can alleviate the problem of not being able to select a target.

With BayesGaze, a potential side effect of incorporating the prior might be that less frequent targets become more difficult to select. Table 2 shows the success rates by target frequency. Although the success rates for items with a frequency of 1 are lower than those for the high-frequency items, they are still near 80%. A repeated measures ANOVA does not show significant main effects of frequency on success rate for BayesGaze ($F_{9,153} = 0.776$, $p = 0.639$), CM ($F_{9,153} = 1.248$, $p = 0.27$), or Dwell ($F_{9,153} = 0.669$, $p = 0.736$), indicating that this potential side effect is minor.

§ 5.4.2 SELECTION TIME

Fig. 7 shows the results for selection time, which measures the time to select the target from the start of the trial.
As with the success rate, we observe that: 1) BayesGaze has the lowest selection time and Dwell the longest; 2) small targets (1 cm) take longer to select than large ones (2 cm), especially for Dwell.

Figure 7: The average selection time (with 95% CI) by target size $\times$ Zipf's Law's $\alpha$

A repeated measures ANOVA on selection time shows two significant main effects: target selection method ($F_{2,34} = 21.19$, $p < 0.001$) and target size ($F_{1,17} = 116.9$, $p < 0.001$). The test does not show a significant main effect of Zipf's Law's $\alpha$ ($F_{1,17} = 1.685$, $p = 0.212$). The only significant interaction effect is target size $\times$ target selection method ($F_{2,34} = 31.81$, $p < 0.001$). Pairwise comparisons with Holm adjustment on selection time show significant differences for BayesGaze vs. Dwell ($p < 0.001$) and CM vs. Dwell ($p < 0.01$). The pairwise comparisons do not show a significant difference for BayesGaze vs. CM ($p = 0.09$).

The overall mean $\pm$ 95% CI selection time across all target sizes and $\alpha$ values is $2.23 \pm 0.15$ seconds for BayesGaze, $2.30 \pm 0.15$ seconds for CM, and $2.49 \pm 0.18$ seconds for Dwell. In total, BayesGaze saves 10.4% of the selection time over Dwell, and 3% over CM.

§ 5.4.3 SUBJECTIVE FEEDBACK

The results of the subjective feedback are shown in Fig. 8. For overall preference, the median ratings for BayesGaze, CM, and Dwell are 4, 3.5, and 3, respectively. BayesGaze has the highest median rating. For mental and physical demand, the medians are 6.5 and 5.5 for BayesGaze, 6 and 6 for CM, and 7.5 and 7.5 for Dwell.
Nonparametric Friedman tests do not show significant main effects of selection method on the three metrics: overall preference ($\chi_{r}^{2}(2) = 1.11$, $p = 0.57$), physical demand ($\chi_{r}^{2}(2) = 2.93$, $p = 0.085$), and mental demand ($\chi_{r}^{2}(2) = 5.24$, $p = 0.073$). The $p$ values for physical and mental demand approach statistical significance.

Figure 8: The median subjective ratings of overall preference, mental demand, and physical demand. For overall preference, higher ratings are better. For mental and physical demand, lower ratings are better.

§ 5.5 DISCUSSION

Performance. The experimental results show that BayesGaze outperformed both the Dwell and CM methods in both selection accuracy and speed. BayesGaze improved the success rate of Dwell from 82.1% to 88.3%, i.e., a 6.2% increase, and reduced the selection time from 2.49 seconds to 2.23 seconds, i.e., a 10.4% reduction. BayesGaze also improved the success rate of CM by 2.4% and reduced the selection time by 3%. Pairwise comparisons with Holm adjustment showed all these differences to be significant ($p < 0.05$), except for the selection time between BayesGaze and CM ($p = 0.09$).

The promising performance of BayesGaze first shows that incorporating the prior significantly improves target selection performance. Compared with CM, which can be viewed as BayesGaze without the prior, BayesGaze performed better in both accuracy and speed across all conditions. This suggests that incorporating the prior distribution of targets is effective in improving the performance of gaze-based target selection tasks. Second, both BayesGaze and CM outperformed Dwell, indicating that accumulating interest, represented by the posterior in BayesGaze and by the likelihood in CM, is also effective for gaze-based target selection.

Prior.
Incorporating the prior might make less frequent targets more difficult to select, even though we did not observe this in our experiment, as shown in Table 2. There are several ways to prevent this potential problem: (1) set a lower bound on the target frequency so that no target becomes hard to select; (2) in real-world applications, leverage user actions to address the problem: for example, if the previous selection is incorrect (a back/cancel action is performed immediately), reduce the probability of the incorrectly selected target; (3) similar to what we do in this paper, use a small $\sigma$ for the likelihood model in order to decrease interference between neighboring targets.

Midas-Touch Problem. Our method can also work with existing approaches to solving the Midas-Touch problem in gaze target selection. For example: (1) we can use methods like [5, 43] to infer whether a user is reading content on the UI or controlling their gaze to select a target. These methods classify gaze positions into a content-reading phase and a target-selection phase; BayesGaze can discard the gaze positions in the content-reading phase and use only those in the target-selection phase to decide the target. (2) We can increase the threshold of the accumulated posterior required for selection. Reading content on a UI tends to take less time than controlling gaze to select a target, so increasing the threshold can prevent falsely activating a selection; the actual threshold should be set based on the specific scenario. This approach is also adopted by dwell-based methods (e.g., [28]) to mitigate the Midas-Touch problem.

Target Dimension. This paper considers 1D targets to show that Bayes' theorem can be adopted to improve the performance of gaze-based target selection. In real applications, there are many linear menus on computers and smartphones where our method can be applied directly. However, the underlying principle (Eq.
2 - 5) is not tied to a specific type of target and works for both 1D and 2D targets. The main difference between 1D and 2D target selection lies in the likelihood function (Eq. 5): for 1D targets we adopted a 1D Gaussian; for 2D targets it should be replaced by a 2D Gaussian distribution. The rest of the method, including updating the priors, accumulating the weighted posterior, and using Pareto optimization to balance accuracy and selection time, remains the same.

Scalability. BayesGaze uses a gaze position buffer to store the gaze trajectory and empties it after each selection action. Our study (Fig. 7) shows that most selections happen within 3 seconds, so only a small amount of memory is needed to store the gaze data. In real-world applications, we may use a rolling window of 3 seconds to store gaze positions; the method can then scale up to handle long gaze-based input.

§ 6 CONCLUSION

In this paper, we introduced BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in a gaze trajectory as a signal for selecting a target, uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by the sampling intervals over all sampling points to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution with a Dirichlet prior. Our controlled experiment showed that BayesGaze improves target selection accuracy from 82.1% to 88.3% and selection time from 2.49 to 2.23 seconds per selection over the widely adopted dwell-based selection method. It also improves selection accuracy and selection time over the CM method [4] (85.9%, 2.3 seconds per selection), a high-performance gaze target selection algorithm.
Overall, our research shows that both incorporating the prior and accumulating the posterior are effective in improving the performance of gaze-based target selection.

diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/wLtLeiJNRKb/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/wLtLeiJNRKb/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..f75f3b94fbf38d69acbecd5af961f6e9ee371462
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/wLtLeiJNRKb/Initial_manuscript_md/Initial_manuscript.md

# Improved Low-cost 3D Reconstruction Pipeline by Merging Data From Different Color and Depth Cameras

Category: Research

## Abstract

The performance of traditional 3D capture methods directly influences the quality of digitally reconstructed 3D models. To obtain complete and well-detailed low-cost three-dimensional models, this paper proposes a 3D reconstruction pipeline that uses point clouds from different sensors, combining captures from a low-cost depth sensor, post-processed with Super-Resolution techniques, with Structure-from-Motion and Multi-View Stereo output data computed from the high-resolution RGB images of an external camera. The main contribution of this work is the description of a complete pipeline that improves the information acquisition stage and merges data from different sensors. Several phases of the 3D reconstruction pipeline were also specialized to improve the model's visual quality.
The experimental evaluation demonstrates that the developed method produces good and reliable results for the low-cost 3D reconstruction of an object.

Keywords: Low-Cost 3D Reconstruction, Depth Sensor, Photogrammetry.

Index Terms: Computer graphics, Shape modeling, 3D Reconstruction.

## 1 INTRODUCTION

3D reconstruction makes it possible to capture the geometry and appearance of an object or scene, allowing us to inspect details without risk of damage, measure properties, and reproduce 3D models in different materials [21]. In recent years, numerous advances in 3D digitization have been observed, mainly through pipelines for three-dimensional reconstruction using costly high-precision 3D scanners. In addition, recent research has sought to reconstruct objects or scenes using depth images from low-cost acquisition devices (e.g., the Microsoft Kinect sensor [17]) or using Structure from Motion (SFM) [24] combined with Multi-View Stereo (MVS) [5] on RGB images.

Good quality 3D reconstructions demand substantial financial resources, as they require state-of-the-art equipment to capture object data with high precision and detail. On the other hand, low-resolution equipment implies lower quality capture, even though it is financially more viable. Even with their ease of operation, light weight, and portability, low-cost approaches must take into account the limitations of the scanning equipment used [20].

The acquisition step of a 3D reconstruction pipeline refers to the use of devices to capture data from objects in a scene, such as their geometry and color [22]. One result of 3D geometry capture is a collection of discrete points that describes the model's shape, called a point cloud. The data obtained in this step are used in all other phases of the 3D reconstruction process [2].

Active capture methods use equipment such as scanners to infer object geometry through a beam of light, inside or outside the visible spectrum.
The scanner sensor has the advantages of fast measuring speed, robustness to external factors, and ease of acquiring information. Active sensors also perform well in reconstructing texture-less and featureless surfaces [6, 22]. The sensors need to be sensitive to small variations in the acquired information, since for small differences in distance the variation in the time the light takes to reach two different points is very low, requiring low equipment latency and good response time. For this reason, these systems tend to be slightly noisy [21]. For low-cost reconstruction approaches, the difficulty of capturing color with high precision is a disadvantage [10].

Passive methods are based on optical imaging techniques. They are highly flexible and work well with any modern digital camera. Image-based 3D reconstruction is practical, non-intrusive, low-cost, and easily deployable outdoors. Various properties of the images can be used to retrieve the target shape, such as material, viewpoint, and illumination. As opposed to active techniques, image-based techniques provide an efficient and easy way to acquire the color of a target object [10]. Although passive reconstructions, mainly using SFM and MVS, produce excellent results, they have limitations such as the difficulty of distinguishing the target object from the background [25], and they require the target object to have detailed geometry [6]. A controlled environment is needed to obtain better reconstruction results [12, 24].

Considering the limitations imposed by the presented approaches, it is important to note that a target whose geometry has been described by only one low-cost capture method poses a real challenge for expressing its completeness, with rich and small details [6].
This paper proposes a hybrid pipeline combining a low-cost depth camera (low-resolution images) and an external color capture camera (a digital camera with high-resolution RGB images) to estimate and reconstruct the surface of an object and apply a high-quality texture. The limitations of each data acquisition approach are bypassed, generating a complete and well-detailed replica of the target model with high visual quality. To achieve this, this project uses a variation and combination of Structure from Motion, Multi-View Stereo, and depth camera capture techniques.

Although there are mature projects aimed at low-cost 3D reconstruction, few describe step by step how to overcome the limitations of low-cost three-dimensional data capture by using the best features in all phases of the pipeline to obtain a model as realistic as possible. The main contribution of this work is the description of a complete pipeline that makes use of post-processed depth captures and merges data from different sensors, in which the depth sensor data and the high-resolution color images do not need to be synchronized.

As it is a post-processing task (performed after capturing/estimating depth data), this work also includes the detection of the region of interest, based on the average distance of the scene, removing points not belonging to the target object, and allows the inclusion of new images containing regions of the target object not previously photographed to improve the results of the texturing step.

In addition to this introductory section, this work is organized as follows: Section 2 presents related work, while Section 3 describes the proposed pipeline. The experiments and the evaluation of the pipeline are presented in Section 4. Finally, Section 5 discusses the final considerations and results achieved by this research.

## 2 RELATED WORK

Prokos et al.
[19] proposed a hybrid approach combining shape from stereo (with additional geometric constraints) and laser scanning techniques. Using two cameras and a portable laser beam, they achieved accuracy as good as some high-end laser triangulation scanners. However, their results do not include automatic detection of outliers.

The KinectFusion system [17] tracks the pose of a portable depth camera (Kinect) as it moves through space and performs good three-dimensional surface reconstructions in real time. The Kinect sensor has considerable limitations, including temporal inconsistency and the low resolution of the captured color and depth images [22]. Real-time reconstruction is not a requirement for well-detailed, accurate, and complete reconstructions.

Silva et al. [26] provide a guided reconstruction process using Super-Resolution (SR) techniques, helping to increase the quality of the low-resolution data captured with a low-cost sensor. This method of data acquisition using low-cost depth cameras and SR was further improved by Raimundo [22]. Even with depth image improvements, a poor registration of the captures can affect the final model's shape.

Falkingham [9] demonstrates the potential applications of low-cost technology in the field of paleontology. The Microsoft Kinect was used to digitize specimens of various sizes, and the resulting digital models were compared with models produced using SFM and MVS. The work pointed out that although the Kinect generally registers morphology at a lower resolution, capturing less detail than photogrammetry techniques, it offers advantages in the speed of data acquisition and in generating the 3D mesh in real time during data capture. However, they did not use Super-Resolution to improve the captures from the low-cost device, and the models produced by the Kinect lack any color information.

Zollhöfer et al.
[28] used a Kinect sensor to capture the geometry of an excavation site and took advantage of a topographic map to distort the reconstructed model, significantly increasing the quality of the scene. The global distortion, with Super-Resolution techniques applied to the raw scans, significantly increased the fidelity and realism of the results, but it is too specialized for large-scale scenes.

Paola and Inzerillo [8], in order to digitally reproduce the Egyptian stone from Palermo, proposed a method with a structured-light scanner, smartphones, and SFM to apply texture to the highly accurate mesh generated by the scanner. The main challenges were the dark color of the material and the shallowness of the grooves of the hieroglyphs, which some capture approaches have difficulty recognizing. The texture was applied with a quite accurate level of detail. This reference work used a high-resolution 3D scanner and did not aim at low-cost reconstruction.

Jo and Hong [13] use a combination of terrestrial laser scanning and Unmanned Aerial Vehicle (UAV) photogrammetry to build a three-dimensional model of the Magoksa Temple in Korea. The scans were used to acquire the perpendicular geometry of buildings and locations, and were aligned and merged with the photogrammetry output, producing a hybrid point cloud. The photogrammetry adds value to the 3D model, complementing the point cloud with the upper parts of the buildings, which are difficult to acquire through laser scanning.

Chen [6] proposes a registration method to combine the data of a laser scanner and photogrammetry to reconstruct real outdoor 3D scenes, greatly increasing accuracy and convenience of operation. The two sensors can work independently; the method fuses their data even at different scales. Mesh reconstruction and texturing were not explored in this work.

Raimundo et al.
[21] point out in their bibliographic review several studies that successfully used advanced rendering techniques such as global illumination, ambient occlusion, normal mapping, shadow baking, per-vertex lighting, and level of detail. These rendering techniques also improve the final presentation of 3D reconstructions.

## 3 PIPELINE PROPOSAL

To overcome the limitations of the low-cost three-dimensional data acquisition process, the following pipeline is proposed: capturing depth and color images (using a low-cost depth sensor and a digital camera); generating point clouds from the low-cost RGB-D camera's depth images (using SR techniques [22]); estimating shape from RGB images (using SFM [24] and MVS [5]); merging the data from these different capture techniques; generating the mesh; and texturing with high-quality photos (Fig. 1). Several phases of the pipeline were specialized to achieve better accuracy and visual quality in 3D reconstructions of small and medium scale objects. The proposed pipeline works offline.

![01963e92-3739-796c-974f-481252d47763_1_939_146_695_778_0.jpg](images/01963e92-3739-796c-974f-481252d47763_1_939_146_695_778_0.jpg)

Figure 1: Schematic diagram of the proposed pipeline and the 3D reconstruction processes of an object.

### 3.1 Data acquisition

For capture using a low-cost depth sensor, the following acquisition protocol is established: take several depth captures, moving the sensor around the object, and define the limits of the capture volume. Furthermore, a turntable can also be used, yielding a more controlled capture and alignment process. The number of views captured is smaller than in real-time approaches due to the additional processing required to ensure the quality of each capture. Considering the quality requirements of this work, an interactive tool [20] is used to acquire the raw data from the depth sensor (Fig. 2).
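The capture-volume limits described above can be applied as a simple post-capture filter on each depth image; a minimal sketch (the millimeter thresholds and pixel margins are illustrative assumptions, not the interactive tool's actual parameters):

```python
import numpy as np

def clip_capture_volume(depth_mm, near_mm=400, far_mm=900,
                        crop=(40, 40, 60, 60)):
    """Keep only depth samples inside the capture volume.

    depth_mm : 2D array of depth values in millimeters (0 = no reading).
    near_mm, far_mm : depth limits of the capture volume.
    crop : pixels to discard on the (top, bottom, left, right) borders.
    Invalid samples are set to 0, mimicking a 'no data' sensor reading.
    """
    out = depth_mm.astype(np.float64).copy()
    top, bottom, left, right = crop
    # Zero out everything outside the pixel crop window.
    mask = np.zeros_like(out, dtype=bool)
    mask[top:out.shape[0] - bottom, left:out.shape[1] - right] = True
    out[~mask] = 0
    # Zero out samples outside the depth limits.
    out[(out < near_mm) | (out > far_mm)] = 0
    return out
```

This mirrors the two sliders described for the acquisition software: one pair of limits in millimeters (depth) and one in pixels (crop).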
The results of the depth capture method are proportional to the quality of the device's captures: the lower the incidence of noise and the better the accuracy of the inferred depth, the better the result. With this in mind, each depth image goes through a filtering step with the application of Super-Resolution [22]. To provide high-resolution information beyond what is possible with a specific sensor, several low-resolution captures are merged, recreating as much detail as possible.

To add 3D information in greater detail and to apply a simple high-quality texturing process, photographs are taken with a digital camera around the target object. In our pipeline, these captures are independent of the depth sensor; we only need to take pictures of the fixed object while moving the camera freely. The set of images must be sufficient to cover most of the object's surface, and the images must portray, in pairs, common parts of it. The color images are used in the SFM pipeline.

The SFM pipeline detects characteristics in the images (feature detection), mapping these characteristics between images and finding descriptors capable of representing a distinguishable region (feature matching). These descriptors represent vertices of the reconstruction of the 3D scene (sparse reconstruction). The greater the number of matches found between the images, the greater the accuracy of the 3D transformation matrix computed between the images, providing the estimation of the relative position between camera poses [3, 10].

![01963e92-3739-796c-974f-481252d47763_2_160_144_700_831_0.jpg](images/01963e92-3739-796c-974f-481252d47763_2_160_144_700_831_0.jpg)

Figure 2: Software to acquire and process depth images. The sliders control the capture limits (in millimeters) and the cut limits (in pixels), effectively determining the capture volume.
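The idea of merging several low-resolution captures can be illustrated with a deliberately simplified sketch: per-pixel median fusion of registered depth frames, which suppresses the sensor noise discussed above (the actual SR technique of [22] is considerably more elaborate; this is only a conceptual stand-in):

```python
import numpy as np

def fuse_depth_frames(frames):
    """Naively fuse several registered low-resolution depth frames.

    frames : list of 2D arrays (same shape), where 0 marks missing samples.
    Returns the per-pixel median over the valid samples; pixels with no
    valid sample in any frame stay 0.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    valid = stack > 0
    # Replace invalid samples with NaN so nanmedian ignores them.
    stack[~valid] = np.nan
    fused = np.nanmedian(stack, axis=0)
    return np.nan_to_num(fused, nan=0.0)
```

Besides denoising, fusing frames fills holes: a pixel missing in one capture can still receive a value from the other captures.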
Photographs with good resolution and objects with a higher level of detail tend to bring greater precision to the photogrammetry algorithms. For objects with fewer details and features, the environment can be used to achieve better results [24]. In addition to using the estimated structure to improve the geometry captured by the depth sensor, we use the estimated camera poses to apply texture easily and directly over the final model's surface.

The Multi-View Stereo process is used to improve the point cloud obtained by SFM, resulting in a dense reconstruction. As the camera parameters such as position, rotation, and focal length are already known from SFM, MVS computes 3D vertices in regions not detected by the descriptors. Multi-View Stereo algorithms generally have good accuracy, even with few images [10].

To highlight the target object in this image-based point cloud, a method of detecting the region of interest can be used. A simple algorithm detects the centroid of the set of 3D points and removes points beyond a radius from it. If the floor below the object is discernible, a planar segmentation algorithm can also be used to remove the plane. A statistical removal algorithm can further be used to remove outliers. If even more accurate outlier removal is required, a manual process using a user interface tool can be performed. Most of the discrepancies and the background are removed by the proposed steps, minimizing working time.

Although image-based 3D reconstructions capture greater detail than low-cost depth sensors [9], this approach may not be able to estimate the completeness of the object (Fig. 3). This is a common result when the captures do not fully describe the target model, or when it does not have a very distinguishable texture or detail.
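The centroid-radius region-of-interest step and the statistical outlier removal described above can be sketched with plain NumPy; the radius heuristic and the `k`/`std_ratio` defaults are illustrative assumptions, not the pipeline's tuned parameters:

```python
import numpy as np

def roi_filter(points, radius_factor=1.5):
    """Keep points within a radius of the cloud centroid.

    points : (N, 3) array. The radius is radius_factor times the median
    distance to the centroid (an illustrative heuristic).
    """
    centroid = points.mean(axis=0)
    dist = np.linalg.norm(points - centroid, axis=1)
    return points[dist <= radius_factor * np.median(dist)]

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Remove points whose mean distance to their k nearest neighbors
    deviates from the global mean by more than std_ratio std devs."""
    diff = points[:, None, :] - points[None, :, :]   # (N, N, 3)
    d = np.linalg.norm(diff, axis=2)                 # pairwise distances
    d_knn = np.sort(d, axis=1)[:, 1:k + 1]           # skip self-distance
    mean_knn = d_knn.mean(axis=1)
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]
```

The brute-force pairwise distances are fine for small clouds; a k-d tree would replace them for the dense MVS output.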
![01963e92-3739-796c-974f-481252d47763_2_1037_145_498_981_0.jpg](images/01963e92-3739-796c-974f-481252d47763_2_1037_145_498_981_0.jpg)

Figure 3: Some parts of the surface may not be estimated by the photogrammetry process. In (a), the white and smooth paint of the object (b) prevents the MVS algorithm from obtaining a greater number of points defining this part of the model's structure, leaving this featureless surface region with a lower density of points than the others.

The algorithms used in the next steps require oriented data; thus, the normals of the point clouds are estimated before performing the alignment step. A k-nearest-neighbor normal estimation algorithm is used for this task.

### 3.2 Alignment

To deal with the problem of aligning the point clouds from the acquisition phase, transformations are applied to place all captures in a global coordinate system. This alignment is usually performed in a coarse and a fine alignment step.

To perform the initial alignment between the point clouds obtained by the depth sensor, we use global alignment algorithms in which the pairs of three-dimensional captures are roughly aligned [15]. Given the initial alignment between the captured views, the Iterative Closest Point (ICP) algorithm [11] is executed to obtain a fine alignment. After pairwise incremental registration, an algorithm for global minimization of the accumulated error is run.

The initial alignment step may not produce good alignment results due to the nature of the depth data, such as the low number of discernible points between two point clouds [20], so the registration may present drifts. With this in mind, we use the point cloud obtained by photogrammetry as an auxiliary reference to apply a new alignment over the depth sensor point clouds, distorting the transformations and propagating the accumulated error between consecutive alignments and the loop closure, improving the global registration and the quality of the aligned point cloud.
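The fine-alignment stage above relies on ICP; a minimal point-to-point variant (brute-force nearest neighbors and a Kabsch/SVD rigid fit per iteration; a conceptual sketch, not the pipeline's actual implementation) looks like this:

```python
import numpy as np

def icp_point_to_point(src, dst, iterations=20):
    """Minimal point-to-point ICP refining an initial (coarse) alignment.

    src, dst : (N, 3) and (M, 3) point clouds, roughly pre-aligned.
    Each iteration matches every source point to its nearest target point,
    then solves for the rigid transform (Kabsch/SVD) that best maps the
    matched pairs, accumulating the result into src.
    """
    src = src.astype(np.float64).copy()
    for _ in range(iterations):
        # Brute-force nearest neighbours (fine for small clouds).
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[2, :] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = (R @ src.T).T + t
    return src
```

The coarse global alignment matters precisely because this loop only converges when the initial guess puts corresponding points close enough for the nearest-neighbor matching to be mostly correct.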
+ +![01963e92-3739-796c-974f-481252d47763_3_200_174_1399_1086_0.jpg](images/01963e92-3739-796c-974f-481252d47763_3_200_174_1399_1086_0.jpg) + +Figure 4: Porcelain horse. Given the richness of detail in this object, such as the head and saddle, we use the photogrammetry method to capture these regions with the highest level of detail. At the same time, the object has few features in its predominantly smooth regions, such as the base of the structure and the body of the animal; there we use the depth-sensor capture approach, which is not affected by this factor in the 3D acquisition process. The data captured by the low-cost depth sensor added information where there are few visible features, as can be seen at the base and legs of the horse. + +The point cloud generated by the image-based 3D reconstruction pipeline and the one obtained from the depth-sensor captures are created from different image spectra and commonly have different scales. The point clouds obtained using the depth sensor must be aligned with the corresponding points of the object in the photogrammetry point cloud. + +As the depth-sensor captures are already in a global coordinate system, to carry out this alignment it is sufficient to scale and transform a single capture to fit the cloud obtained by MVS and apply the same transformation to the others, speeding up the registration process. After that, the ICP algorithm can be reapplied, now including the photogrammetry output point cloud. This last point cloud is not transformed; only the remaining captures are aligned to it, because the camera poses we later use for texturing are expressed in this model's coordinate system. + +The merging of point clouds from both data capture approaches increases the information that defines the object's geometry. The resulting point cloud is used in the next steps of the pipeline.
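Recovering the scale and rigid transform between corresponding points of the two clouds has a well-known closed form (Umeyama's method). The numpy sketch below is an illustrative stand-in for the scaling step described above, not the exact procedure used in the pipeline; the function name is ours:

```python
import numpy as np

def fit_similarity(src, dst):
    """Closed-form scale s, rotation R, translation t with dst ≈ s·R·src + t
    (Umeyama's method), given row-by-row corresponding points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d                 # centered clouds
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))  # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                 # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) * len(src) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known similarity transform.
rng = np.random.default_rng(1)
src = rng.random((30, 3))
R_true, _ = np.linalg.qr(rng.random((3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                            # force a proper rotation
dst = 2.5 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = fit_similarity(src, dst)
```

In practice the correspondences come from manually picked point pairs or from a scale-aware ICP variant.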
+ +### 3.3 Surface reconstruction + +The mesh generation step is characterized by the reconstruction of the surface, a process in which a continuous 3D surface is inferred from a collection of discrete points that sample its shape [1]. + +For this step, we use the Screened Poisson Surface Reconstruction algorithm [14]. This algorithm seeks a surface whose gradient field best matches the normals of the vertices of the input point cloud. The choice of a parametric method for surface reconstruction is justified by its robustness and by the possibility of using numerical methods to improve the results. In addition, the resulting meshes are almost regular and smooth. + +### 3.4 Texture synthesis + +Applying textures to reconstructed 3D models is one of the keys to realism [27]. High-quality texture mapping aims to avoid seams, smoothing the transition between an image used for texturing and its adjacent ones [16]. + +The texture synthesis phase of the proposed pipeline combines the high-resolution pictures captured with an external digital camera with the integrated model obtained from the previous step of the pipeline. + +![01963e92-3739-796c-974f-481252d47763_4_197_199_1406_1039_0.jpg](images/01963e92-3739-796c-974f-481252d47763_4_197_199_1406_1039_0.jpg) + +Figure 5: Jaguar pan replica. Even with some visual characteristics introduced by the 3D printing process, the object has very few distinguishable features because of its predominantly white texture. This factor makes the reconstruction process by SFM and MVS difficult, so we use the environment to assist in detecting the positions and orientations of the cameras. The captures with the depth sensor added information on the legs and belly (bottom) of the jaguar that was not acquired by photogrammetry.
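The objective minimized by the surface reconstruction step of Sect. 3.3 can be made concrete. Writing $\chi$ for the (smoothed) indicator function of the solid, $\vec{V}$ for the vector field interpolating the oriented normals of the samples $P$, and $\alpha$ for the screening weight, Screened Poisson Surface Reconstruction [14] minimizes (up to per-sample area weighting, omitted here):

$$E(\chi) = \int \left\lVert \vec{V}(q) - \nabla\chi(q) \right\rVert^{2}\, dq \;+\; \alpha \sum_{p \in P} \chi(p)^{2}$$

The first term makes the gradient of the recovered implicit function follow the input normals; the screening term anchors the extracted isosurface to the sample points.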
+ +The high-resolution photos taken with a digital camera, with poses calculated using SFM, are used to generate the texture coordinates and atlas of the model, avoiding a time-consuming manual process. + +The images and respective poses from SFM cannot texture faces that are not visible in any image used for the reconstruction, leaving non-textured surfaces on the three-dimensional model. To overcome this limitation, we post-apply the texture, merging the camera poses resulting from SFM with new photos whose poses are computed in the coordinate system of the photogrammetry result. + +## 4 EXPERIMENTS AND EVALUATION + +For evaluation, we ran the proposed pipeline on objects of varying size and complexity: a porcelain horse-shaped object ("Porcelain horse", Fig. 4) and jaguar- and turtle-shaped clay pan replicas ("Jaguar pan", Fig. 5, and "Turtle pan", Fig. 6, respectively). The remaining objects used in this study are replicas of cultural objects from the Waurá tribe and belong to the collection of the Federal University of Bahia Brazilian Museum of Archaeology and Ethnology (MAE/UFBA). The replicas were three-dimensionally reconstructed by Raimundo [20] and 3D printed. In addition, the turtle replica was colored by hydrographic printing. + +In our experiments we used the Microsoft Kinect version 1; however, any other low-cost sensor could be used to capture depth images. This sensor is affordable and captures color and depth information at a resolution of 640 × 480 pixels. To produce point clouds from the low-cost 3D scanner, we used the Super-Resolution approach proposed by Raimundo [22] with 16 Low-Resolution (LR) depth frames. + +The photos used as input to the passive 3D reconstruction method were taken with a Redmi Note 8 camera for all evaluated models. The number of photos was chosen to maximize coverage of the object.
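Generating texture coordinates from the SFM camera poses (Sect. 3.4) amounts to projecting each mesh vertex through the estimated pinhole cameras. A minimal sketch follows; the intrinsics and pose are illustrative values, not the calibration of any camera used here, and the actual texturing is done with the algorithm of [27]:

```python
import numpy as np

def project(vertices, K, R, t):
    """Project 3D world vertices to pixel coordinates with a pinhole
    camera: x ~ K (R X + t). Returns one (u, v) row per vertex."""
    cam = vertices @ R.T + t          # world -> camera frame
    uvw = cam @ K.T                   # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Illustrative intrinsics for an 8000x6000 image: focal length 6000 px,
# principal point at the image center (hypothetical values).
K = np.array([[6000.0, 0.0, 4000.0],
              [0.0, 6000.0, 3000.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])   # camera 5 units in front
verts = np.array([[0.0, 0.0, 0.0], [0.1, -0.1, 0.0]])
uv = project(verts, K, R, t)
```

The world origin sits on the optical axis, so its projection lands on the principal point (4000, 3000); dividing each pixel coordinate by the image size would give normalized texture coordinates.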
For the SFM pipeline, the RGB images were processed using COLMAP [24] to calculate camera poses and a sparse shape reconstruction. OpenMVS [5] was used for dense reconstruction. For the texturing stage, we used the algorithm proposed by Waechter et al. [27]. + +Several software tools were developed on top of third-party libraries for various purposes. For instance, OpenCV [4] and PCL [23] were used to handle and process depth images and point clouds, and libfreenect [18] was used in the depth acquisition application to access and retrieve data from the Microsoft Kinect. The MeshLab system [7] was used for Poisson reconstruction and for adjustments to 3D point clouds and meshes when necessary. + +Table 1: Algorithms and main components of each experiment. + +
| Object | Porcelain horse | Jaguar pan | Turtle pan |
| --- | --- | --- | --- |
| Dimensions (cm) | 35 × 12 × 31 | 21.5 × 15 × 7 | 9 × 6.5 × 3.5 |
| Texture | Handmade | Predominantly white | Hydrographic printing |
| Num. of RGB images | 108 | 65 | 29 |
| RGB images resolution | 8000 × 6000 px | 4000 × 1844 px | 8000 × 6000 px |
| SFM algorithm | COLMAP [24] | COLMAP [24] | COLMAP [24] |
| MVS algorithm | OpenMVS [5] | OpenMVS [5] | OpenMVS [5] |
| Depth sensor | Kinect V1 | Kinect V1 | Kinect V1 |
| LR frames per capture | 16 | 16 | 16 |
| SR point clouds | 26 | 22 | 20 |
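The SFM/MVS rows of Table 1 are typically driven from the command line. The sketch below illustrates the usual COLMAP and OpenMVS invocations; the paths and folder layout are ours, and each tool's documentation lists the full set of options:

```shell
# Feature extraction, matching, and sparse reconstruction with COLMAP
colmap feature_extractor  --database_path database.db --image_path images
colmap exhaustive_matcher --database_path database.db
colmap mapper             --database_path database.db --image_path images \
                          --output_path sparse
# Undistort images for the dense stage, then densify with OpenMVS
colmap image_undistorter  --image_path images --input_path sparse/0 \
                          --output_path dense
InterfaceCOLMAP -i dense -o scene.mvs
DensifyPointCloud scene.mvs
```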
+ +Figures 4 and 5 show the acquisition, merging, and reconstruction steps proposed by this pipeline for the Porcelain horse and Jaguar pan. The captions also discuss the main challenges of each reconstruction and how they were handled by the pipeline. The algorithms and main components of each experiment are described in Table 1. + +The resolution of the clouds obtained by the low-cost sensor with SR is considerably lower than that of the clouds obtained by photogrammetry. This is evident in the turtle's captures and reconstructions (Fig. 6(b)), which show that the low-cost sensor has a scale limitation. However, it has the advantage of allowing new captures of the object even if it has moved in the scene. Photogrammetry also presented limitations when describing featureless regions of any object (as shown in Fig. 3 and Fig. 5(f)). This does not happen with the depth sensor, since coloring does not influence its capture. The resolution of the images used in the SFM pipeline is also a factor that directly influences the quality and detail of the 3D reconstruction. The point clouds obtained by photogrammetry were capable of representing, with good quality, distinguishable details on a millimeter scale. Merging the point clouds helped express the reconstructed objects in greater detail, taking advantage of both captures. + +The merged point clouds were down-sampled to facilitate visualization and mesh generation, since the aligned and combined point clouds may have an excessive and redundant number of vertices and there is no guarantee that the sampling density is sufficient for proper reconstruction [2]. Point clouds were meshed using the Screened Poisson Surface Reconstruction feature in MeshLab [7], using a reconstruction depth of 7 and a minimum of 3 samples.
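The down-sampling applied before meshing is not specified in detail; a common choice is a voxel-grid filter, which keeps one averaged representative point per cubic cell. A numpy sketch (function name and voxel size are illustrative):

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Reduce point density by averaging all points that fall into the
    same cubic voxel of side `voxel` (one representative per cell)."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = np.asarray(inv).reshape(-1)
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

# Redundantly sampled unit cube collapses to at most 10x10x10 cells.
rng = np.random.default_rng(2)
dense = rng.random((10000, 3))
sparse = voxel_downsample(dense, 0.1)
```

Averaging within each cell also evens out the point spacing, which matches the observation below that uniformly spaced points yield smoother Poisson meshes.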
It is important to note that mesh production is highly dependent on the parameters used to generate the surface. We adopt the Poisson Surface Reconstruction parameters defined in the previous paragraph as the standard for all reconstructions. + +For quantitative validation, the 3D surface reconstructions of the Turtle (Fig. 6) were compared with the model used for 3D printing (ground truth in Fig. 6(d)). For this comparison, we used the Hausdorff Distance tool of MeshLab [7]. The results are presented in Table 2 and graphically represented in Fig. 7. + +The same quantitative validation was carried out with the reconstructions of the Jaguar's 3D surfaces and its respective model used for 3D printing. The results are presented in Table 3; as with the turtle's Hausdorff distances, the reconstruction of the jaguar with this pipeline achieves lower maximum, minimum, and RMS values than the individual approaches. + +All objects studied benefited from the merging of point clouds, as Poisson surface reconstruction identifies and differentiates nearby geometric details, some of which are added by the merging. We noticed that when the points are uniformly spaced, the resulting mesh is smoother and more accurate. + +Table 2: Hausdorff Distances for 3D surface reconstructions of the Turtle pan. For each vertex sampled from the source mesh, the closest vertex on the ground truth is found. Values are in mesh units, relative to the diagonal of the mesh's bounding box. + +
| Mesh | MVS (Filtered) | Kinect (SR) | Merged |
| --- | --- | --- | --- |
| Samples | 17928 pts | 20639 pts | 20455 pts |
| Minimum | 0.000000 | 0.000003 | 0.000000 |
| Maximum | 0.687741 | 0.172765 | 0.124484 |
| Mean | 0.026021 | 0.028209 | 0.012780 |
| RMS | 0.082436 | 0.038791 | 0.023629 |
| Reference | Fig. 6(a) | Fig. 6(b) | Fig. 6(c) |
+ +Table 3: Hausdorff Distances for 3D surface reconstructions of the Jaguar pan. + +
| Mesh | MVS (Filtered) | Kinect (SR) | Merged |
| --- | --- | --- | --- |
| Samples | 12513 pts | 13034 pts | 13147 pts |
| Minimum | 0.000005 | 0.000002 | 0.000001 |
| Maximum | 0.750001 | 0.173569 | 0.139575 |
| Mean | 0.051147 | 0.017597 | 0.019753 |
| RMS | 0.091608 | 0.028266 | 0.026867 |
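The statistics in Tables 2 and 3 come from a one-sided Hausdorff comparison. A brute-force numpy sketch of the computation (simplified to vertex-to-vertex distances, whereas MeshLab samples against the other mesh's surface):

```python
import numpy as np

def one_sided_hausdorff(src, ref):
    """For each source vertex, distance to the closest reference vertex;
    returns (min, max, mean, rms) over those distances."""
    # Pairwise distances, brute force (fine for a few thousand points).
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2).min(axis=1)
    return d.min(), d.max(), d.mean(), np.sqrt((d ** 2).mean())

# Toy check: a copy of the reference shifted by 0.1 along z is uniformly
# 0.1 away, so all four statistics collapse to 0.1.
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
src = ref + np.array([0.0, 0.0, 0.1])
mn, mx, mean, rms = one_sided_hausdorff(src, ref)
```

Dividing the statistics by the diagonal of the reference's bounding box gives the normalized values reported in the tables.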
+ +Texturing results using surfaces from merged point clouds are shown in Fig. 8. This stage produces satisfactory results due to the high quality of the images used and to the SFM camera poses being correctly aligned and undistorted with respect to the target object. + +The images and respective poses used by the SFM system could not texture the bottom of the objects, since the bottom view was not visible. A new camera pose with an image of the bottom view was manually added to the SFM output, and texturing was re-applied to cover this angle. + +Every procedure described in this section was performed on an Avell G1550 MUV notebook with an Intel Core i7-9750H CPU @ 2.60GHz × 12, 16 GB of RAM, and a GeForce RTX 2070 graphics card, running 64-bit Ubuntu 16.04. + +## 5 CONCLUSION + +With the proposed pipeline, it is possible to combine 3D capture information, reconstructing details beyond what a single low-cost capture method initially provides. A low-cost depth sensor allows preliminary verification of data during acquisition. The Super-Resolution methodology reduces the incidence of noise and mitigates the low level of detail of depth maps acquired using low-cost RGB-D hardware. Photogrammetry, despite capturing a higher level of detail, has limitations when the object offers few geometric and visual features. + +![01963e92-3739-796c-974f-481252d47763_6_161_186_1421_297_0.jpg](images/01963e92-3739-796c-974f-481252d47763_6_161_186_1421_297_0.jpg) + +Figure 6: Screened Poisson Surface Reconstruction results for the Turtle pan point clouds. The reconstruction depth is 7 and the minimum number of samples is 3 for all experiments. In (a) the limiting factor was the bottom part of the object, which is not inferred by the photogrammetry process.
(b) shows that the low-cost depth sensor was unable to identify details of the model; this is due to the small size of the object, which makes details difficult to capture. However, this mesh was able to represent the model from all directions, including the bottom. The merged mesh (c) was able to reproduce all the small details found by photogrammetry and to include regions that were represented only by the depth-sensor captures. For comparison, (d) presents the model's ground truth used for 3D printing. + +![01963e92-3739-796c-974f-481252d47763_6_306_696_1182_440_0.jpg](images/01963e92-3739-796c-974f-481252d47763_6_306_696_1182_440_0.jpg) + +Figure 7: Hausdorff distance of the Turtle pan mesh produced by the proposed pipeline (Fig. 6(c)). For 20455 sampled vertices, the closest vertices on the ground truth were found. Minimum of 0.0 (red), maximum of 0.124484 (blue), mean of 0.012780, and RMS of 0.023629. Values are in mesh units, relative to the diagonal of the mesh's bounding box. The main limitation of the results was the bottom part, which was inferred only by the depth sensor. + +The texturing process using high-definition images from the SFM output, adding possibly missing parts when needed, also helps achieve greater visual realism in the reconstructed 3D model. + +Future research involves a quantitative analysis of the 3D reconstruction after the texturing step. We also plan to automate the alignment of point clouds using the scale-based iterative closest point algorithm (scaled PCA-ICP) and to apply this pipeline to the digital preservation of artifacts from the cultural heritage collection of the MAE/UFBA. + +## REFERENCES + +[1] M. Berger, A. Tagliasacchi, L. M. Seversky, P. Alliez, G. Guennebaud, J. A. Levine, A. Sharf, and C. T. Silva. A survey of surface reconstruction from point clouds. Computer Graphics Forum, 36(1):301-329, 2017. doi: 10.1111/cgf.12802 + +[2] F. Bernardini and H. Rushmeier. The 3d model acquisition pipeline.
Computer Graphics Forum, 21(2):149-172, 2002. doi: 10.1111/1467-8659.00574 + +[3] S. Bianco, G. Ciocca, and D. Marelli. Evaluating the performance of structure from motion pipelines. Journal of Imaging, 4(8), 2018. doi: 10.3390/jimaging4080098 + +[4] G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000. + +[5] D. Cernea. OpenMVS: Multi-view stereo reconstruction library, 2020. + +[6] H. Chen, Y. Feng, J. Yang, and C. Cui. 3d reconstruction approach for outdoor scene based on multiple point cloud fusion. Journal of the Indian Society of Remote Sensing, 47(10):1761-1772, 2019. doi: 10.1007/s12524-019-01029-y + +[7] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia. MeshLab: an Open-Source Mesh Processing Tool. In V. Scarano, R. D. Chiara, and U. Erra, eds., Eurographics Italian Chapter Conference. The Eurographics Association, 2008. doi: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/129-136 + +[8] F. Di Paola and L. Inzerillo. 3d reconstruction-reverse engineering - digital fabrication of the egyptian palermo stone using by smartphone and light structured scanner. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-2:311-318, 2018. doi: 10.5194/isprs-archives-XLII-2-311-2018 + +[9] P. Falkingham. Low cost 3d scanning using off-the-shelf video gaming peripherals. Journal of Paleontological Techniques, 11:1-9, 2013. + +[10] C. Hernández and G. Vogiatzis. Shape from Photographs: A Multiview Stereo Pipeline, pp. 281-311. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010. doi: 10.1007/978-3-642-12848-6_11 + +[11] D. Holz, A. E. Ichim, F. Tombari, R. B. Rusu, and S. Behnke. Registration with the point cloud library: A modular framework for aligning in 3-d. IEEE Robotics Automation Magazine, 22(4):110-124, Dec 2015. doi: 10.1109/MRA.2015.2432331 + +[12] A. Hosseininaveh Ahmadabadian, A. Karami, and R. Yazdan.
An automatic 3d reconstruction system for texture-less objects. Robotics + +![01963e92-3739-796c-974f-481252d47763_7_263_186_1324_719_0.jpg](images/01963e92-3739-796c-974f-481252d47763_7_263_186_1324_719_0.jpg) + +Figure 8: Texturing results using SFM camera pose estimation over Screened Poisson Reconstruction models of the merged point clouds. In addition to the objects previously evaluated, other models generated through our pipeline are presented in this figure. + +and Autonomous Systems, 117:29-39, 2019. doi: 10.1016/j.robot.2019.04.001 + +[13] Y. H. Jo and S. Hong. Three-dimensional digital documentation of cultural heritage site based on the convergence of terrestrial laser scanning and unmanned aerial vehicle photogrammetry. ISPRS International Journal of Geo-Information, 8(2), 2019. doi: 10.3390/ijgi8020053 + +[14] M. Kazhdan and H. Hoppe. Screened poisson surface reconstruction. ACM Trans. Graph., 32(3):29:1-29:13, July 2013. doi: 10.1145/2487228.2487237 + +[15] N. Mellado, D. Aiger, and N. J. Mitra. Super 4pcs fast global pointcloud registration via smart indexing. In Proceedings of the Symposium on Geometry Processing, SGP '14, p. 205-215. Eurographics Association, Goslar, DEU, 2014. doi: 10.1111/cgf.12446 + +[16] O. Muratov, Y. Slynko, V. Chernov, M. Lyubimtseva, A. Shamsuarov, and V. Bucha. 3dcapture: 3d reconstruction for a smartphone. In 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 893-900, June 2016. doi: 10.1109/CVPRW.2016.116 + +[17] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohi, J. Shotton, S. Hodges, and A. Fitzgibbon. Kinect-fusion: Real-time dense surface mapping and tracking. In 2011 10th IEEE International Symposium on Mixed and Augmented Reality, pp. 127-136, Oct 2011. doi: 10.1109/ISMAR.2011.6092378 + +[18] OpenKinect. libfreenect, 2012. + +[19] A. Prokos, G. Karras, and L. Grammatikopoulos.
Design and evaluation of a photogrammetric 3d surface scanner. hand, 2:2, 2009. + +[20] P. Raimundo. Low-cost 3d reconstruction of cultural heritage. Master's thesis, Universidade Federal da Bahia, Salvador, 2018. + +[21] P. Raimundo et al. Low-cost 3d reconstruction of cultural heritage artifacts. Revista Brasileira de Computação Aplicada, 10(1):66-75, May 2018. doi: 10.5335/rbca.v10i1.7791 + +[22] P. Raimundo et al. Improved point clouds from a heritage artifact depth low-cost acquisition. Revista Brasileira de Computação Aplicada, 12(1):84-94, Feb. 2020. doi: 10.5335/rbca.v12i1.10019 + +[23] R. B. Rusu and S. Cousins. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA). Shanghai, China, May 9-13 2011. + +[24] J. L. Schönberger and J.-M. Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016. + +[25] A. D. Sergeeva and V. A. Sablina. Using structure from motion for monument 3d reconstruction from images with heterogeneous background. In 2018 7th Mediterranean Conference on Embedded Computing (MECO), pp. 1-4, June 2018. doi: 10.1109/MECO.2018.8406058 + +[26] J. W. Silva, L. Gomes, K. A. Agüero, O. R. P. Bellon, and L. Silva. Real-time acquisition and super-resolution techniques on 3d reconstruction. In 2013 IEEE International Conference on Image Processing, pp. 2135-2139, Sep. 2013. doi: 10.1109/ICIP.2013.6738440 + +[27] M. Waechter, N. Moehrle, and M. Goesele. Let there be color! large-scale texturing of 3d reconstructions. In D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, eds., Computer Vision-ECCV 2014, pp. 836-850. Springer International Publishing, Cham, 2014. + +[28] M. Zollhöfer, C. Siegl, B. Riffelmacher, M. Vetter, B. Dreyer, M. Stamminger, and F. Bauer. Low-Cost Real-Time 3D Reconstruction of Large-Scale Excavation Sites using an RGB-D Camera. In R. Klein and P.
Santos, eds., Eurographics Workshop on Graphics and Cultural Heritage. The Eurographics Association, 2014. doi: 10.2312/gch.20141298 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/wLtLeiJNRKb/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/wLtLeiJNRKb/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..1df910d7a0ad40dcd6988b4408619a05ef10af4e --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/wLtLeiJNRKb/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,255 @@ +§ IMPROVED LOW-COST 3D RECONSTRUCTION PIPELINE BY MERGING DATA FROM DIFFERENT COLOR AND DEPTH CAMERAS + +Category: Research + +§ ABSTRACT + +The performance of traditional 3D capture methods directly influences the quality of digitally reconstructed 3D models. To obtain complete and well-detailed low-cost three-dimensional models, this paper proposes a 3D reconstruction pipeline using point clouds from different sensors, combining captures from a low-cost depth sensor post-processed by Super-Resolution techniques with high-resolution RGB images from an external camera, using Structure-from-Motion and Multi-View Stereo output data. The main contribution of this work is the description of a complete pipeline that improves the information acquisition stage and merges data from different sensors. Several phases of the 3D reconstruction pipeline were also specialized to improve the model's visual quality. The experimental evaluation demonstrates that the developed method produces good and reliable results for the low-cost 3D reconstruction of an object.
+ +Keywords: Low-Cost 3D Reconstruction, Depth Sensor, Photogrammetry. + +Index Terms: Computer graphics, Shape modeling, 3D Reconstruction. + +§ 1 INTRODUCTION + +3D reconstruction makes it possible to capture the geometry and appearance of an object or scene, allowing us to inspect details without risk of damage, measure properties, and reproduce 3D models in different materials [21]. In recent years, numerous advances in 3D digitization have been observed, mainly through pipelines for three-dimensional reconstruction using costly high-precision 3D scanners. In addition, recent research has sought to reconstruct objects or scenes using depth images from low-cost acquisition devices (e.g., the Microsoft Kinect sensor [17]) or using Structure from Motion (SFM) [24] combined with Multi-View Stereo (MVS) [5] on RGB images. + +Good-quality 3D reconstructions require significant financial resources, as they demand state-of-the-art equipment to capture object data with high precision and detail. On the other hand, low-resolution equipment implies a lower-quality capture, even though it is financially more viable. Even with their ease of operation, light weight, and portability, low-cost approaches must consider the limitations of the scanning equipment used [20]. + +The acquisition step of a 3D reconstruction pipeline refers to the use of devices to capture data from objects in a scene, such as their geometry and color [22]. One result of 3D geometry capture is a collection of discrete points that represents the model's shape, called a point cloud. The data obtained in this step is used in all other phases of the 3D reconstruction process [2]. + +Active capture methods use equipment such as scanners to infer object geometry through a beam of light, inside or outside the visible spectrum. Scanner sensors have the advantages of fast measuring speed, robustness to external factors, and ease of acquiring information.
Active sensors also perform well when reconstructing texture-less and featureless surfaces [6, 22]. The sensors need to be sensitive to small variations in the acquired information, since for small differences in distance the variation in the time the light takes to reach two different points is very low, requiring low equipment latency and a good response time. For this reason, these systems tend to be slightly noisy [21]. For low-cost reconstruction approaches, the difficulty of capturing color with high precision is a disadvantage [10]. + +Passive methods are based on optical imaging techniques. They are highly flexible and work well with any modern digital camera. Image-based 3D reconstruction is practical, non-intrusive, low-cost, and easily deployable outdoors. Various properties of the images can be used to retrieve the target shape, such as material, viewpoints, and illumination. As opposed to active techniques, image-based techniques provide an efficient and easy way to acquire the color of a target object [10]. Although passive reconstructions, mainly using SFM and MVS, produce excellent results, they have limitations such as the difficulty of distinguishing the target object from the background [25], and they require the target object to have detailed geometry [6]. A controlled environment is needed to obtain better reconstruction results [12, 24]. + +Considering the limitations imposed by the presented approaches, it is important to note that a target whose geometry has been described by a single low-cost capture method can rarely be expressed completely, with rich, small details [6]. + +This paper proposes a hybrid pipeline combining a low-cost depth camera (low-resolution images) and an external color capture camera (a digital camera with high-resolution RGB images) to estimate and reconstruct the surface of an object and apply a high-quality texture.
The limitations of each data acquisition approach are bypassed, generating a complete and well-detailed replica of the target model with high visual quality. To achieve this, this project uses a variation and combination of Structure from Motion, Multi-View Stereo, and depth camera capture techniques. + +Although there are mature projects aimed at low-cost 3D reconstruction, few describe step by step how to overcome the limitations of low-cost three-dimensional data capture by using the best features in all phases of the pipeline to obtain a model that is as realistic as possible. The main contribution of this work is the description of a complete pipeline that makes use of post-processed depth captures and merges data from different sensors, in which the depth sensor data and high-resolution color images do not need to be synchronized. + +As it is a post-processing task (performed after capturing/estimating depth data), this work also includes the detection of the region of interest, based on the average distance of the scene, removing points not belonging to the target object, and allows the inclusion of new images containing regions of the target object not previously photographed, improving the texturing step's results. + +In addition to this introductory section, this work is organized as follows: Section 2 presents related work, while Section 3 describes the proposed pipeline. The experiments and evaluation of the pipeline are presented in Section 4. Finally, Section 5 discusses the final considerations and the results achieved by this research. + +§ 2 RELATED WORK + +Prokos et al. [19] proposed a hybrid approach combining shape from stereo (with additional geometric constraints) and laser scanning techniques. Using two cameras and a portable laser beam, they achieved accuracy as good as some high-end laser triangulation scanners. However, their method does not automatically detect outliers in its results.
+ +The KinectFusion system [17] tracks the pose of a portable depth camera (Kinect) as it moves through space and performs good three-dimensional surface reconstructions in real time. The Kinect sensor has considerable limitations, including temporal inconsistency and the low resolution of the captured color and depth images [22]. Real-time reconstruction is not a requirement for well-detailed, accurate, and complete reconstructions. + +Silva et al. [26] provide a guided reconstruction process using Super-Resolution (SR) techniques, helping to increase the quality of the low-resolution data captured with a low-cost sensor. This method of data acquisition using low-cost depth cameras and SR was further improved by Raimundo [22]. Even with depth image improvements, poor registration of captures can affect the final model's shape. + +Falkingham [9] demonstrates the potential applications of low-cost technology in the field of paleontology. The Microsoft Kinect was used to digitize specimens of various sizes, and the resulting digital models were compared with models produced using SFM and MVS. The work pointed out that although the Kinect generally registers morphology at a lower resolution, capturing less detail than photogrammetry techniques, it offers advantages in the speed of data acquisition and in the generation of the 3D mesh, completed in real time during data capture. However, that work did not use Super-Resolution to improve the captures from low-cost devices, and the models produced by the Kinect lack any color information. + +Zollhöfer et al. [28] used a Kinect sensor to capture the geometry of an excavation site and took advantage of a topographic map to deform the reconstructed model, significantly increasing the quality of the scene. The global deformation, with Super-Resolution techniques applied to raw scans, significantly increased the fidelity and realism of the results, but the approach is too specialized for large-scale scenes.
+ +Paola and Inzerillo [8], in order to digitally reproduce the Egyptian Palermo Stone, proposed a method combining a structured-light scanner, smartphones, and SFM to apply texture to the highly accurate mesh generated by the scanner. The main challenges were the dark color of the material and the shallowness of the grooves of the hieroglyphs, which some capture approaches have difficulty recognizing. The level of detail of the applied texture proved quite accurate. This reference work used a high-resolution 3D scanner and did not aim at low-cost reconstruction. + +Jo and Hong [13] use a combination of terrestrial laser scanning and Unmanned Aerial Vehicle (UAV) photogrammetry to build a three-dimensional model of the Magoksa Temple in Korea. The scans were used to acquire the perpendicular geometry of buildings and locations, being aligned and merged with the photogrammetry output to produce a hybrid point cloud. The photogrammetry adds value to the 3D model, complementing the point cloud with the upper parts of buildings, which are difficult to acquire through laser scanning. + +Chen [6] proposes a registration method that combines the data of a laser scanner and photogrammetry to reconstruct a real outdoor 3D scene, greatly increasing accuracy and convenience of operation. The two sensors can work independently; the method fuses their data even when they are at different scales. Mesh reconstruction and texturing were not explored in this work. + +Raimundo et al. [21] point out in their bibliographic review several studies that successfully used advanced rendering techniques such as global illumination, ambient occlusion, normal mapping, shadow baking, per-vertex lighting, and level of detail. These rendering techniques also improve the final presentation of 3D reconstructions.

## 3 PIPELINE PROPOSAL

To overcome the limitations of the low-cost three-dimensional data acquisition process, the following pipeline is proposed: capture of depth and color images (using a low-cost depth sensor and a digital camera); generation of point clouds from the low-cost RGB-D camera depth images (using SR techniques [22]); shape estimation from the RGB images (using SFM [24] and MVS [5]); merging of the data from these different capture techniques; mesh generation; and texturing with high-quality photos (Fig. 1). Several phases of the pipeline were specialized to achieve better accuracy and visual quality in 3D reconstructions of small- and medium-scale objects. The proposed pipeline works offline.

Figure 1: Schematic diagram of the proposed pipeline and the 3D reconstruction process for an object.

### 3.1 DATA ACQUISITION

For capture with a low-cost depth sensor, the following acquisition protocol is established: take several depth captures, moving the sensor around the object, and define the limits of the capture volume. Furthermore, a turntable can be used to obtain a more controlled capture and alignment process. The number of views captured is smaller than in real-time approaches because of the additional processing required to ensure the quality of each capture. Given the quality requirements of this work, an interactive tool [20] is used to acquire the raw data from the depth sensor (Fig. 2).

The quality of the depth capture results is directly proportional to the quality of the device's captures, that is, to a low incidence of noise and an accurate inferred depth. With this in mind, each depth image goes through a filtering step with the application of Super-Resolution [22]. To provide high-resolution information beyond what is possible with a specific sensor, several low-resolution captures are merged, recreating as much detail as possible.
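
The SR step merges several low-resolution depth captures into one higher-resolution map. As a much-simplified, hypothetical stand-in for the method of [22] (which registers and fuses frames far more carefully), the sketch below upsamples already-aligned frames and takes a per-pixel median to suppress sensor noise; all data and the `fuse_depth_frames` helper are illustrative only:

```python
import numpy as np

def fuse_depth_frames(frames, scale=2):
    """Fuse aligned low-resolution depth frames into one higher-resolution
    map: nearest-neighbour upsample each frame, then take the per-pixel
    median, ignoring invalid (zero) readings."""
    ups = []
    for f in frames:
        up = np.repeat(np.repeat(f, scale, axis=0), scale, axis=1).astype(float)
        up[up == 0] = np.nan              # treat 0 as "no reading"
        ups.append(up)
    fused = np.nanmedian(np.stack(ups), axis=0)
    return np.nan_to_num(fused)           # pixels missing everywhere -> 0

# 16 noisy low-resolution captures of a flat plane at 800 mm
rng = np.random.default_rng(0)
truth = np.full((4, 4), 800.0)
frames = [truth + rng.normal(0.0, 5.0, truth.shape) for _ in range(16)]
fused = fuse_depth_frames(frames)
print(fused.shape)  # (8, 8)
```

The median is robust against the outlier readings common in low-cost depth sensors; the actual SR technique in [22] goes well beyond this toy fusion.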

To add 3D information in greater detail and to enable a simple high-quality texturing process, photographs are taken with a digital camera around the target object. In our pipeline, these captures are independent of the depth sensor: it is enough to photograph the fixed object while moving the camera freely. The set of images must cover most of the object's surface, and the images must portray, in pairs, common parts of it. The color images are used in the SFM pipeline.

The SFM pipeline detects characteristics in the images (feature detection), mapping these characteristics between images and finding descriptors capable of representing a distinguishable region (feature matching). These descriptors become vertices of the 3D scene reconstruction (sparse reconstruction). The greater the number of matches found between the images, the greater the accuracy of the 3D transformation matrix computed between them, providing the estimation of the relative positions of the camera poses [3, 10].

Figure 2: Software to acquire and process depth images. The slider controls the capture limits (in millimeters) and the cut limits (in pixels), effectively determining the capture volume.

Photographs with good resolution and objects with a higher level of detail tend to bring greater precision to the photogrammetry algorithms. For objects with fewer details and features, the environment can be used to achieve better results [24]. Besides using the estimated structure to improve the geometry captured by the depth sensor, we use the estimated camera poses to apply texture easily and directly over the final model's surface.

The Multi-View Stereo process is used to improve the point cloud obtained by SFM, resulting in a dense reconstruction.
As the camera parameters, such as position, rotation, and focal length, are already known from SFM, MVS computes 3D vertices in regions not detected by the descriptors. Multi-View Stereo algorithms generally have good accuracy, even with few images [10].

To isolate the target object in the resulting image-based point cloud, a region-of-interest detection step can be used. A simple algorithm detects the centroid of the set of 3D points and removes points beyond a radius from it. If the floor below the object is discernible, a planar segmentation algorithm can also be used to remove the plane. A statistical removal algorithm can additionally remove outliers. If even more accurate outlier removal is required, a manual process using a user-interface tool can be performed. Most of the discrepancies and the background are removed by the proposed steps, minimizing manual working time.

Although image-based 3D reconstructions capture greater detail than low-cost depth sensors [9], this approach may not reconstruct the object completely (Fig. 3). This commonly happens when the captures do not fully cover the target model, or when it lacks distinguishable texture or detail.

Figure 3: Some parts of the surface may not be estimated by the photogrammetry process. In (a), the white and smooth paint of the object (b) prevents the MVS algorithm from obtaining enough points to define this part of the model's structure, leaving this featureless surface region with a lower density of points than others.

The algorithms used in the next steps require oriented input data; thus, the normals of the point clouds are estimated before the alignment step. A k-nearest-neighbor normal estimation algorithm is used for this task.
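
The region-of-interest steps described above (centroid-radius cropping and statistical outlier removal) are provided by libraries such as PCL in practice; the following is only a minimal brute-force numpy sketch of the same two ideas, with hypothetical data and parameter values:

```python
import numpy as np

def crop_by_centroid(points, radius):
    """Keep only points within `radius` of the cloud centroid."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return points[d <= radius]

def remove_statistical_outliers(points, k=8, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    the cloud-wide mean + std_ratio * std (brute-force O(n^2) version)."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    dist.sort(axis=1)
    mean_knn = dist[:, 1:k + 1].mean(axis=1)   # column 0 is self-distance 0
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

rng = np.random.default_rng(1)
cloud = rng.normal(0.0, 0.1, (200, 3))         # dense object points
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])  # one far background outlier
kept = remove_statistical_outliers(crop_by_centroid(cloud, 2.0), k=8)
print(len(kept) < len(cloud))  # True
```

The centroid crop discards the far outlier, and the statistical filter then prunes stray points in the tail of the nearest-neighbour distance distribution.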

### 3.2 ALIGNMENT

To align the point clouds from the acquisition phase, transformations are applied to place all captures in a global coordinate system. This alignment is usually performed in coarse and fine alignment steps.

To perform the initial alignment between the point clouds obtained by the depth sensor, we use global alignment algorithms in which pairs of three-dimensional captures are roughly aligned [15]. Given the initial alignment between the captured views, the Iterative Closest Point (ICP) algorithm [11] is executed to obtain a fine alignment. After pairwise incremental registration, an algorithm for global minimization of the accumulated error is run.

The initial alignment step may not produce good results due to the nature of the depth data, such as the low number of discernible points shared between two point clouds [20], so the registration may drift. With this in mind, we use the point cloud obtained by photogrammetry as an auxiliary reference to apply a new alignment over the depth sensor's point clouds, adjusting the transformations, distributing the accumulated error between consecutive alignments and the loop closure, and improving the global registration and the quality of the aligned point cloud.

Figure 4: Porcelain horse. Given the richness of detail in this object, as in the head and saddle, we use the photogrammetry method to capture those parts at the highest level of detail. At the same time, the object has few features in its predominantly smooth regions, such as the base of the structure and the body of the animal, so there we use the depth-sensor capture approach, where this factor does not influence the 3D acquisition process. The data captured by the low-cost depth sensor added information where there are few visible features, as can be seen at the base and legs of the horse.
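
The fine alignment relies on ICP [11]; below is a naive numpy sketch of point-to-point ICP (nearest-neighbour matching plus an SVD rigid fit), far simpler than the library implementation a real pipeline would use, on synthetic data:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t with R @ src_i + t ~= dst_i for matched pairs
    (Kabsch/SVD solution)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Naive point-to-point ICP: match each source point to its nearest
    target point, refit the rigid transform, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_rigid_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(2)
target = rng.uniform(-1.0, 1.0, (100, 3))
theta = 0.05                                # small known misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.02, -0.01, 0.015])
aligned = icp(source, target)
print(np.abs(aligned - target).max())
```

Because ICP only refines an existing estimate, it depends on the coarse global alignment described above to start close enough to the correct pose.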

The point cloud generated by the image-based 3D reconstruction pipeline and the one obtained from the depth-sensor captures are created from different image spectra and commonly have different scales. The point clouds obtained with the depth sensor must be aligned with the corresponding points of the object in the photogrammetry point cloud.

As the depth-sensor captures are already in a global coordinate system, to carry out this alignment it is sufficient to scale and transform a single capture to fit the cloud obtained by MVS and then apply the same transformation to the others, speeding up the registration process. After that, the ICP algorithm can be reapplied, now including the photogrammetry output point cloud. This last point cloud is not transformed; only the other captures are aligned to it, because the camera positions that we later use for texturing are expressed in this model's coordinate system.

The merging of the point clouds from both data capture approaches increases the information that defines the object's geometry. The resulting point cloud is used in the next steps of the pipeline.

### 3.3 SURFACE RECONSTRUCTION

The mesh generation step performs surface reconstruction, a process in which a continuous 3D surface is inferred from a collection of discrete points that sample its shape [1].

For this step, we use the Screened Poisson Surface Reconstruction algorithm [14]. This algorithm seeks a surface whose gradient best matches the normals of the vertices of the input point cloud. The choice of a parametric method for surface reconstruction is justified by its robustness and by the possibility of using numerical methods to improve the results. In addition, the resulting meshes are almost regular and smooth.

### 3.4 TEXTURE SYNTHESIS

Applying textures to reconstructed 3D models is one of the keys to realism [27].
High-quality texture mapping aims to avoid seams, smoothing the transition between an image used for texturing and its adjacent one [16].

The texture synthesis phase of the proposed pipeline combines the high-resolution pictures captured with an external digital camera with the integrated model obtained from the previous step of the pipeline.

Figure 5: Jaguar pan replica. Even with some visual characteristics produced by the 3D printing process, the object has very few distinguishable features because of its predominantly white texture. This makes reconstruction by SFM and MVS difficult, so we use the environment to assist in detecting the positions and orientations of the cameras. The depth-sensor captures added information on the jaguar's legs and belly (bottom) that was not acquired by photogrammetry.

The high-resolution photos taken with a digital camera, together with the poses calculated using SFM, are used to generate the model's texture coordinates and atlas, avoiding a time-consuming manual process.

The images with their respective poses from SFM may not be able to texture faces not visible in any image used for the reconstruction, leaving non-textured surfaces on the three-dimensional model. To overcome this limitation, we apply texture in a post-process, merging the camera poses resulting from SFM with new photos, and calculating the new poses in the coordinate system of the photogrammetry result.

## 4 EXPERIMENTS AND EVALUATION

For evaluation, we ran the proposed pipeline on objects of varying size and complexity: a porcelain horse-shaped object ("Porcelain horse", Fig. 4) and jaguar- and turtle-shaped clay pan replicas ("Jaguar pan", Fig. 5, and "Turtle pan", Fig. 6, respectively).
The remaining objects used in this study are replicas of cultural objects from the Waurá tribe and belong to the collection of the Federal University of Bahia's Brazilian Museum of Archaeology and Ethnology (MAE/UFBA). The replicas were three-dimensionally reconstructed by Raimundo [20] and 3D printed. In addition, the turtle replica was colored by hydrographic printing.

In our experiments we used the Microsoft Kinect version 1; however, any other low-cost sensor could be used to capture depth images. This sensor is affordable and captures color and depth information at a resolution of 640 × 480 pixels. To produce point clouds from the low-cost 3D scanner, we used the Super-Resolution approach proposed by Raimundo [22] with 16 Low-Resolution (LR) depth frames.

The photos used as input to the passive 3D reconstruction method were taken with a Redmi Note 8 camera for all evaluated models. The number of photos was chosen freely, aiming to maximize coverage of the object. For the SFM pipeline, the RGB images were processed using COLMAP [24] to calculate camera poses and the sparse shape reconstruction. OpenMVS [5] was used for dense reconstruction. For the texturing stage, we used the algorithm proposed by Waechter et al. [27].

Several software tools were built on third-party libraries for various purposes. For instance, OpenCV [4] and PCL [23] were used to handle and process depth images and point clouds, and libfreenect [18] was used in the depth acquisition application to access and retrieve data from the Microsoft Kinect. The Meshlab system [7] was used for Poisson reconstruction and for adjustments to 3D point clouds and meshes when necessary.

Table 1: Algorithms and main components of each experiment.

| Object | Porcelain horse | Jaguar pan | Turtle pan |
| --- | --- | --- | --- |
| Dimensions (cm) | 35 × 12 × 31 | 21.5 × 15 × 7 | 9 × 6.5 × 3.5 |
| Texture | Handmade | Predominantly white | Hydrographic printing |
| Num. of RGB images | 108 | 65 | 29 |
| RGB image resolution | 8000 × 6000 px | 4000 × 1844 px | 8000 × 6000 px |
| SFM algorithm | COLMAP [24] | COLMAP [24] | COLMAP [24] |
| MVS algorithm | OpenMVS [5] | OpenMVS [5] | OpenMVS [5] |
| Depth sensor | Kinect V1 | Kinect V1 | Kinect V1 |
| LR frames per capture | 16 | 16 | 16 |
| SR point clouds | 26 | 22 | 20 |

Figures 4 and 5 show the acquisition, merging, and reconstruction steps of the proposed pipeline for the Porcelain horse and the Jaguar pan. The figures also discuss the main challenges of each reconstruction and how the pipeline handled them. The algorithms and main components of each experiment are listed in Table 1.

The resolution of clouds obtained with the low-cost sensor and SR is considerably lower than that of clouds obtained by photogrammetry. This is evident in the turtle's captures and reconstructions (Fig. 6(b)), which show that the low-cost sensor has a scale limitation. However, it has the advantage of allowing new captures of the object even if it has moved in the scene. Photogrammetry, in turn, showed limitations when trying to describe featureless regions of any object (as shown in Fig. 3 and Fig. 5(f)). This does not happen with the depth sensor, since coloring does not influence its capture. The resolution of the images used in the SFM pipeline is also a factor that directly influences the quality and detail of the 3D reconstruction. The point clouds obtained by photogrammetry were capable of representing distinguishable details at a millimeter scale with good quality. The merging of the point clouds helped express the reconstructed objects in greater detail, taking advantage of both capture methods.
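
At its simplest, merging the two aligned clouds is a concatenation; because the combined cloud then contains redundant vertices, a voxel-grid down-sample (averaging the points that share a voxel) is a common way to thin it. A minimal numpy sketch, with hypothetical data in place of real captures:

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Replace all points falling in the same cubic voxel of side `voxel`
    by their average, removing redundant vertices."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    counts = np.bincount(inv).astype(float)
    out = np.empty((len(counts), 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(4)
depth_cloud = rng.uniform(0.0, 0.1, (500, 3))    # aligned depth-sensor cloud
photo_cloud = rng.uniform(0.0, 0.1, (2000, 3))   # aligned photogrammetry cloud
merged = np.vstack([depth_cloud, photo_cloud])   # naive merge
reduced = voxel_downsample(merged, voxel=0.02)
print(len(merged), len(reduced))
```

Averaging within voxels keeps the points roughly evenly spaced, which also benefits the Poisson reconstruction step.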

The merged point clouds were down-sampled to facilitate visualization and mesh generation, since the aligned and combined point clouds may have an excessive and redundant number of vertices, and there is no guarantee that the sampling density is adequate for a proper reconstruction [2]. Point clouds were meshed using the Screened Poisson Surface Reconstruction feature of Meshlab [7] with reconstruction depth 7 and a minimum of 3 samples. It is important to note that mesh production depends strongly on the parameters used to generate the surface; we adopt the Poisson Surface Reconstruction parameters defined in this paragraph as the standard for all reconstructions.

For quantitative validation, the 3D surface reconstructions of the Turtle pan (Fig. 6) were compared with the model used for 3D printing (ground truth in Fig. 6(d)). For this comparison, we used the Hausdorff Distance tool of Meshlab [7]. The results are discussed in Table 2 and represented graphically in Fig. 7.

The same quantitative validation was carried out for the reconstructions of the Jaguar pan's 3D surfaces and its respective model used for 3D printing. The results are presented in Table 3; as with the turtle's Hausdorff distances, the jaguar reconstruction produced by this pipeline achieves lower maximum and RMS values than the individual approaches.

All objects studied benefited from the merging of point clouds, since Poisson surface reconstruction identifies and differentiates nearby geometric details, some of which are added by the merging. We also noticed that when the points are evenly spaced, the resulting mesh is smoother and more accurate.

Table 2: Hausdorff Distances for 3D surface reconstructions of the Turtle pan. Each vertex sampled from the source mesh is matched to the closest vertex on the ground truth. Values are in mesh units, relative to the diagonal of the mesh's bounding box.

| Mesh | MVS (Filtered) | Kinect (SR) | Merged |
| --- | --- | --- | --- |
| Samples | 17928 pts | 20639 pts | 20455 pts |
| Minimum | 0.000000 | 0.000003 | 0.000000 |
| Maximum | 0.687741 | 0.172765 | 0.124484 |
| Mean | 0.026021 | 0.028209 | 0.012780 |
| RMS | 0.082436 | 0.038791 | 0.023629 |
| Reference | Fig. 6(a) | Fig. 6(b) | Fig. 6(c) |

Table 3: Hausdorff Distances for 3D surface reconstructions of the Jaguar pan.

| Mesh | MVS (Filtered) | Kinect (SR) | Merged |
| --- | --- | --- | --- |
| Samples | 12513 pts | 13034 pts | 13147 pts |
| Minimum | 0.000005 | 0.000002 | 0.000001 |
| Maximum | 0.750001 | 0.173569 | 0.139575 |
| Mean | 0.051147 | 0.017597 | 0.019753 |
| RMS | 0.091608 | 0.028266 | 0.026867 |

Texturing results using surfaces from the merged point clouds are shown in Fig. 8. This stage is satisfactory thanks to the high quality of the images used and to the camera positions, correctly aligned and undistorted with respect to the target object from the SFM results.

The images and poses used by the SFM system were not able to texture the bottom of the objects, since the bottom view was not visible in any image. A new camera pose with an image of the bottom view was manually added to the SFM output, and the texturing was re-applied to cover this angle.

Every procedure described in this section was performed on an Avell G1550 MUV notebook, Intel Core i7-9750H CPU @ 2.60GHz x 12, 16GB of RAM, GeForce RTX 2070 graphics card, running Ubuntu 16.04 64-bit.

## 5 CONCLUSION

With the proposed pipeline, it is possible to aggregate 3D capture information, reconstructing details beyond what a single low-cost capture method initially provides. A low-cost depth sensor allows preliminary verification of the data during acquisition. The Super-Resolution methodology reduces the incidence of noise and mitigates the low level of detail of depth maps acquired with low-cost RGB-D hardware.

Photogrammetry, despite capturing a higher level of detail, has limitations when features, whether geometric or textural, are scarce.

Figure 6: Screened Poisson Surface Reconstruction results for the Turtle pan point clouds. The reconstruction depth is 7 and the minimum number of samples is 3 for all experiments. In (a), the limiting factor was the bottom part of the object, which is not inferred by the photogrammetry process. (b) shows that the low-cost depth sensor was unable to identify details of the model; this is due to the small size of the object, which makes it difficult to capture details. However, this mesh was able to represent the model in all directions, including the bottom. The merged mesh (c) reproduces all the small details found by photogrammetry and includes regions that were represented only by the depth-sensor captures. For comparison, (d) presents the ground-truth model used for 3D printing.

Figure 7: Hausdorff Distance of the Turtle pan mesh produced by the proposed pipeline (Fig. 6(c)). 20455 sampled vertices were matched to the closest vertices on the ground truth. Minimum of 0.0 (red), maximum of 0.124484 (blue), mean of 0.012780, and RMS of 0.023629. Values are in mesh units, relative to the diagonal of the mesh's bounding box. The main limitation of the results was the bottom part, which was inferred only by the depth sensor.

The texturing process, using high-definition images with poses from the SFM output and adding possible missing parts when needed, also helps achieve greater visual realism in the reconstructed 3D model.

Future research includes a quantitative analysis of the 3D reconstruction after the texturing step. We also plan to automate point-cloud alignment using a scale-aware iterative closest point algorithm (scaled PCA-ICP) and to apply this pipeline to the digital preservation of artifacts from the cultural heritage of the MAE/UFBA.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/z66fCE6_Ja0/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/z66fCE6_Ja0/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..1c1a437a86b09fdf6d66243dd81e6f14191d5e2f --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/z66fCE6_Ja0/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,333 @@ +# Artistic Recoloring of Image Oversegmentations + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_0_229_347_1336_250_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_0_229_347_1336_250_0.jpg) + +Figure 1: Recolorization of region-based abstraction. The figure shows the original image on the left and the recolored images produced by our proposed method using three different palettes. + +## Abstract + +We propose a method to assign vivid colors to regions of an oversegmented image. We restrict the output colors to those found in an input palette, and seek to preserve the recognizability of structure in the image. Our strategy is to match the color distances between the colors of adjacent regions with the color differences between the assigned palette colors; thus, assigned colors may be very far from the original colors, but both large local differences (edges) and small ones (uniform areas) are maintained. We use the widest path algorithm on a graph-based structure to set a priority order for recoloring regions, and traverse the resulting tree to assign colors. Our method produces vivid colorizations of region-based abstraction using arbitrary palettes. 
We demonstrate a set of stylizations that can be generated by our algorithm.

Keywords: Non-photorealistic rendering, Image stylization, Recoloring, Abstraction.

Index Terms: I.3.3 [Picture/Image Generation]; I.4.6 [Segmentation]

## 1 INTRODUCTION

Color plays an important role in image aesthetics. In representational art, artists employ colors that match the perceived colors of objects in the depicted scene. Conversely, abstraction provides more freedom to manipulate colors. Figure 2 shows a modern vector illustration of an owl, an example of Fauvism by André Derain, and an abstraction of the Eiffel Tower by Robert Delaunay. The artists express the image content through vivid colors carefully assigned to the various image regions. We aim for a method that can generate colorful images, recoloring an image using an arbitrary input palette. In this paper, we describe a method for recoloring an image based on a subdivision of the image into distinct segments, recoloring each segment based on its relationship with its neighbours.

Our goal is to present different recoloring possibilities by assigning colors to each region of an oversegmented input image. We aim to maintain the image contrast and preserve strong edges so that the content of the scene remains recognizable. It is important to convey textures and small features, often too delicate to be preserved by existing abstraction methods. We would also like to be able to create wild and vivid abstractions through the use of unusual palettes.

Manual recoloring of oversegmented images would be tedious: the images contain hundreds or thousands of segments, and clicking on every segment would take a long time, even leaving aside the cognitive and interaction overhead of making selections from the palette. We provide an automatic assignment of colors to regions.
The assignment can be used as is, can serve in a fast manual assessment loop (for example, if an artist wanted to choose a suitable palette for coloring a scene), or can be a good starting point for a semi-automatic approach in which a user makes minor modifications to the automated results.

In this paper, we present an automatic recoloring approach for region-based abstraction. The input is a desired palette and an oversegmented image. The method assigns a color from the palette to each region; it is based on the widest path algorithm [17], which organizes the regions into a tree based on the weights of the edges connecting them. We use color differences between adjacent regions both to order the regions and to select colors, trying to match the magnitude of the differences between the assigned palette colors to that between the original region colors. The use of color differences allows structures in the image to remain recognizable despite breaking the link between the original and depicted colors.

Our main contributions are as follows:

- We designed a recoloring method for an oversegmented image, creating multiple abstractions colored with just one palette. Our method creates wild and high-contrast images.

- Various styles can be created by our method. We experiment with color blending between regions to produce smooth images. By reducing the number of palette colors, we simplify the recolored images. Moreover, we generate new colorings from a palette by applying different metrics and color spaces.

The remainder of the paper is organized as follows. In Section 2, we briefly present related work. We describe our algorithm in Section 3. Section 4 shows results and provides some evaluation, and Section 5 gives some possible variations of the method. Finally, we conclude in Section 6 and suggest directions for future work.

![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_0_938_1792_696_219_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_0_938_1792_696_219_0.jpg)

Figure 2: Colorful representational images. A modern vector illustration of an owl; a Fauvist painting by André Derain; an abstraction of the Eiffel Tower by Robert Delaunay.

## 2 Previous Work

Although there is an existing body of work on recoloring photographs [5, 8, 10, 12, 15, 19, 22, 26, 29], and researchers have investigated colorization of non-photorealistic renderings [2, 7, 18, 23, 25, 30, 31], there is room for further exploration. Existing recoloring methods rarely address region-based abstractions. The closest approach, by Xu and Kaplan [28], used optimization to assign black or white to each region of an input image. There are also a few approaches to the pattern colorization problem, which aim to colorize graphic patterns [3, 11]. Bohra and Gandhi [3] posed the colorization problem as an optimal graph matching problem over color groups in a reference image and a grayscale template image. In generating playful palettes [6], the approximation results converge for numbers of blobs beyond six or seven. Similarly, the ColorArt method of Bohra and Gandhi [3] enforces an equal number of colors in the reference palette and the template image, and cannot find matches otherwise.

Below, we review some of the previous research on recoloring methods in NPR and on color palette selection. These approaches can be broadly classified into example-based recoloring and palette-based recoloring.

## Example based Recoloring (Color Transfer)

Recoloring methods were first proposed by Reinhard et al. [19], where the colors of one image were transferred to another. They converted the RGB signals to Ruderman et al.'s [20] perception-based color space $L\alpha\beta$.
To produce colors, they shifted and scaled the $L\alpha\beta$ space using simple statistics.

In the area of color transfer and color style transfer, Neumann and Neumann [15] extracted palette colors from an arbitrary target image and applied 3D histogram matching. They attempt to keep the original hues after the style transformation, so that all colors with the same hue in the original image have the same hue after the matching step. They performed matching transformations on 3D cumulative distribution functions belonging to the original and style images. However, due to the lack of spatial information, the histogram alone appears insufficient for true style cloning, especially given problems caused by unpredictable noise and gradient effects. They suggest using image segmentation and 3D smoothing of the color histogram to improve the result.

Levin et al. [10] introduced an interactive colorization method for black-and-white images. They used a quadratic cost function derived from the color differences between a pixel and its weighted-average neighborhood colors. By scribbling, the user indicates the desired color in the interior of a region instead of tracing out its precise boundary. The colors then propagate automatically to the remaining pixels in the image sequence. However, the method sometimes fails at strong edges due to a sensitive scale parameter. Inspired by Levin et al. [10], Yatziv and Sapiro [29] proposed a fast colorization of images based on the concepts of luminance-weighted chrominance blending and fast intrinsic geodesic distance computations. Sykora et al. [25] developed a tool called LazyBrush for the colorization of cartoons, which integrates textures into the images to create 3D-like effects [24], while Casaca et al. [4] used Laplacian coordinates for image division and a color theme for fast colorization. Fang et al.
[7] proposed an interactive optimization method for the colorization of hand-drawn grayscale images. To maintain smooth color transitions and control color overflow in textured areas, they used a smooth feature map to adjust the feature vectors.

A few works in NPR composite colors for recoloring. The compositing can be applied using alpha blending [13] or the Kubelka-Munk (KM) equation [9]. For paint-like effects, NPR researchers have mostly worked with the physics-based Kubelka-Munk equation to predict the reflectance of a layer of pigment. Some researchers have tried to mimic the artist's actual decisions in mixing colors to generate palettes. For example, the data-driven color compositing framework by Lu et al. [12] derived three models based on optimized alpha blending, RBF interpolation, and KM optimization to improve the prediction of composited colors. Later, the KM pigment-based model was used for recoloring styles such as watercolor painting [2]. Compositing with the KM model leaves traces of overlapping stroke layers, which can produce near-natural painting effects.

## Palette based Recoloring

Interest in colorization and recoloring methods opened up new research on color palettes, beyond simple approaches like averaging the colors of the active regions [8]. In many recoloring methods, a user scribbles each region of the image with a color, and the algorithm does the rest of the work. Early methods to select color palettes used Gaussian mixture models or k-means to cluster the image pixels. Chang et al. [5] introduced a photo-recoloring method driven by user-modified palettes. To select the color palettes, they compute k-means clustering on the image colors and then discard the black entry. Tan et al. [26] proposed a technique to decompose an image into layers to extract the palette colors. Each layer of the decomposition represents a coat of paint of a single color applied with varying opacity throughout the image.
To determine a color palette capable of reproducing the image, they analyzed the image geometrically in RGB space using a simplified convex hull. + +More recent palette-generation methods offer greater flexibility for editing, such as Playful Palette [23], a set of color blobs that blend together to create gradients and gamuts. The editable palette keeps a history of previous palettes. DiVerdi et al. [6] also proposed an approximation of image colors based on the Playful Palette. In this technique, an objective function within an optimization framework minimizes the distance between the original image and the one recolored with palette colors, based on a self-organizing map. The approximation algorithm is an order of magnitude faster than Playful Palette [23], but the quality is lower due to small amounts of shrinkage caused by both the self-organizing map and the clustering step. + +A limited number of works in the literature assign colors to regions. For example, Qu et al. [18] proposed a colorization technique for black and white manga using the Gabor wavelet filter. A user scribbles on the drawing to connect the regions; the algorithm then assigns colors to different hatching, halftoning, and screening patterns. Vector art algorithms are used to recolor distinct regions, and are more comparable to our recoloring method. Xu and Kaplan introduced artistic thresholding [28], in which a graph data structure is built over the segmentation of a source image. They employed an energy function to measure the quality of different black and white colorings of the segments. However, their method fails to show high-level features that cross through the foreground objects. Lin et al. [11] proposed a palette-based recoloring method with a probabilistic model. They learn and predict the distribution of properties such as saturation, lightness, and contrast for individual regions and neighboring regions.
They then score pattern colorings using the predicted distributions and the color compatibility model of O'Donovan et al. [16]. Bohra and Gandhi [3] proposed an exemplar-based colorization algorithm for grayscale graphic arts that works from a reference image, based on a color graph and a composition matching method. They retrieve palettes using the spatial features of the input image. They aim to preserve the artist's intent in the composition of different colors and the spatial adjacency between these colors in the image. + +## 3 RECOLORING ALGORITHM + +We present a recoloring algorithm that automatically assigns colors to regions of an oversegmented image. Our system takes as input a set of regions and a palette containing a set of colors, and assigns a color to each region. The recolored image should convey recognizable objects. Edges are essential to the visibility of structures: neighboring regions will be assigned distinct colors to express an edge, while regions of similar colors will be assigned similar colors. The human visual system is sensitive to brightness contrast; to help preserve contrast in our recoloring, we take into account the regions' relative luminance changes when selecting region colors. + +We take the strategy of emphasizing color differences between regions, seeking to match the original color distance between adjacent regions without preserving the colors themselves. Neighboring regions with large differences will be assigned distant colors, preserving the boundary, while similar-colored regions will be assigned similar output colors or even the same color. + +We use a graph structure to organize the segmented image, where each region is a node and edges link adjacent regions. To simplify the color decisions, we construct a tree over the graph, with a tree traversal assigning colors to nodes based on the decision made for the parent node.
We therefore assign weights to the edges reflecting their priority order, with large regions, regions of very similar color, and regions of very different color receiving high priority. Small regions and regions with intermediate color differences receive lower priority. Once weights have been assigned, we find the tree within the graph that maximizes the minimum edge weight along the path from the root to each node, a construction that corresponds to the widest path problem. + +Our algorithm has two main steps. First, we create a tree by applying the widest path algorithm [17] on the region graph. Then we assign colors to regions by traversing the tree beginning from its root. For each region in the graph, we choose the color from the palette that best matches the color difference with its parent. Before starting, we apply histogram matching between the regions' color differences and the palette color differences. The histogram matching allows us to best convey the image content while using the full extent of an arbitrary input palette, even one that has a color distribution very different from that of the input image. + +We show a flowchart of our recoloring approach in Figure 3. + +The input is (a) an oversegmented image and (b) a palette to use in the recoloring. We compute (c) the adjacency graph of the segments and aggregate the differences between adjacent regions' colors into the set $\{ Q\}$ . We also compute (e) the set of differences between palette colors, yielding $\{ {\Delta P}\}$ . The widest path algorithm (d) gives us a tree linking all nodes of the adjacency graph. We match the histogram (f) of $\{ Q\}$ to that of $\{ {\Delta P}\}$ and (g) assign colors to all regions by traversing the widest-path tree, resulting in (h) the fully recolored image. + +Before we explain the recoloring proper, we introduce the widest path problem. We will employ the widest path algorithm to create a tree over the input oversegmentation.
The tree structure ensures that regions connected by the largest edge weights are processed earlier, maintaining edge and contrast preservation in the recolored image. We will traverse the tree and assign a color to each region, matching the color difference with its parent's color to the edge's target color difference. In practice, it is convenient to combine the tree creation and traversal, since the widest-path algorithm involves a best-first traversal of the tree as it is being built. Prior to color assignment, we apply histogram matching to align the regions' color differences with the palette's for better use of palette colors. + +### 3.1 Tree Creation: The Widest Path Problem + +Pollack [17] introduced the widest path problem. Consider a weighted graph consisting of nodes and edges $G = \left( {V, E}\right)$ , where an edge $\left( {u, v}\right) \in E$ connects node $u$ to node $v$ . Let $w\left( {u, v}\right)$ be the weight, called capacity, of edge $\left( {u, v}\right) \in E$ ; capacity represents the maximum flow that can pass from $u$ to $v$ through that edge. The minimum weight among traversed edges defines the capacity of a path. Formally, the capacity $C\left( {u, v}\right)$ of a path between nodes $u$ and $v$ is given by the following: + +$$ +C\left( {u, v}\right) = \min \left( {w\left( {u, a}\right) , w\left( {a, b}\right) ,\ldots , w\left( {d, v}\right) }\right) \tag{1} +$$ + +where $w\left( {u, a}\right) , w\left( {a, b}\right) ,\ldots , w\left( {d, v}\right)$ are the edge weights along the path. The widest path between $u$ and $v$ is the path with the maximum capacity among all possible paths. + +In a single-source widest path problem, we calculate for each node $t \in V$ a value $B\left( t\right)$ , the maximum path capacity among all the paths from source $s$ to $t$ . The value $B\left( t\right)$ is the width of the node. The union of widest paths from the source to each node is a tree, which we use to order the color assignment process.
We can choose any node as the source; our implementation uses the region containing the image centre. + +The widest path algorithm can be implemented as a variant of Dijkstra's algorithm, building a tree outward from the source node $s$ to every node in the graph. All nodes of the graph $t \in V$ are given a tentative width value; the source node $s$ is assigned $B\left( s\right) = + \infty$ and all other nodes $v \neq s$ have $B\left( v\right) = - \infty$ . A priority queue holds the nodes; at each step of the algorithm, we take the node with the highest current width from the queue and process it, stopping when the queue is empty. Suppose the node $u$ is on top of the queue with width $B\left( u\right)$ ; for every outgoing edge $\left( {u, v}\right)$ , we update the value of the neighbor node $v$ as follows: + +$$ +B\left( v\right) \leftarrow \max \{ B\left( v\right) ,\min \{ B\left( u\right) , w\left( {u, v}\right) \} \} \tag{2} +$$ + +where $w$ is the edge weight between nodes $u$ and $v$ . If the value $B\left( v\right)$ changed, node $u$ is set as the parent of $v$ and $v$ is pushed into the queue. When the algorithm completes, all non-root nodes in the graph will have been assigned a single parent, thus providing a tree rooted at $s$ . + +In our application, one possibility for the edge weight is to use the difference in color values between the two regions. This would ensure that the widest path tree linked dissimilar regions, resulting in good edge preservation. However, regions of similar color could then easily be separated in the tree. We want to preserve small color distances as well, so small color differences should also yield a large edge weight. Distances intermediate between large and small are of the least importance. Hence, we base our edge weight on the difference from the median color distance.
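As an illustration, the single-source widest path construction described above can be sketched as a small Dijkstra variant. This is a minimal sketch, not the authors' implementation; the dict-of-dicts adjacency representation and the function name are our own assumptions.

```python
import heapq

def widest_path_tree(adj, source):
    """Single-source widest path via a modified Dijkstra search.

    adj: dict mapping node -> {neighbor: edge weight w(u, v)}.
    Returns (width, parent): width[t] is B(t), the best bottleneck
    capacity from the source to t; parent[t] is t's tree parent.
    """
    width = {v: float("-inf") for v in adj}
    width[source] = float("inf")           # B(s) = +infinity
    parent = {source: None}
    queue = [(-width[source], source)]     # max-heap via negated widths
    done = set()
    while queue:
        _, u = heapq.heappop(queue)
        if u in done:
            continue
        done.add(u)
        for v, w_uv in adj[u].items():
            # B(v) <- max(B(v), min(B(u), w(u, v)))   (Equation 2)
            cand = min(width[u], w_uv)
            if cand > width[v]:
                width[v] = cand
                parent[v] = u
                heapq.heappush(queue, (-cand, v))
    return width, parent
```

For instance, on a triangle with capacities w(a, b) = 4, w(b, c) = 3, and w(a, c) = 1, the widest path from a to c goes through b with bottleneck 3, so c's tree parent is b rather than a.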
+ +We calculate the color distances across each edge in the adjacency graph; call the set of color distances $Q$ , with + +$$ +Q = \left\{ {\Delta \left( {{c}_{i},{c}_{j}}\right) }\right\} = \left\{ {q}_{ij}\right\} ,\;i \neq j,\;{r}_{i},{r}_{j} \in R +$$ + +where ${c}_{i}$ and ${c}_{j}$ are the colors of regions ${r}_{i}$ and ${r}_{j}$ and $\Delta$ is the function computing the color distance. We compute the median value $\bar{q}$ of the distances in $Q$ . + +We also want to take into account the size of the region, such that larger regions have greater importance; we prefer that a larger region have higher priority and thus influence the smaller regions that are processed afterwards, rather than the converse. Depending on the oversegmentation, action may not be necessary; we suggest a process to improve results on oversegmentations with a dramatic variation in region size. + +We compute for each region a factor $b$ , the ratio of the region's size (in pixels) to the average region size. Then, when we traverse an edge, we use the $b$ of the destination region to determine the weight. In our implementation, we compute and store a single edge weight; there is no ambiguity about the factor $b$ because we only ever traverse a given edge in one direction, moving outward from the source node. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_3_168_159_1473_489_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_3_168_159_1473_489_0.jpg) + +Figure 3: Recoloring algorithm pipeline. + +To summarize: when traversing an edge, the edge weight is the distance between its target color difference and the median color difference, multiplied by the factor $\left( {1 + b}\right)$ for the destination region: + +$$ +w\left( {{r}_{i},{r}_{j}}\right) = \left( {1 + b}\right) \left| {\Delta \left( {{c}_{i},{c}_{j}}\right) - \bar{q}}\right| . 
\tag{3} +$$ + +The factor $\left( {1 + b}\right)$ takes size into account, but ensures the region's color differences can still affect the traversal order even for very small regions ($b$ near zero). Note that the function $\Delta$ depends on the colorspace used. A simple possibility is Euclidean distance in RGB, but more perceptually based color distances are possible. We discuss color distance metrics in Section 5.2. + +### 3.2 Histogram Matching + +We plan to match color differences in the output to the color differences in the input. However, the input palette can have an arbitrary set of colors, and we also want to make use of the full palette. For example, imagine a low-contrast image recolored with a palette of more varied colors. The smallest palette difference might be quite large; if so, the muted areas of the original will be matched with difference zero, resulting in loss of detail in such regions. A narrow palette recoloring a high-contrast image will have similar problems in the opposite direction. + +To adapt the palette usage to the input image color distribution, we apply histogram matching to color differences. We emphasize that we are not matching the colors themselves, but the distributions of differences. Histogram matching is applied between the region color differences (the distribution of values in $Q$ , computed in Section 3.1) and the pairwise color differences of the colors in the palette (call this dataset ${\Delta P}$ ). + +The histogram matching computes a new target color difference for each graph edge; call this target ${q}^{\prime }\left( {u, v}\right)$ for the edge linking regions $u$ and $v$ . The matching ensures that the distribution of values $\left\{ {q}^{\prime }\right\}$ is the same as the distribution of values in $\{ {\Delta P}\}$ .
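One simple way to realize this difference-distribution matching is a rank-based quantile mapping, sketched below. This is our own illustration assuming NumPy; the function name is hypothetical, and the authors' exact matching procedure may differ.

```python
import numpy as np

def match_differences(q_values, palette_diffs):
    """Map region color differences onto the palette-difference distribution.

    q_values: per-edge color differences (the set Q).
    palette_diffs: pairwise palette color differences (the set Delta-P).
    Returns q': one target difference per edge, distributed like Delta-P.
    """
    q = np.asarray(q_values, dtype=float)
    p = np.asarray(palette_diffs, dtype=float)
    # Empirical quantile of each q within Q, in [0, 1].
    ranks = np.argsort(np.argsort(q))
    quantiles = ranks / max(len(q) - 1, 1)
    # Read off the corresponding quantile of the palette-difference distribution.
    return np.quantile(p, quantiles)
```

Each edge thus receives a target difference occupying the same rank among palette differences that its original difference occupied among image differences.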
The values ${q}^{\prime }$ are then used for color assignment, selecting color pairs from the palette that correspond to the same place in the distribution: medium palette differences where medium image color differences existed, small differences where the original image color differences were small, with the largest palette differences reserved for the largest differences in the original image. + +Figure 4 shows the input image with average region colors and the recolored images before and after histogram matching. The bar graphs under each image show the proportion of each color in the image. We can see that the histogram matching used the palette colors more evenly, increasing contrast and highlighting more details. For example, strong edges on the leaf boundary became distinguishable from the nearby regions, and the markings on the lizard became more prominent. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_3_936_854_700_368_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_3_936_854_700_368_0.jpg) + +Figure 4: Histogram matching result. Left to right: original image, result without histogram matching, and result using histogram matching. + +### 3.3 Color Assignment + +The widest path algorithm provided a tree, and the histogram matching provided palette-customized target distances for the edges. We now traverse the tree and assign a color to each region along the way. We begin by assigning the closest palette color to the tree's root node; recall that the root was the most central region in the image. At each subsequent step, we assign a color from the palette $P$ to the current region $\alpha$ based on the palette color ${p}_{\beta }$ previously assigned to the parent region $\beta$ and the target color difference ${q}^{\prime }\left( {\alpha ,\beta }\right)$ . We also consider the luminance difference between regions ${r}_{\alpha }$ and ${r}_{\beta }$ so as to help maintain larger-scale intensity gradients.
Recall our intention in preserving color differences: two regions with large color differences should be assigned two very different colors, and regions with a small color difference should get very similar colors, possibly the same color. Owing to histogram matching, "large" and "small" are calibrated to the content of the particular image and palette being combined. + +We impose a luminance constraint on potential palette colors, in an effort to respect the relative ordering of the regions' luminances. Suppose the luminances of two regions ${r}_{\alpha }$ and ${r}_{\beta }$ are ${L}_{\alpha }$ and ${L}_{\beta }$ , where ${L}_{\alpha } < {L}_{\beta }$ . We then constrain the set of eligible palette colors for region $\alpha$ such that only colors ${p}_{\alpha }$ that satisfy ${L}_{{p}_{\alpha }} < {L}_{{p}_{\beta }}$ are considered. A similar constraint is imposed if ${L}_{\alpha } > {L}_{\beta }$ . + +For region ${r}_{\alpha }$ and its parent ${r}_{\beta }$ , we have the target edge difference ${q}^{\prime }\left( {\alpha ,\beta }\right)$ . Denote by ${p}_{\beta }$ the palette color already assigned to the parent region ${r}_{\beta }$ . We choose the palette color ${p}_{\alpha }$ for region ${r}_{\alpha }$ so as to minimize the distance $D$ : + +$$ +D = \left| {{q}^{\prime }\left( {\alpha ,\beta }\right) - \Delta \left( {{p}_{\alpha },{p}_{\beta }}\right) }\right| \tag{4} +$$ + +where $\Delta$ is the distance metric between two colors. The only colors considered for ${p}_{\alpha }$ are those that satisfy the luminance constraint. + +When the parent's color has the lowest or highest luminance in the palette, there may be no palette colors satisfying the luminance constraint. In such cases, the constraint is ignored and all palette colors are considered. + +Since the source region has no parent, the above process cannot be used to find its color. Instead, we assign the closest color from the palette, as determined by the difference metric $\Delta$ .
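The per-region selection with the luminance constraint can be sketched as follows. This is a simplified illustration with hypothetical helper names, not the authors' code: `lum` maps palette colors to luminance, and `dist` plays the role of the distance metric Delta.

```python
def assign_color(palette, lum, p_beta, q_target, darker, dist):
    """Choose a palette color for a region given its parent's color.

    palette: list of palette colors; lum: dict color -> luminance.
    p_beta: color already assigned to the parent region.
    q_target: histogram-matched target difference q'(alpha, beta).
    darker: True when the region is darker than its parent (L_alpha < L_beta).
    dist: color-distance function (the metric Delta).
    """
    # Luminance constraint: keep only colors on the correct side of the parent.
    if darker:
        candidates = [p for p in palette if lum[p] < lum[p_beta]]
    else:
        candidates = [p for p in palette if lum[p] > lum[p_beta]]
    if not candidates:
        # Parent has an extreme luminance: drop the constraint.
        candidates = palette
    # Minimize D = |q' - Delta(p_alpha, p_beta)|   (Equation 4)
    return min(candidates, key=lambda p: abs(q_target - dist(p, p_beta)))
```

A full traversal would call this once per non-root node, in the best-first order given by the widest-path tree.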
The source region itself is the region containing the centre of the image; while the output is weakly dependent on the choice of starting region, we do not view the starting region as a critical decision. Figure 5 shows some examples of varied outcomes from moving the starting region. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_4_195_700_631_241_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_4_195_700_631_241_0.jpg) + +Figure 5: Changing the source region locations. The starting region is indicated by a dot. + +## 4 RESULTS AND DISCUSSION + +In this section, we present some results. Figure 6 shows a variety of recolored images generated by our algorithm using various palettes. We succeeded in maintaining strong edges, and objects in the recolored abstractions remain recognizable. Our algorithm retains textures and produces vivid recolored images by selecting varied colors from the palette. It assigns the same colors over flat regions and distinct colors to illustrate structures. We ran our algorithm on a variety of images with different textures and contrasts. We obtained most of our palettes from the website COLRD (http://colrd.com/); others we created manually by sampling from colorful images. + +In Figure 6, we present a set of examples from our recoloring algorithm, generated with four different palettes. From left to right, the columns show the original image, followed by recolored images using different palettes. We chose images presenting different features. The delicate features and textures in the abstractions stay visible after recoloring, despite the input photographs having been radically altered by the recoloring. + +In the starfish image, the structure and the patterns on the arms become visible. The uniform colors of the background turn into a vivid splash of colors, emphasizing the texture of the terrain. The algorithm has chosen the darkest colors to assign to the shadows and the lightest ones to the surface of the creature.
+ +The Venice canal is a crowded image composed of soft textures and structures with hard edges. The algorithm is able to preserve recognizable objects such as the boats and windows. Even tiny letters on the wall and pedestrians on the canal's side are visible. The recoloring process preserved the buildings' rigid structures; meanwhile, it captured shadows and the water's soft movements. In capturing such features, adopting a highly irregular oversegmentation was necessary. + +The lizard image is an example of a low contrast image with textured areas covered by dull colors. The algorithm highlighted the textures by assigning wild colors to the homogeneous regions on the leaf. At the same time, substantial edges like the lizard's body patterns and the leaf edges are preserved naturally by our algorithm. + +In the next example, we used a high contrast image as input. The algorithm assigned the darkest colors from each palette to the coat of the man and separated it from the background using a very light color. Further, the small features on the face and the Chinese characters are mostly readable. + +The rust image contains different types of texture on the wall and the grass, plus soft textureless areas on the machinery. The brick patterns on the wall, exaggerated by the colorful palettes, make the final images more interesting than the original flat image. The high-frequency details of the grass are retained. The smooth transition of colors at the top right of the image illustrates the shadows of the leaves. + +We demonstrated strong edge preservation in all examples. Additionally, the image textures are preserved, and palette colors are used uniformly to maintain good contrast. + +### 4.1 Comparison with Naïve Methods + +Figure 7 gives a comparison between our method and two naïve alternatives. The first column shows the input image.
The second column shows the input segments recolored by replacing each segment's color with the palette color closest to the segment's average color. The third column shows a random assignment of palette colors to segments. The final column shows our method. Recoloring with the closest palette colors preserves some image content, but the result shows large regions of constant color; many of the palette colors are underused, an issue that can worsen when there is a significant mismatch between the original image color distribution and the palette, as in the upper example. Random assignment provides an even distribution of palette colors, but the image content can become unrecognizable for highly textured images, as in the lower example. Our method uses the palette more effectively, showing local details and large-scale content and exercising the full range of available colors. + +### 4.2 Comparison with ColorArt + +In this section, we compare our recoloring method with ColorArt, an optimization-based recoloring method for graphic arts [3]. This method assigns colors to regions by solving a graph matching problem over color groups in the reference and the template image. In searching for a reference image, this algorithm uses the same number of color groups as in the template image. + +Figure 8 shows the images generated by the ColorArt method on the right and ours in the middle, both using the sunset palette. We created a colorful leaf surrounded by a light background, as in the input image, showing that the algorithm respects the changes in lightness. Moreover, the assignment of different colors on the leaf produced an interesting texture. The leaf image generated by the ColorArt method has reversed the image tones. In the sketch image, we preserved the edges and showed a recognizable face in the image. In contrast, the ColorArt algorithm had difficulty with the edges and the gradual gradients, resulting in a somewhat incoherent output.
+ +### 4.3 Recoloring with SLIC0 Oversegmentation + +Our recoloring algorithm does not make any assumptions about the input oversegmentation. Figure 9 shows results from an oversegmentation produced by SLIC0 [1]. The starfish and owl images have approximately 2000 and 5000 segments, respectively. + +Note that more irregular regions can better represent complex image contours and textures, allowing the recolored abstractions to better display the image content. In the starfish image, the structures and shadows are represented by distinct colors that provide contrast between the object and the background. The strong edges, such as the arms of the starfish, are preserved; however, the thin features are not captured by SLIC0's uniform regions, and the background terrain does not present any significant information. In the owl image, the small regions on the chest convey the feather textures, while definite regions such as the dark eyes kept their well-defined structures. Given a suitably detailed oversegmentation, we can produce appealing results. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_5_165_257_1469_1696_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_5_165_257_1469_1696_0.jpg) + +Figure 6: Region recoloring results. Top row: visualization of palettes used. Images, top to bottom: starfish, Venice, lizard, lanterns, and rust. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_6_152_148_718_463_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_6_152_148_718_463_0.jpg) + +Figure 7: A comparison with naïve recolorings. Left to right: the original image, the result from the naïve closest-color method, random recoloring, and ours. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_6_156_890_707_247_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_6_156_890_707_247_0.jpg) + +Figure 9: Recoloring with SLIC0 oversegmentation. + +### 4.4 Performance + +We ran our algorithm on an Intel(R) Core(TM) i7-6700 CPU at 3.4 GHz with 16.0 GB of RAM.
The processing time increases with the number of regions and edges in the graph. The time complexity of single-source widest path is $\mathcal{O}\left( {m + n\log n}\right)$ for $m$ edges and $n$ vertices, using a heap-based priority queue in a Dijkstra search. + +Table 1 shows the timing for creating the trees and color assignments of different images. For small images like the starfish, containing about 1.4K regions, the recoloring algorithm takes about 0.007 seconds to construct the tree, while it takes about 0.3 s for larger images such as rust with 7.2K regions. With a palette of 10 colors, color assignment takes 0.05 s and 1.2 s to recolor the starfish and rust images, respectively. Color assignment takes longer for larger palettes and for images with more regions. We show the timing of tree creation and color assignment for all images in the gallery. The pre-processing steps of creating the graph adjacency list and histogram matching are also presented in the table. + +### 4.5 Limitations + +Although in our experience our method works well for most combinations of image and palette, there are cases where the output is unappealing. When two similar regions are not neighbors, they may receive different colors; e.g., a sky area may be broken up by branches, and different parts of the sky could be colored differently. Even adjacent regions may not receive similar colors if their average colors differ, introducing spurious edges into regions with slowly changing colors such as gradients or smooth surfaces. Out-of-focus backgrounds and faces are common examples producing such effects. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_6_938_148_696_713_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_6_938_148_696_713_0.jpg) + +Figure 8: Comparison with ColorArt. Left: input; middle: our results; right: ColorArt results. + +Figure 10 shows two failure examples.
In the top middle image, the face and background regions received very different colors, with an unappealing outcome. However, a palette containing several similar colors allows the algorithm to better match the image gradients, with dramatically better results. In the bottom images, large regions of different colors appear in the sky, which does not look attractive. Because the original regions have different average colors, our algorithm is likely to separate them regardless of the palette. + +Our algorithm is at present strictly automated, with no provision for direct user control beyond the choice of palette and parameter settings. While these parameters provide considerable scope for generating variant recolorings, so that a user has a wide range of results to choose from, direct control is not yet implemented. One might imagine annotating the image to enforce specific color selections, or linking regions to ensure that their output colors are always the same. While it would be straightforward to add some control of this type, we have not yet implemented such features. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_6_938_1521_696_314_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_6_938_1521_696_314_0.jpg) + +Figure 10: Failure cases. + +## 5 VARIATIONS + +In this section, we explore some variations of our region recoloring algorithm. We apply a blending mechanism to create recolored images with smooth color transitions. Then, we demonstrate that different color spaces and distance metrics can generate various results from one palette. + +
| Image | Tree creation | Color assignment | Graph | Total | #Vertices |
| --- | --- | --- | --- | --- | --- |
| lanterns | 0.003s | 0.02s | 0.064s | 0.087s | 1K |
| starfish | 0.007s | 0.05s | 0.07s | 0.127s | 1.4K |
| lizard | 0.015s | 0.08s | 0.1s | 0.195s | 1.9K |
| rust | 0.3s | 1.2s | 0.9s | 2.4s | 7.2K |
| Venice | 0.3s | 1.4s | 0.9s | 2.6s | 7.7K |
+ +Table 1: Timing results for images with varying numbers of regions. + +### 5.1 Blending + +In transferring the colors, we have intended to strictly preserve the palette and not add colors. However, we can also blend the colors, giving a more painterly style and adding a sense of depth to a flattened abstraction. Postprocessing the recolored image introduces new intermediate shades of colors. We suggest cross-filtering the recolored image with an edge-preserving filter [14]. This process smooths the areas away from edges while retaining the colors close to strong edges. In addition, blending across the jagged region boundaries smooths regions which originally possessed similar colors. + +The cross-filtering mask size will affect the outcome. Larger masks will produce a stronger blending effect; small features will be smoothed out, and the output image will become blurry in regions lacking edges. Figure 11 illustrates examples of blending using masks of sizes $n = 20$ , $100$ , and $300$ . Blending with $n = 20$ only slightly modifies the image; for larger masks, the blending is more apparent. At $n = 300$ we can see a definite blurring in originally smooth areas, although blurring does not happen across original edges. Using a gray palette, we can obtain an effect resembling a charcoal drawing with larger masks. + +### 5.2 Color Spaces and Distance Metrics + +We can employ different functions for our color distance function $\Delta$ . Different choices of color space and distance metric can affect the colorization results. Changing the distance metric will cause both the widest-path tree and the color assignment to change. + +We have experimented with computing color distances with the Euclidean distance in RGB as well as with the perceptually uniform CIE94, CIEDE2000, CIE76, and CMC colorimetric distances [21, 27]. + +Both CIE94 and CIEDE2000 are defined in the Lch color space.
However, CIE94 differences in lightness, chroma, and hue are calculated from Lab coordinates. CMC is a quasimetric based on the Lch color model. The CIE76 metric uses Euclidean distance in Lab space. + +In Figure 12, we show different outcomes from different metrics using two palettes. We can observe the strong edge and contrast preservation, an apparent result of the perceptually uniform metrics. More importantly, each metric gives a unique recolorization to the abstraction, which allows a user to choose from different output images. We can get interesting results from each metric. However, in our judgement, the most attractive results are obtained from the Euclidean and CMC metrics; the delicate features and image contrast are maintained, and objects are generally preserved. The CIE94 and CIEDE2000 metrics are also effective, but we found that the CIE76 metric rarely creates interesting results. + +## 6 CONCLUSIONS AND FUTURE WORK + +In this paper, we presented a graph-based recoloring method that takes an oversegmented image and a palette as input, and then assigns colors to each region. The result uses the palette colors to portray the image content, but without attempting to match the input colors. Designing our algorithm around the widest path problem allowed us to maintain image contrast and the recognizability of objects. We demonstrated our results with different palettes. We achieved vivid recolorization effects, effective for most combinations of input images and palettes. + +In the future, we would like to investigate non-convex palette color augmentation, adding new colors that extend an input palette while matching the palette's theme. We would like to extend the color assignment to consider color harmony, scoring based on the compatibility of colors and thus affecting the ability of certain colors to be neighbors. Furthermore, we would like to be able to recolor smoothly changing regions like the sky more uniformly.
Adding elements of user control would allow for better cooperation between the present automated method and the user's intent. + +## ACKNOWLEDGMENTS + +The authors thank ... + +## REFERENCES + +[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. 2012. + +[2] E. Aharoni-Mack, Y. Shambik, and D. Lischinski. Pigment-Based Recoloring of Watercolor Paintings. In H. Winnemöller and L. Bartram, eds., Non-Photorealistic Animation and Rendering. Association for Computing Machinery, Inc (ACM), 2017. doi: 10.1145/3092919.3092926 + +[3] M. Bohra and V. Gandhi. ColorArt: Suggesting colorizations for graphic arts using optimal color-graph matching. 2020. + +[4] W. Casaca, M. Colnago, and L. G. Nonato. Interactive image colorization using Laplacian coordinates. In G. Azzopardi and N. Petkov, eds., Computer Analysis of Images and Patterns, pp. 675-686. Springer International Publishing, Cham, 2015. + +[5] H. Chang, O. Fried, Y. Liu, S. DiVerdi, and A. Finkelstein. Palette-based photo recoloring. ACM Trans. Graph., 34(4):139:1-139:11, July 2015. doi: 10.1145/2766978 + +[6] S. DiVerdi, J. Lu, J. Echevarria, and M. Shugrina. Generating Playful Palettes from Images. In C. S. Kaplan, A. Forbes, and S. DiVerdi, eds., ACM/EG Expressive Symposium. The Eurographics Association, 2019. doi: 10.2312/exp.20191078 + +[7] L. Fang, J. Wang, G. Lu, D. Zhang, and J. Fu. Hand-drawn grayscale image colorful colorization based on natural image. The Visual Computer, 35(11):1667-1681, Nov 2019. doi: 10.1007/s00371-018-1613-8 + +[8] G. R. Greenfield and D. H. House. Image recoloring induced by palette color associations. Journal of WSCG, 11:189-196, 2003. + +[9] P. Kubelka. New contributions to the optics of intensely light-scattering materials. Part I. J. Opt. Soc. Am., 38(5):448-457, May 1948. doi: 10.1364/JOSA.38.000448 + +[10] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization.
In ACM SIGGRAPH 2004 Papers, SIGGRAPH '04, pp. 689-694. ACM, New York, NY, USA, 2004. doi: 10.1145/1186562.1015780 + +[11] S. Lin, D. Ritchie, M. Fisher, and P. Hanrahan. Probabilistic color-by-numbers: Suggesting pattern colorizations using factor graphs. In ACM SIGGRAPH 2013 Papers, SIGGRAPH '13, 2013. + +[12] J. Lu, S. DiVerdi, W. A. Chen, C. Barnes, and A. Finkelstein. RealPigment: Paint compositing by example. In Proceedings of the Workshop on Non-Photorealistic Animation and Rendering, NPAR '14, pp. 21-30. ACM, New York, NY, USA, 2014. doi: 10.1145/2630397.2630401 + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_8_168_146_1463_517_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_8_168_146_1463_517_0.jpg) + +Figure 11: Postprocessing with an edge-aware filter. Left to right: original image; cross-filtered images with mask sizes 20, 100, and 300. + +![01963e9c-2eee-74af-a6f8-5b6dfbfe3668_8_179_744_1441_356_0.jpg](images/01963e9c-2eee-74af-a6f8-5b6dfbfe3668_8_179_744_1441_356_0.jpg) + +Figure 12: Colorization with different distance measures. + +[13] J. McCann and N. S. Pollard. Soft stacking. In Computer Graphics Forum, vol. 31, pp. 469-478, 2012. + +[14] D. Mould. Texture-preserving abstraction. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, NPAR '12, pp. 75-82. Eurographics Association, Goslar, Germany, 2012. + +[15] L. Neumann and A. Neumann. Color style transfer techniques using hue, lightness and saturation histogram matching. In Proceedings of the First Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging, Computational Aesthetics '05, pp. 111-122. Eurographics Association, 2005. + +[16] P. O'Donovan, A. Agarwala, and A. Hertzmann. Color Compatibility From Large Datasets. ACM Transactions on Graphics, 30(4), 2011. + +[17] M. Pollack. Letter to the editor: The maximum capacity through a network. Operations Research, 8(5):733-736, Oct. 1960. + +[18] Y. Qu, T.-T. Wong, and P.-A. Heng.
Manga colorization. ACM Trans. Graph., 25(3):1214-1220, July 2006. + +[19] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley. Color transfer between images. IEEE Comput. Graph. Appl., 21(5):34-41, Sept. 2001. + +[20] D. L. Ruderman, T. W. Cronin, and C.-C. Chiao. Statistics of cone responses to natural images: Implications for visual coding. Journal of the Optical Society of America A, 15:2036-2045, 1998. + +[21] A. T. Sanda Mahama, A. Dossa, and P. Gouton. Choice of distance metrics for RGB color image analysis. Electronic Imaging, 2016:1-4, 2016. + +[22] L. Shapira, A. Shamir, and D. Cohen-Or. Image appearance exploration by model-based navigation. Comput. Graph. Forum, 28:629-638, 2009. + +[23] M. Shugrina, J. Lu, and S. DiVerdi. Playful Palette: An interactive parametric color mixer for artists. ACM Trans. Graph., 36(4):61:1-61:10, July 2017. doi: 10.1145/3072959.3073690 + +[24] D. Sýkora, M. Ben-Chen, M. Cadík, B. Whited, and M. Simmons. TexToons: Practical texture mapping for hand-drawn cartoon animations. In NPAR, 2011. + +[25] D. Sýkora, J. Dingliana, and S. Collins. LazyBrush: Flexible painting tool for hand-drawn cartoons. Computer Graphics Forum, 28(2):599-608, 2009. + +[26] J. Tan, J.-M. Lien, and Y. Gingold. Decomposing images into layers via RGB-space geometry. ACM Trans. Graph., 36(1), Nov. 2016. doi: 10.1145/2988229 + +[27] Wikipedia contributors. Color difference - Wikipedia, the free encyclopedia, 2019. [Online; accessed 17-September-2019]. + +[28] J. Xu and C. S. Kaplan. Artistic thresholding. In Proceedings of the 6th International Symposium on Non-photorealistic Animation and Rendering, NPAR '08, pp. 39-47. ACM, New York, NY, USA, 2008. doi: 10.1145/1377980.1377990 + +[29] L. Yatziv and G. Sapiro. Fast image and video colorization using chrominance blending. Trans. Img. Proc., 15(5):1120-1129, May 2006. doi: 10.1109/TIP.2005.864231 + +[30] K. Zeng, M. Zhao, C. Xiong, and S.-C. Zhu. From image parsing to painterly rendering.
ACM Trans. Graph., 29(1):2:1-2:11, Dec. 2009. doi: 10.1145/1640443.1640445 + +[31] M. Zhao and S.-C. Zhu. Sisley the abstract painter. In Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, NPAR '10, pp. 99-107. ACM, New York, NY, USA, 2010. doi: 10.1145/1809939.1809951 + +§ ARTISTIC RECOLORING OF IMAGE OVERSEGMENTATIONS + +Figure 1: Recolorization of region-based abstraction. The figure shows the original image on the left and the recolored images produced by our proposed method using three different palettes. + +§ ABSTRACT + +We propose a method to assign vivid colors to regions of an oversegmented image. We restrict the output colors to those found in an input palette, and seek to preserve the recognizability of structure in the image. Our strategy is to match the color distances between the colors of adjacent regions with the color differences between the assigned palette colors; thus, assigned colors may be very far from the original colors, but both large local differences (edges) and small ones (uniform areas) are maintained.
We use the widest path algorithm on a graph-based structure to set a priority order for recoloring regions, and traverse the resulting tree to assign colors. Our method produces vivid colorizations of region-based abstractions using arbitrary palettes. We demonstrate a set of stylizations that can be generated by our algorithm. + +Keywords: Non-photorealistic rendering, Image stylization, Recoloring, Abstraction. + +Index Terms: I.3.3 [Picture/Image Generation]; I.4.6 [Segmentation] + +§ 1 INTRODUCTION + +Color plays an important role in image aesthetics. In representational art, artists employ colors that match the perceived colors of objects in the depicted scene. Conversely, abstraction provides more freedom to manipulate colors. Figure 2 shows a modern vector illustration of an owl, an example of Fauvism by André Derain, and an abstraction of the Eiffel Tower by Robert Delaunay. The artists have expressed the image content through vivid colors carefully assigned to various image regions. We aim for a method that can generate colorful images, recoloring an image using an arbitrary input palette. In this paper, we describe a method for recoloring an image based on a subdivision of the image into distinct segments, recoloring each segment based on its relationship with its neighbours. + +Our goal is to present different recoloring possibilities by assigning colors to each region of an oversegmented input image. We aim to maintain the image contrast and preserve strong edges so that the content of the scene remains recognizable. It is important to convey textures and small features, often too delicate to be preserved by existing abstraction methods. We would like to be able to create wild and vivid abstractions through the use of unusual palettes.
+ +Manual recoloring of oversegmented images would be tedious; the images contain hundreds or thousands of segments, and clicking on every segment would take a long time, even leaving aside the cognitive and interaction overhead of making selections from the palette. We provide an automatic assignment of colors to regions. The assignment can be used as is, could be used in a fast manual assessment loop (for example, if an artist wanted to choose a suitable palette for coloring a scene), or could be a good starting point for a semi-automatic approach where a user makes minor modifications to the automated results. + +In this paper, we present an automatic recoloring approach for a region-based abstraction. The input is a desired palette and an oversegmented image. The method assigns a color from the palette to each region; it is based on the widest path algorithm [17], which organizes the regions into a tree based on the weight of the edges connecting them. We use color differences between adjacent regions both to order the regions and to select colors, trying to match the difference magnitudes between the assigned palette colors and the original region colors. The use of color differences allows structures in the image to remain recognizable despite breaking the link between the original and depicted color. + +Our main contributions are as follows: + + * We designed a recoloring method for an oversegmented image, creating multiple abstractions colored with just one palette. Our method creates wild and high-contrast images. + + * Various styles can be created by our method. We experiment with color blending between regions and produce smooth images. By reducing the palette colors, we simplify the recolored images. Moreover, we generate new colorings from a palette by applying different metrics and color spaces. + +The remainder of the paper is organized as follows. In Section 2, we briefly present related work. We describe our algorithm in Section 3.
Section 4 shows results and provides some evaluation, and Section 5 gives some possible variations of the method. Finally, we conclude in Section 6 and suggest directions for future work. + +Figure 2: Colorful representational images. A modern vector illustration of an owl; a Fauvist painting by André Derain; an abstraction of the Eiffel Tower by Robert Delaunay. + +§ 2 PREVIOUS WORK + +Although there is an existing body of work on recoloring photographs [5, 8, 10, 12, 15, 19, 22, 26, 29] and researchers have investigated colorization of non-photorealistic renderings [2, 7, 18, 23, 25, 30, 31], there is room for further exploration. Existing recoloring methods rarely address region-based abstractions. The closest approach, by Xu and Kaplan [28], used optimization to assign black or white to each region of an input image. On the other hand, a few approaches address the pattern colorization problem, aiming to colorize graphic patterns [3, 11]. Bohra and Gandhi [3] posed the colorization problem as an optimal graph matching problem over color groups in the reference and a grayscale template image. In generating playful palettes [6], the approximation results converge for a number of blobs beyond six or seven. Similarly, the ColorArt method of Bohra and Gandhi [3] enforces an equal number of colors in the reference palette and the template image, and cannot find matches otherwise. + +Below, we review some of the previous research on recoloring methods in NPR and color palette selection. These approaches can be broadly classified into example-based recoloring and palette-based recoloring. + +§ 2.1 EXAMPLE-BASED RECOLORING (COLOR TRANSFER) + +Recoloring methods were first proposed by Reinhard et al. [19], where the colors of one image were transferred to another.
They converted the RGB signals to Ruderman et al.'s [20] perception-based color space $L\alpha\beta$. To produce colors, they shifted and scaled the $L\alpha\beta$ space using simple statistics. + +For color transfer and color style transfer, Neumann and Neumann [15] extracted palette colors from an arbitrary target image and applied 3D histogram matching. They attempted to keep the original hues after the style transformation: all colors having the same hue in the original image would have the same hue after the matching step. They performed matching transformations on 3D cumulative distribution functions belonging to the original and style images. However, due to the lack of spatial information, the histogram by itself is not enough for true style cloning, especially given problems caused by unpredictable noise and gradient effects. They suggest using image segmentation and three-dimensional smoothing of the color histogram to improve the result. + +Levin et al. [10] introduced an interactive colorization method for black and white images. They used a quadratic cost function derived from the color differences between a pixel and its weighted average neighborhood colors. The user indicates the desired color by scribbling in the interior of a region, instead of tracing out its precise boundary. The colors then propagate automatically to the remaining pixels in the image sequence. However, it sometimes fails at strong edges, due to a sensitive scale parameter. Inspired by Levin et al. [10], Yatziv and Sapiro [29] proposed a fast colorization of images based on the concepts of luminance-weighted chrominance blending and fast intrinsic geodesic distance computations. Sýkora et al. [25] developed a tool called LazyBrush for the colorization of cartoons, which integrates textures into the images to create 3D-like effects [24], while Casaca et al.
[4] used Laplacian coordinates for image division and a color theme for fast colorization. Fang et al. [7] proposed an interactive optimization method for colorization of hand-drawn grayscale images. To maintain smooth color transitions and control color overflow in textured areas, they used a smooth feature map to adjust the feature vectors. + +A few works in NPR composite the colors for recoloring. The compositing can be applied using alpha blending [13] or the Kubelka-Munk (KM) equation [9]. For a paint-like effect, NPR researchers have mostly worked with the physics-based Kubelka-Munk equation to predict the reflectance of a layer of pigment. Some researchers have tried to mimic the actual artist's decisions in mixing colors to generate palettes. For example, a data-driven color compositing framework by Lu et al. [12] derived three models based on optimized alpha blending, RBF interpolation, and KM optimization to improve the prediction of composited colors. The KM pigment-based model was later used for recoloring styles such as watercolor painting [2]. Compositing with the KM model leaves traces of overlapping stroke layers, which can produce near-natural painting effects. + +§ 2.2 PALETTE-BASED RECOLORING + +Interest in colorization and recoloring methods opened up new research ideas on color palettes, beyond simple approaches like averaging the colors of the active regions [8]. In many recoloring methods, a user scribbles each region of the image with a color, and the algorithm does the rest of the work. Early methods to select color palettes used Gaussian mixture models or k-means to cluster the image pixels. Chang et al. [5] introduced a photo-recoloring method using user-modified palettes. To select the color palettes, they computed k-means clustering on the image colors, then discarded the black entry. Tan et al. [26] proposed a technique to decompose an image into layers to extract the palette colors.
Each layer of the decomposition represents a coat of paint of a single color applied with varying opacity throughout the image. To determine a color palette capable of reproducing the image, they analyzed the image geometrically in RGB space using a simplified convex hull. + +More recent methods for generating palettes have more flexibility for editing, such as Playful Palette [23], a set of blobs of color that blend together to create gradients and gamuts. The editable palette keeps the history of previous palettes. DiVerdi et al. [6] also proposed an approximation of image colors based on the Playful Palette. In this technique, within an optimization framework, an objective function minimizes the distance between the original image and the one recolored with palette colors, based on a self-organizing map. The approximation algorithm is an order of magnitude faster than Playful Palette [23], while the quality is lower due to small amounts of shrinkage caused by both the self-organizing map and the clustering step. + +There are a limited number of works in the literature that assign colors to regions. For example, Qu et al. [18] proposed a colorization technique for black-and-white manga using the Gabor wavelet filter. A user scribbles on the drawing to connect the regions; the algorithm then assigns colors to different hatching patterns, halftoning, and screening. Vector-art algorithms, which recolor distinct regions, are more comparable to our recoloring method. Xu and Kaplan introduced artistic thresholding [28], where a graph data structure is built over the segmentation of a source image. They employed an energy function to measure the quality of different black-and-white colorings of the segments. However, their method fails to show high-level features that cross through the foreground objects. Lin et al. [11] proposed a palette-based recoloring method with a probabilistic model.
They learn and predict the distribution of properties such as saturation, lightness, and contrast for individual regions and neighboring regions. They then score pattern colorings using the predicted distributions and the color compatibility model of O'Donovan et al. [16]. Bohra and Gandhi [3] proposed an exemplar-based colorization algorithm for grayscale graphic arts from a reference image, based on a color graph and composition matching method. They retrieve palettes using the spatial features of the input image. They aim to preserve the artist's intent in the composition of different colors and the spatial adjacency between these colors in the image. + +§ 3 RECOLORING ALGORITHM + +We present a recoloring algorithm that automatically assigns colors to regions of an oversegmented image. Our system takes as input a set of regions and a palette containing a set of colors, and assigns a color to each region. The recolored image should convey recognizable objects in the image. Edges are essential to the visibility of the structures. Neighboring regions will be assigned distinct colors to express an edge, and regions of similar colors will be assigned similar colors. The human visual system is sensitive to brightness contrast; to help preserve contrast in our recoloring, we take into account the regions' relative luminance changes when selecting region colors. + +We take the strategy of emphasizing color differences between regions, seeking to match the original color distance value between adjacent regions without preserving the colors themselves. Neighbouring regions with large differences will be assigned distant colors, preserving the boundary, while similar-colored regions will be assigned similar output colors or even the same color. + +We use a graph structure to organize the segmented image, where each region is a node and edges link adjacent regions.
To simplify the color decisions, we construct a tree over the graph, with a tree traversal assigning colors to nodes based on the decision for the parent node. We therefore assign weights to the edges reflecting their priority order, with large regions, regions of very similar color, and regions of very different color receiving high priority. Small regions and regions with intermediate color differences receive lower priority. Once weights have been assigned, we find the tree within the graph that maximizes the weight of the minimum-weight edges, a construction that corresponds to the widest path problem. + +Our algorithm has two main steps. First, we create a tree by applying the widest path algorithm [17] on the region graph. Then we assign colors to regions by traversing the tree beginning from its root. For each region in the graph, we choose the color from the palette that best matches the color difference with its parent. Before starting, we apply histogram matching between the regions' color differences and the palette color differences. The histogram matching allows us to best convey the image content while using the full extent of an arbitrary input palette, even one that has a color distribution very different from that of the input image. + +We show a flowchart of our recoloring approach in Figure 3. + +The input is (a) an oversegmented image and (b) a palette to use in the recoloring. We compute (c) the adjacency graph of the segments and aggregate the differences between adjacent colors into the set $\{Q\}$. We also compute (e) the set of differences between palette colors, yielding $\{\Delta P\}$. The widest path algorithm (d) gives us a tree linking all nodes of the adjacency graph. We match the histogram (f) of $\{Q\}$ to that of $\{\Delta P\}$ and (g) assign colors to all regions by traversing the widest-path tree, resulting in (h) the fully recolored image. + +Before we explain the recoloring proper, we introduce the widest path problem.
We will employ the widest path algorithm to create a tree over the input oversegmentation. The tree structure organizes the regions so that those connected by the largest edge weights are processed earlier, preserving edges and contrast in the recolored image. We will traverse the tree and assign a color to each region, matching the edge's target color difference with the difference from its parent's assigned color. In practice, it is convenient to combine the tree creation and traversal, since the widest-path algorithm involves a best-first traversal of the tree as it is being built. Prior to color assignment, we apply histogram matching to align the regions' color differences with the palette's for better use of palette colors. + +§ 3.1 TREE CREATION: THE WIDEST PATH PROBLEM + +Pollack [17] introduced the widest path problem. Consider a weighted graph consisting of nodes and edges $G = (V, E)$, where an edge $(u, v) \in E$ connects node $u$ to $v$. Let $w(u, v)$ be the weight, called capacity, of edge $(u, v) \in E$; capacity represents the maximum flow that can pass from $u$ to $v$ through that edge. The minimum weight among traversed edges defines the capacity of a path. Formally, the capacity $C(u, v)$ of a path between nodes $u$ and $v$ is given by the following: + +$$ +C(u, v) = \min\left( w(u, a), w(a, b), \ldots, w(d, v) \right) \tag{1} +$$ + +where $w(u, a), w(a, b), \ldots, w(d, v)$ are the edge weights along the path. The widest path between $u$ and $v$ is the path with the maximum capacity among all possible paths. + +In a single-source widest path problem, we calculate for each node $t \in V$ a value $B(t)$, the maximum path capacity among all the paths from source $s$ to $t$. The value $B(t)$ is the width of the node.
The union of widest paths from the source to each node is a tree, which we use to order the color assignment process. We can choose any node as the source; our implementation uses the region containing the image centre. + +The widest path algorithm can be implemented as a variant of Dijkstra's algorithm, building a tree outward from the source node $s$ to every node in the graph. All nodes of the graph $t \in V$ are given a tentative width value; the source node $s$ is assigned $B(s) = +\infty$ and all other nodes $v \neq s$ have $B(v) = -\infty$. A priority queue holds the nodes; at each step of the algorithm, we take the node with the highest current width from the queue and process it, stopping when the queue is empty. Suppose the node $u$ is on top of the queue with a width $B(u)$; for every outgoing edge $(u, v)$, we update the value of the neighbour node $v$ as follows: + +$$ +B(v) \leftarrow \max\{ B(v), \min\{ B(u), w(u, v) \} \} \tag{2} +$$ + +where $w$ is the edge weight between nodes $u$ and $v$. If the value $B(v)$ was changed, node $u$ is set as the parent node and $v$ is pushed into the queue. When the algorithm completes, all non-root nodes in the graph will have been assigned a single parent, thus providing a tree rooted at $s$. + +In our application, one possibility for the edge weight is to use the difference in color values between the two regions. This would ensure that the widest path tree linked dissimilar regions, resulting in good edge preservation. However, regions of similar color could then easily be divided. We want to preserve small color distances as well, so small color differences should also yield a large edge weight. Distances intermediate between large and small are of the least importance. Hence, we base our edge weight on the difference from the median color distance, as follows.
+ +We calculate the color distances across each edge in the adjacency graph; call the set of color distances $Q$, with + +$$ +Q = \{ \Delta(c_i, c_j) \} = \{ q_{ij} \}, \quad i \neq j, \quad r_i, r_j \in R +$$ + +where $c_i$ and $c_j$ are the colors of adjacent regions $r_i$ and $r_j$ and $\Delta$ is the function computing the color distance. We compute the median value $\bar{q}$ from the distances in $Q$. + +We also want to take into account the size of the region, such that larger regions have greater importance; we prefer that a larger region have higher priority and thus influence the smaller regions that are processed afterwards, rather than the converse. Depending on the oversegmentation, action may not be necessary; we suggest a process to improve results on oversegmentations with dramatic variation in region size. + +We compute for each region a factor $b$, the ratio of the region's size (in pixels) to the average region size. Then, when we traverse an edge, we use the $b$ of the destination region to determine the weight. In our implementation, we compute and store a single edge weight; there is no ambiguity about the factor $b$ because we only ever traverse a given edge in one direction, moving outward from the source node. + +Figure 3: Recoloring algorithm pipeline. + +To summarize: when traversing an edge, the edge weight is the distance between its color difference and the median color difference, multiplied by the factor $(1 + b)$ for the destination region: + +$$ +w(r_i, r_j) = (1 + b) \left| \Delta(c_i, c_j) - \bar{q} \right| . \tag{3} +$$ + +The factor $(1 + b)$ takes size into account, but ensures the region's color differences can still affect the traversal order even for very small regions ($b$ near zero). Note that the function $\Delta$ depends on the colorspace used.
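The edge weighting of Eq. 3 and the Dijkstra-style update of Eq. 2 can be combined into a short sketch. The code below is illustrative only (the paper publishes no implementation): `widest_path_tree`, its arguments, and its data layout are hypothetical names of our own, and we assume the per-edge color distances $\Delta(c_i, c_j)$ and per-region size factors $b$ have already been computed.

```python
import heapq

def widest_path_tree(regions, edges, source):
    """Single-source widest-path tree over a region adjacency graph.

    regions: {region id: {"b": size factor}}  (b = region size / mean size)
    edges:   {frozenset({i, j}): raw color distance Delta(c_i, c_j)}
    Returns (parent, width): parent[v] is v's parent in the tree.
    """
    # Median color distance over all adjacent pairs (q-bar in Eq. 3).
    deltas = sorted(edges.values())
    median = deltas[len(deltas) // 2]

    def weight(u, v):
        # Eq. 3: the destination region v supplies the size factor b.
        d = edges[frozenset({u, v})]
        return (1 + regions[v]["b"]) * abs(d - median)

    adj = {i: [] for i in regions}
    for e in edges:
        u, v = tuple(e)
        adj[u].append(v)
        adj[v].append(u)

    width = {i: float("-inf") for i in regions}
    width[source] = float("inf")        # B(s) = +infinity
    parent = {}
    heap = [(-width[source], source)]   # max-heap via negated widths
    while heap:
        neg_w, u = heapq.heappop(heap)
        if -neg_w < width[u]:           # stale queue entry; skip it
            continue
        for v in adj[u]:
            # Eq. 2: B(v) <- max(B(v), min(B(u), w(u, v)))
            cand = min(width[u], weight(u, v))
            if cand > width[v]:
                width[v] = cand
                parent[v] = u
                heapq.heappush(heap, (-cand, v))
    return parent, width
```

Pushing a fresh heap entry on every improvement (and skipping stale entries on pop) is the standard lazy-deletion idiom for Dijkstra variants with Python's `heapq`, which has no decrease-key operation.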
A simple possibility is Euclidean distance in RGB, but more perceptually based color distances are possible. We discuss color distance metrics in Section 5.2. + +§ 3.2 HISTOGRAM MATCHING + +We plan to match color differences in the output to the color differences in the input. However, the input palette can have an arbitrary set of colors, and we also want to make use of the full palette. For example, imagine a low-contrast image recolored with a palette of more varied colors. The smallest palette difference might be quite large; if so, the muted areas of the original will be matched with difference zero, resulting in loss of detail in such regions. A narrow palette recoloring a high-contrast image will have similar problems in the opposite direction. + +To adapt the palette usage to the input image's color distribution, we apply histogram matching to the color differences. We emphasize that we are not matching the colors themselves, but the distributions of differences. Histogram matching is applied between the region color differences (the distribution of values in $Q$, computed in Section 3.1) and the pairwise color differences of the colors in the palette (call this dataset $\Delta P$). + +The histogram matching computes a new target color difference for each graph edge; call this target $q'(u, v)$ for the edge linking regions $u$ and $v$. The matching ensures that the distribution of values $\{ q' \}$ is the same as the distribution of values in $\{ \Delta P \}$. The values $q'$ are then used for color assignment, selecting color pairs from the palette which correspond to the same place in the distribution: medium palette differences where medium image color differences existed, small differences where the original image color differences were small, with the largest palette differences reserved for the largest differences in the original image.
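One minimal way to realize this matching is a quantile (rank) mapping between the sorted sets $Q$ and $\Delta P$. This is a sketch under our own assumptions — the text does not pin down the exact histogram-matching procedure, and `match_differences` and its argument names are hypothetical:

```python
def match_differences(q_values, palette_diffs):
    """Quantile-match the raw region color differences {Q} to the
    pairwise palette differences {Delta P}.

    Returns the target difference q' for each input edge, in the original
    edge order, so that the q' values follow the palette's difference
    distribution.
    """
    dst = sorted(palette_diffs)
    m, n = len(dst), len(q_values)
    # Rank each edge difference within Q; re-read the palette-difference
    # distribution at the same relative position (quantile).
    order = sorted(range(n), key=lambda k: q_values[k])
    matched = [0.0] * n
    for rank, k in enumerate(order):
        j = min(m - 1, round(rank / max(n - 1, 1) * (m - 1)))
        matched[k] = dst[j]
    return matched
```

Because the mapping is monotone in rank, the smallest image differences receive the smallest palette differences and the largest receive the largest, exactly the calibration described above.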
+ +Figure 4 shows the input image with average region colors and the recolored images before and after histogram matching. The bar graphs under each image show the proportion of each color in the image. We can see that the histogram matching used the palette colors more evenly, increasing contrast and highlighting more details. For example, strong edges on the leaf boundary became distinguishable from the nearby regions, and the markings on the lizard became more prominent. + +Figure 4: Histogram matching result. Left to right: original image, result without histogram matching, and result using histogram matching. + +§ 3.3 COLOR ASSIGNMENT + +The widest path algorithm provided a tree, and the histogram matching provided palette-customized target distances for the edges. We now traverse the tree and assign a color to each region along the way. We begin by assigning the closest palette color to the tree's root node; recall that the root was the most central region in the image. At each subsequent step, we assign a color from the palette $P$ to the current region $\alpha$ based on the palette color $p_\beta$ previously assigned to the parent region $\beta$ and the target color difference $q'(\alpha, \beta)$. We also consider the luminance difference between regions $r_\alpha$ and $r_\beta$ so as to help maintain larger-scale intensity gradients. Recall our intention in preserving color differences: two regions with large color differences should be assigned two very different colors, and regions with a small color difference should get very similar colors, possibly the same color. Owing to histogram matching, "large" and "small" are calibrated to the content of the particular image and palette being combined. + +We impose a luminance constraint on potential palette colors, in an effort to respect the relative ordering of the regions' luminances.
Suppose the luminances of two regions ${r}_{\alpha }$ and ${r}_{\beta }$ are ${L}_{\alpha }$ and ${L}_{\beta }$ , where ${L}_{\alpha } < {L}_{\beta }$ . We then constrain the set of eligible palette colors for region $\alpha$ such that only colors ${p}_{\alpha }$ that satisfy ${L}_{{p}_{\alpha }} < {L}_{{p}_{\beta }}$ are considered. A similar constraint is imposed if ${L}_{\alpha } > {L}_{\beta }$ . + +For region ${r}_{\alpha }$ and its parent ${r}_{\beta }$ , we have the target edge difference ${q}^{\prime }\left( {\alpha ,\beta }\right)$ . Denote by ${p}_{\beta }$ the palette color already assigned to the parent region ${r}_{\beta }$ . We choose the palette color ${p}_{\alpha }$ for region ${r}_{\alpha }$ so as to minimize the distance $D$ : + +$$ +D = \left| {{q}^{\prime }\left( {\alpha ,\beta }\right) - \Delta \left( {{p}_{\alpha },{p}_{\beta }}\right) }\right| \tag{4} +$$ + +where $\Delta$ is the distance metric between two colors. The only colors considered for ${p}_{\alpha }$ are those that satisfy the luminance constraint. + +When the parent region has been assigned the lowest- or highest-luminance color in the palette, there may be no palette colors satisfying the luminance constraint. In such cases, the constraint is ignored and all palette colors are considered. + +Since the source region has no parent, the above process cannot be used to find its color. Instead, we assign the closest color from the palette, as determined by the difference metric $\Delta$ . The source region itself is the region containing the centre of the image; while the output is weakly dependent on the choice of starting region, we do not view the starting region as a critical decision. Figure 5 shows some examples of varied outcomes from moving the starting region. + +Figure 5: Changing the source region locations. The starting region is indicated by a dot. + +§ 4 RESULTS AND DISCUSSION + +In this section, we present some results.
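Before moving to results, the per-region color choice of Section 3.3 can be sketched in Python. This is a minimal illustration of Eq. 4 combined with the luminance constraint, not the paper's implementation; the function names, the dictionary-based luminance lookup, and the Euclidean choice for $\Delta$ are assumptions made for the example:

```python
import math

def euclid(a, b):
    """Euclidean RGB distance, one possible choice for the metric Delta."""
    return math.dist(a, b)

def assign_color(palette, lum, parent_color, q_target,
                 lum_parent_region, lum_region, dist):
    """Pick the palette color p minimizing |q' - Delta(p, parent_color)|,
    restricted to colors respecting the two regions' luminance ordering.

    palette:      list of color tuples
    lum:          dict mapping palette color -> luminance
    parent_color: palette color already assigned to the parent region
    q_target:     target difference q'(alpha, beta) from histogram matching
    dist:         the color distance metric Delta
    """
    if lum_region < lum_parent_region:
        candidates = [p for p in palette if lum[p] < lum[parent_color]]
    elif lum_region > lum_parent_region:
        candidates = [p for p in palette if lum[p] > lum[parent_color]]
    else:
        candidates = list(palette)
    if not candidates:  # parent holds an extreme palette color: relax constraint
        candidates = list(palette)
    return min(candidates, key=lambda p: abs(q_target - dist(p, parent_color)))
```

Traversing the tree parent-first and calling `assign_color` once per region reproduces the assignment order described above.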
Figure 6 shows a variety of recolored images generated by our algorithm using various palettes. We succeeded in maintaining strong edges, and objects in the recolored abstractions remain recognizable. Our algorithm retains textures and produces vivid recolored images by selecting varied colors from the palette. It assigns the same colors over flat regions and distinct colors to illustrate structures. We ran our algorithm on a variety of images with different textures and contrasts. We obtained most of our palettes from the website COLRD (http://colrd.com/); others we created manually by sampling from colorful images. + +In Figure 6, we present a set of examples from our recoloring algorithm, which were generated with four different palettes. From left to right, the columns show the original image, followed by recolored images using different palettes. We chose images presenting different features. The delicate features and textures in the abstractions stay visible after recoloring despite the input photographs having been radically altered by the recoloring. + +In the starfish image, the structure and the patterns on the arms become visible. The uniform colors on the background turn into a vivid splash of colors, emphasizing the texture of the terrain. The algorithm has chosen the darkest colors to assign to the shadows and the lightest ones to the surface of the creature. + +The Venice canal is a crowded image composed of soft textures and structures with hard edges. The algorithm is able to preserve recognizable objects such as the boats and windows. Even tiny letters on the wall and pedestrians on the canal's side are visible. The recoloring process preserved the buildings' rigid structures; meanwhile, it captured shadows and the water's soft movements. In capturing such features, adopting a highly irregular oversegmentation was necessary. + +The lizard image is an example of a low-contrast image with textured areas covered by dull colors.
The algorithm highlighted the textures by assigning wild colors to the homogeneous regions on the leaf. At the same time, the substantial edges like the lizard's body patterns and the leaf edges are preserved naturally by our algorithm. + +In the next example, we used a high-contrast image as input. The algorithm assigned the darkest colors from each palette to the coat of the man and separated it from the background using a very light color. Further, the small features on the face and the Chinese characters are mostly readable. + +The rust image contains different types of texture on the wall and the grass, plus soft textureless areas on the machinery. The brick patterns on the wall, exaggerated through colorful palettes, made the final images more interesting than the original flat image. The high-frequency details of the grass are retained. The smooth transition of colors on the top right of the image illustrated the shadows of the leaves. + +We demonstrated strong edge preservation in all examples. Additionally, the image textures are preserved, and palette colors are used uniformly to maintain good contrast. + +§ 4.1 COMPARISON WITH NAÏVE METHODS + +Figure 7 gives a comparison between our method and two naïve alternatives. The first column shows the input image. The second column shows the input segments recolored by replacing the segment's color with the palette color closest to the segment's average color. The third column shows a random assignment of palette colors to segments. The final column shows our method. Recoloring with closest palette colors preserves some image content, but the result shows large regions of constant color; many of the palette colors are underused, an issue that can worsen when there is a significant mismatch between the original image color distribution and the palette, as in the upper example.
Random assignment provides an even distribution of palette colors, but the image content can become unrecognizable for highly textured images, as in the lower example. Our method uses the palette more effectively, showing local details and large-scale content and exercising the full range of available colors. + +§ 4.2 COMPARISON WITH COLORART + +In this section, we compare our recoloring method with ColorArt, an optimization-based recoloring method for graphic arts [3]. This method assigns colors to regions by solving a graph matching problem over color groups in the reference and the template image. In searching for a reference image, this algorithm uses the same number of color groups as in the template image. + +Figure 8 shows the images generated by the ColorArt method on the right and ours in the middle, both using the sunset palette. Our result shows a colorful leaf surrounded by a light background, as in the input image, showing that the algorithm respects the changes in lightness. Moreover, the assignment of different colors on the leaf presents an interesting texture. The leaf image generated by the ColorArt method has reversed the image tones. In the sketch image, we preserved the edges and showed a recognizable face in the image. In contrast, the ColorArt algorithm had difficulty with the edges and the smooth gradients, resulting in a somewhat incoherent output. + +§ 4.3 RECOLORING WITH SLIC0 OVERSEGMENTATION + +Our recoloring algorithm does not make any assumptions about the input oversegmentation. Figure 9 shows results from an oversegmentation produced by SLIC0 [1]. The starfish and owl images have approximately 2000 and 5000 segments, respectively. + +Note that more irregular regions can better represent complex image contours and textures, allowing the recolored abstractions to better display the image content. In the starfish image, the structures and shadows are represented by distinct colors that contrast between the object and the background.
The strong edges, such as the arms of the starfish, are preserved; however, the thin features are not captured by SLIC0's uniform regions, and the background terrain does not present any significant information. In the owl image, the small regions on the chest convey the feather textures, while definite regions such as dark eyes kept their well-defined structures. Given a suitably detailed oversegmentation, we can produce appealing results. + +Figure 6: Region recoloring results. Top row: visualization of palettes used. Images, top to bottom: starfish, Venice, lizard, lanterns, and rust. + +Figure 7: A comparison with naïve recolorings. Left to right: the original image, the results from the naïve method, random recoloring, and ours. + +Figure 9: Recoloring with SLIC0 oversegmentation. + +§ 4.4 PERFORMANCE + +We ran our algorithm on an Intel(R) Core(TM) i7-6700 CPU at 3.4 GHz with 16.0 GB of RAM. The processing time increases with the number of regions and edges in the graph. The time complexity of single-source widest path is $\mathcal{O}\left( {m + n\log n}\right)$ for $m$ edges and $n$ vertices, using a heap-based priority queue in a Dijkstra search. + +Table 1 shows the timing for creating the trees and color assignments of different images. For small images like the starfish, containing about 1.4K regions, the recoloring algorithm takes about 0.007 seconds to construct the tree, while it takes about 0.3 s for larger images such as rust with 7.2K regions. With a palette of 10 colors, the color assignment takes 0.05 s and 1.2 s to recolor the starfish and rust images, respectively. The color assignment will take longer for larger palettes and images with a larger number of regions. We show the timing of tree creation and color assignments for all images in the gallery.
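The single-source widest-path construction whose cost is quoted above can be sketched as a Dijkstra variant that maximizes the bottleneck (minimum) edge weight along each path. With Python's binary-heap `heapq` the bound is $O(m \log n)$ rather than the Fibonacci-heap $O(m + n \log n)$; the adjacency-dictionary format is an assumption for illustration:

```python
import heapq

def widest_path_tree(adj, source):
    """Single-source widest-path tree: for every vertex, maximize the
    minimum edge weight along the path from `source`.

    adj: {u: [(v, w), ...]} undirected adjacency lists.
    Returns (width, parent), where `parent` encodes the tree.
    """
    width = {source: float("inf")}
    parent = {source: None}
    heap = [(-float("inf"), source)]          # max-heap via negation
    done = set()
    while heap:
        neg_w, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, wt in adj.get(u, ()):
            cand = min(-neg_w, wt)            # bottleneck of path through u
            if cand > width.get(v, -float("inf")):
                width[v] = cand
                parent[v] = u
                heapq.heappush(heap, (-cand, v))
    return width, parent
```

For the region graph, edge weights would be the region color differences (or a transformation of them, depending on the exact formulation), and the returned parent pointers form the tree traversed during color assignment.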
The pre-processing steps of creating the graph adjacency list and histogram matching are also presented in the table. + +§ 4.5 LIMITATIONS + +Although in our experience our method works well for most combinations of image and palette, there are cases where the output is unappealing. When two similar regions are not neighbours, they may receive different colors; e.g., a sky area may be broken up by branches and different parts of the sky could be colored differently. Even adjacent regions may not receive similar colors if their average colors differ, introducing spurious edges into regions with slowly changing colors such as gradients or smooth surfaces. Out-of-focus backgrounds and faces are common examples producing such effects. + +Figure 8: Comparison with ColorArt. Left: input; middle: our results; right: ColorArt results. + +Figure 10 shows two failure examples. In the top middle image, the face and background regions received very different colors, with an unappealing outcome. However, a palette containing several similar colors allows the algorithm to better match the image gradients, with dramatically better results. In the bottom images, large regions of different colors appear in the sky, which does not look attractive. Because the original regions have different average colors, our algorithm is likely to separate them regardless of the palette. + +Our algorithm is at present strictly automated with no provision for direct user control beyond choice of palette and parameter settings. While these parameters provide considerable scope for generating variant recolorings, so that a user would have a wide range of results to choose from, direct control is not yet implemented. One might imagine annotating the image to enforce specific color selections or linking regions to ensure that their output colors are always the same. While it would be straightforward to add some control of this type, we have not yet implemented such features.
+ +Figure 10: Failure cases. + +§ 5 VARIATIONS + +In this section, we explore some variations of our region recoloring algorithm. We apply a blending mechanism to create recolored images with smooth color transitions. Then, we demonstrate that different color spaces and distance metrics can generate various results from one palette. + +| Image | Tree creation | Color assignment | Graph creation | Total | #Vertices | +| --- | --- | --- | --- | --- | --- | +| lanterns | 0.003s | 0.02s | 0.064s | 0.087s | 1K | +| starfish | 0.007s | 0.05s | 0.07s | 0.127s | 1.4K | +| lizard | 0.015s | 0.08s | 0.1s | 0.195s | 1.9K | +| rust | 0.3s | 1.2s | 0.9s | 2.4s | 7.2K | +| Venice | 0.3s | 1.4s | 0.9s | 2.6s | 7.7K | + +Table 1: Timing results for images with varying numbers of regions. + +§ 5.1 BLENDING + +In transferring the colors, we have intended to strictly preserve the palette and not add colors. However, we can also blend the colors, giving a more painterly style and adding a sense of depth to a flattened abstraction. Postprocessing the recolored image introduces new intermediate shades of colors. We suggest cross-filtering the recolored image with an edge-preserving filter [14]. This process smooths the areas away from edges while retaining the colors close to strong edges. In addition, blending across the jagged region boundaries smooths regions which originally possessed similar colors. + +The cross-filtering mask size will affect the outcome. Larger masks will produce a stronger blending effect; small features will be smoothed out, and the output image will become blurry in regions lacking edges. Figure 11 illustrates examples of blending using masks of sizes $n = 20$, $100$, and $300$. Blending with $n = 20$ only slightly modifies the image; for larger masks, the blending is more apparent. At $n = 300$ we can see a definite blurring in originally smooth areas, although blurring does not happen across original edges. Using a gray palette, we can obtain an effect resembling a charcoal drawing with larger masks.
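A naive, single-channel sketch of the cross-filtering idea: smooth the recolored image with a bilateral-style kernel whose range weights come from a guide image (e.g., the original), so blending stops at the guide's strong edges. This is an illustrative stand-in for the edge-preserving filter of [14], not the implementation used here; the parameter names and default values are assumptions:

```python
import numpy as np

def cross_bilateral(recolored, guide, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Naive cross (joint) bilateral filter on a single channel.

    recolored: (H, W) float array to be smoothed
    guide:     (H, W) float array supplying the edge (range) weights
    radius:    half the mask size n/2 referred to in the text
    """
    H, W = guide.shape
    out = np.zeros_like(recolored, dtype=float)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            gy, gx = np.mgrid[y0:y1, x0:x1]
            # spatial falloff around (y, x)
            spatial = np.exp(-((gy - y) ** 2 + (gx - x) ** 2) / (2 * sigma_s ** 2))
            # range falloff measured in the *guide*, not the recolored image
            rng = np.exp(-(guide[y0:y1, x0:x1] - guide[y, x]) ** 2 / (2 * sigma_r ** 2))
            w = spatial * rng
            out[y, x] = (w * recolored[y0:y1, x0:x1]).sum() / w.sum()
    return out
```

The window radius plays the role of the mask size $n$: larger radii blend more aggressively in edge-free areas, while edges present in the guide remain intact.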
+ +§ 5.2 COLOR SPACES AND DISTANCE METRICS + +We can employ different choices for our color distance function $\Delta$ . Different choices of color space and distance metric can affect the colorization results. Changing the distance metric will cause both the widest-path tree and the color assignment to change. + +We have experimented with computing color distances with the Euclidean distance in RGB as well as using the perceptually uniform CIE94, CIEDE2000, CIE76, and CMC colorimetric distances $\left\lbrack {{21},{27}}\right\rbrack$ . + +Both CIE94 and CIEDE2000 are defined in the Lch color space. However, CIE94 differences in lightness, chroma, and hue are calculated from Lab coordinates. CMC is a quasimetric based on the Lch color model. The CIE76 metric uses Euclidean distance in Lab space. + +In Figure 12, we show different outcomes from different metrics using two palettes. We can observe strong edge and contrast preservation, an apparent benefit of the perceptually uniform metrics. More importantly, each metric gives a unique recolorization to the abstraction, which allows a user to choose from different output images. We can get interesting results from each metric. However, in our judgement, more attractive results are obtained from the Euclidean and CMC metrics; the delicate features and image contrast are maintained, and objects are generally preserved. The CIE94 and CIEDE2000 metrics are also effective, but we found that the CIE76 metric rarely creates interesting results. + +§ 6 CONCLUSIONS AND FUTURE WORK + +In this paper, we presented a graph-based recoloring method that takes an oversegmented image and a palette as input, and then assigns colors to each region. The result uses the palette colors to portray the image content, but without attempting to match the input colors. Designing our algorithm around the widest-path tree allowed us to maintain the image contrast and objects' recognizability.
We demonstrated our results with different palettes. We achieved vivid recolorization effects, effective for most combinations of input images and palettes. + +In the future, we would like to investigate non-convex palette color augmentation, adding new colors that extend an input palette while matching the palette's theme. We would like to extend the color assignment to consider color harmony, scoring based on compatibility of colors and thus affecting the ability of certain colors to be neighbors. Furthermore, we would like to be able to recolor smoothly changing regions like the sky more uniformly. Adding elements of user control would allow for better cooperation between the present automated method and the user's intent. + +§ ACKNOWLEDGMENTS + +The authors thank ... \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Hce9RpAIZbc/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Hce9RpAIZbc/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..8becfb8fb626299aa6c6d2143a3afeb115d115f9 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Hce9RpAIZbc/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,327 @@ +# Automatic Slouching Detection and Correction Utilizing Electrical Muscle Stimulation + +Category: Research + +![01963e88-d534-77ba-a20f-2fb9d83806d9_0_227_393_1345_445_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_0_227_393_1345_445_0.jpg) + +Figure 1: Improper posture can have long-term health ramifications.
Presented here are images of slouched and corrected posture using Electrical Muscle Stimulation: (A) Mobile gaming, slouched posture; (B) Mobile gaming, corrected posture; (C) Text entry, slouched posture; (D) Text entry, corrected posture. + +## Abstract + +Habitually poor posture can lead to repetitive strain injuries that lower an individual's quality of life and productivity. Slouching over computer screens and smart phones is one common example that leads to soreness and stiffness in the neck, shoulders, and upper and lower back regions. To help cultivate good postural habits, researchers have proposed slouch detection systems that alert users when their posture requires attention. However, such notifications are disruptive and can be easily ignored. We address these issues with a new physiological feedback system that uses inertial measurement unit sensors to detect slouching, and electrical muscle stimulation to automatically correct posture. In a user study involving 36 participants, we compare our automatic approach against two alternative feedback systems across two unique contexts: text entry and gaming. We find that our approach was perceived to be more accurate and interesting, and that it outperforms the alternative techniques in the gaming scenario but not the text entry scenario. + +Index Terms: Human-centered computing-Human computer interaction (HCI)-Wearable computing-Preventive healthcare-Posture correction-Slouching-Electrical Muscle Stimulation + +## 1 INTRODUCTION + +The alignment of our body parts, bolstered by the correct muscular tension against gravity, plays a crucial role in maintaining good and healthy posture. Poor posture in working environments leads to Repetitive Strain Injuries (RSI) and long-term musculoskeletal disorders [43], which are becoming increasingly prevalent in the working-age population.
Improper occupational standards, poor workstation arrangements, and gaming in unnatural seated positions are often the biggest factors contributing to RSI at the workplace and at home. Poor posture has also been linked to health deterioration and low productivity [2]. Repetitive processes, such as the use of computer systems and smart phones, present a high risk of RSI, examples of which include wrist extension, neck cradling, forward neck, slouching, and uneven weight distribution on the legs. In this paper, we address slouching, one such RSI. The RSI associated with slouching, if not detected, analyzed, and corrected at an early stage, may lead to the development of poor posture habits which induce intense pain, trigger point pain, and muscle tightness in the chest, neck, shoulders, and back regions. + +In the United States, nearly \$90 billion [8] is spent every year on the treatment of RSI and lower back pain arising out of poor workplace postures, primarily slouching [10]. Recently, a study showed that slouching affects the transversus abdominis muscle [43], which is responsible for stabilizing the torso in an upright position while standing and sitting. The muscle dysfunction, or dystrophy, caused by slouching is directly associated with lower back pain. Lower back pain/injuries are one of the noted root causes of disability in the world, where it is estimated that around ${80}\%$ of the world's population will experience it at some point in their lives $\left\lbrack {{19},{46}}\right\rbrack$ . Current intervention technologies only deal with the detection of slouching and rely completely on the user's willingness to self-correct their posture when a feedback alert is presented. As a result, there is a dire need for a wearable intervention technology capable of automatically detecting slouched posture and subsequently correcting it to help users maintain good posture during their work and gaming activities.
+ +Electrical muscle stimulation (EMS) can be applied to cause involuntary muscular contractions and generate a physiological response $\left\lbrack {9,{42},{49}}\right\rbrack$ . In this work, we integrated EMS with a slouch detection system to automatically correct habitually slouched posture and restore correct posture through these involuntary muscular contractions. The main contributions of this work include the development of a wearable intervention prototype that can autonomously detect slouching and correct posture through a physiological feedback loop utilizing EMS. We also evaluate the performance of our approach in breaking the habit of slouching through automatic detection and subsequent correction of slouched posture, and for training and developing good postural habits. + +## 2 RELATED WORK + +Although there is a consensus that posture can be improved and fixed through ergonomic solutions such as adjusting desk and chair heights, monitor viewing angles, and keyboard and mouse positions [2], there exists only a small number of reliable techniques for continued posture monitoring and detection of poor posture, especially slouching. With the increase in computational power, sensor technology, and advances in wearable technology, posture monitoring has been attracting increased attention for developing detection and alert-based wearables that allow users to self-correct their posture based on feedback from the system $\left\lbrack {{13},{16},{23}}\right\rbrack$ . This development of sensors and their integration into wearables has also enabled real-time monitoring and live feedback systems that are independent of the location of the work environment. Despite these efforts, and due to the novelty of wearable intervention technologies, current research is focused on developing more accurate monitoring techniques and predictive algorithms for better detection of poor posture.
As a result, aspects such as integration of embedded sensors into wearables, aesthetics, usability, wearability, and user comfort are often neglected. Most slouching posture detection techniques employ a variety of sensors such as IMUs, force-sensitive resistors, electromagnetic inclinometers [3], fiber-optic sensors [4], cameras [51], and smart garments [1, 12] for monitoring, detecting, and alerting users. + +### 2.1 Posture Detection with Real Time Feedback + +Slouching posture detection with real-time feedback mainly employs three types of feedback: visual, audio, and haptic. Information about the user's posture, and the need to self-correct it, is conveyed through visual notifications on computer monitors or smart phones, voice/auditory alarms, and vibration alerts, respectively. Researchers integrated audio feedback with strain gauges [38], instrumented helmets [55], accelerometers [12], gyroscopes [53], and IMUs [1] for detection of poor posture and development of good posture habits. Similarly, real-time visual feedback approaches utilizing a set of IMUs for posture monitoring were developed $\left\lbrack {{16},{23}}\right\rbrack$ to deliver visual feedback via smartphone applications. + +Real-time visual and audio feedback was integrated with wearable posture detection systems such as Zishi [52], SPoMo [40] and Spineangel [44]. Zishi, an instrumented vest integrated with IMU sensors on the upper and lower spine, delivered alerts using visual and audio channels through a smartphone application. SPoMo utilized a set of IMUs for monitoring spinal position in seated posture, while Spineangel was developed as a wearable belt with IMUs to investigate the relationship between poor posture and lower back pain. Building upon this, Limber [22] was designed as a minimally disruptive method for detecting poor posture and alerting in an office-style workplace.
Their system incorporated IMUs (accelerometer and strain gauges) on the shoulders, spine, and neck to detect poor posture while sitting and implemented a positive and negative feedback system based on correct and incorrect postures, respectively. Researchers also developed hybrid posture detection techniques utilizing electromagnetic technology and accelerometers [3] and inductive sensors [47], providing real-time vibrotactile feedback to users for self-correction. Additionally, researchers integrated haptic feedback with IMUs for postural balance [54] and gait training [13]. Xu et al. [54] devised a system that utilized eight IMUs placed on either side of the torso to monitor trunk tilt and provide vibrotactile feedback. Advancing upon this, Gopalai et al. [13] utilized a wireless IMU attached to the trunk and a wobble board for providing real-time biofeedback via vibrational alerts for postural control. Further, a comparative study on posture detection aimed at improving spinal posture concluded that the IMU-based system performed more accurately than the vision-based system [51]. Also, a qualitative assessment of different commercially available wearables for posture detection (LUMO Lift, LUMO Back, and Prana) and correction through real-time haptic and visual feedback systems concluded that haptic feedback based on real-time monitoring enabled a shared responsibility for detecting poor posture and subsequently delivering real-time feedback/alerts for users to correct themselves [35].
Traditionally, electrical stimulation was utilized for therapeutic pain management, to alleviate chronic muscle strain and acute muscle spasms, and in rehabilitation to regain muscle strength $\left\lbrack {5,9}\right\rbrack$ and normal movement after injury or surgery [49]. EMS has also been utilized for the application of cyclical, moderate-intensity electrical impulses to selected muscles for generating physiological responses. It has been used for generating functional movements that resemble voluntary muscle contractions, such as evoking hand opening [9], to stimulate reflexes for swallowing disorders [45], to enable neuro-prosthesis control [42], and to restore functions that have been lost through injury, surgery, or muscle disuse. + +### 2.3 EMS in Human Computer Interaction + +EMS has found new interest in human computer interaction (HCI) for applications in gaming, augmented reality, and virtual reality (VR) training [20, 24-27, 48]. This presents an opportunity for developing novel interaction techniques using adaptive wearable interfaces. Researchers have applied EMS technology to different interactive applications such as activity training, immersive feedback technologies, and spatial user interfaces. On account of its ability to invoke involuntary muscular contractions, EMS has been utilized in activity training to enable users to acquire and develop new skills such as playing musical instruments [50], learning object affordance [31], and developing preemptive reflex reactions [18, 36]. Additionally, EMS-based feedback systems integrated with augmented, virtual, and mixed reality technologies have been developed for providing users with additional immersion and engagement. EMS was utilized in developing a wearable stimulation device for sharing and augmenting kinesthetic feedback [37] to facilitate virtual experiences of hand tremors in Parkinson's disease.
Impacto [28] was designed and developed to provide the haptic sensation of hitting and being hit in VR. EMS was also utilized to increase realism in VR applications by actuating emotions, such as inducing fear and pain in In-pulse [21] and transferring emotional states between users in Emotion Actuator [15]. Lopes et al. [29, 32] developed realistic physical experiences in VR by providing haptic feedback for walls and obstacles. They also developed force feedback for mobile devices [34] by employing EMS to invoke involuntary muscular contractions to tilt the device sideways. Further, they extended EMS-based force feedback to mixed reality gaming to add physical forces to virtual objects [33]. Similarly, Fabriz et al. [11] utilized EMS to simulate impact in an augmented reality tennis game to deliver a more immersive experience. Further, the integration of input and output interfaces with EMS technology has allowed the development of physiological feedback loops and enabled novel communication techniques. EMS has been employed in actuated navigation [41], actuated sketching [34], and delivery of discrete notifications and conversations [14]. EMS has also been utilized for enabling unconscious motor learning such as heel striking [6] and for co-creating visual artworks [7]. Pose-IO, developed by Lopes et al. [30], allowed users to interact without the visual and audio senses, relying completely on the proprioceptive sense.
In comparison, prior research on posture detection always relied on the user's willingness to self-correct their posture upon being presented with feedback, while our autonomous system was capable of automatically correcting the slouched posture upon detection. + +![01963e88-d534-77ba-a20f-2fb9d83806d9_2_156_148_714_664_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_2_156_148_714_664_0.jpg) + +Figure 2: Physiological Feedback Loop: Automatic Slouching Detection and Correction System + +![01963e88-d534-77ba-a20f-2fb9d83806d9_2_154_923_716_191_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_2_154_923_716_191_0.jpg) + +Figure 3: Wireless IMU sensor placement for posture monitoring and detecting slouched posture: (A) Side view showing sensor placement on left deltoid, (B) Front view showing sensor placement below center of collar bone above the chest, (C) Side view showing sensor placement on right deltoid + +## 3 AUTOMATIC SLOUCHING DETECTION AND CORRECTION UTILIZING EMS + +We developed a wearable intervention prototype based on a physiological feedback loop, relying on IMU sensors and EMS (illustrated in Figure 2). The prototype utilized three Metawear MMC wireless sensors for measuring angular changes, and the openEMSstim package [26] for presenting the EMS feedback. We utilized the Metawear C# SDK to develop a user interface and to integrate the EMS hardware to complete the physiological feedback loop. The change in posture was calculated from the angular information obtained from the IMU sensors. As slouching is mainly characterized by torso inclination and forward rolling of the shoulders [17], IMU 1 was placed at the center of the collar bone above the chest (illustrated in Figure 3 (B)) and IMUs 2 and 3 were placed on the center of each deltoid (illustrated in Figure 3 (A) & (C)).
The user's torso inclination angle was calculated from the pitch of IMU 1, and the shoulder roll angles were calculated from the roll of IMUs 2 and 3. Our system detected slouching when the user's current torso inclination and shoulder roll angles both reached and remained at a threshold level for a period of 5 seconds. The threshold level was preset as $-3^{\circ}$ relative to the torso inclination and shoulder roll angles recorded in the slouched position during calibration. The $-3^{\circ}$ margin was chosen to overcome measurement errors without increasing false positives, and the 5-second timer ensured that random movements did not lead to false-positive slouch detection. These design choices were validated during our pre-study trials. Crossing the $-3^{\circ}$ threshold initiated the 5-second timer, and the slouch angles were recorded at the end of the timer, when the feedback was presented. The purpose of the timer was to ensure that false positives due to participant behavior did not trigger the feedback response. When slouching was detected, the automatic correction feedback was presented by applying an electrical stimulus to the rhomboid muscles (illustrated in Figure 4), generating a pulling force in the direction opposite to the slouched posture and thereby producing a physiological response that brings the user back to the upright position. Two pairs of electrodes were utilized to contract the rhomboid muscles, which pulls the shoulder blades back, unrolling the shoulders and bringing the torso back to the upright posture. IMU and EMS calibration play a crucial role in the effectiveness of the system. The calibration process included correcting the IMUs' offset values in the user's upright position and recording the angular change in the slouched posture with respect to the upright position.
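The detection rule above can be expressed as a minimal sketch (hypothetical Python with invented names; the actual prototype was implemented with the Metawear C# SDK): slouching is flagged only when both the torso pitch and the shoulder roll stay past thresholds, set 3 degrees shy of the calibrated slouch angles, for 5 continuous seconds.

```python
# Hypothetical sketch of the slouch-detection rule described above.

SLOUCH_HOLD_S = 5.0      # timer duration before feedback is triggered
THRESHOLD_MARGIN = 3.0   # degrees subtracted from the calibrated slouch angles

class SlouchDetector:
    def __init__(self, calibrated_torso, calibrated_roll):
        # thresholds sit 3 degrees before the calibrated slouch angles
        self.torso_threshold = calibrated_torso - THRESHOLD_MARGIN
        self.roll_threshold = calibrated_roll - THRESHOLD_MARGIN
        self.timer_start = None

    def update(self, torso_pitch, shoulder_roll, now):
        """Feed one sensor sample; returns True once slouching has
        persisted past both thresholds for SLOUCH_HOLD_S seconds."""
        past_threshold = (torso_pitch >= self.torso_threshold
                          and shoulder_roll >= self.roll_threshold)
        if not past_threshold:
            self.timer_start = None     # random movement: reset the timer
            return False
        if self.timer_start is None:
            self.timer_start = now      # threshold crossed: start the timer
        return now - self.timer_start >= SLOUCH_HOLD_S
```

With calibrated slouch angles of, say, 21° torso inclination and 15° shoulder roll, the thresholds would sit at 18° and 12°, and a posture held past both for five seconds would trigger the correction feedback.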
The EMS intensity was calibrated by manual increments to find an intensity optimal for generating involuntary muscular contraction while avoiding any pain. The EMS intensity that generated the pulling force necessary for correcting the slouched posture and restoring the upright position was recorded and utilized during the experiment. The TENS device was able to deliver intensities between 0 and 100 mA. A continuous $75\,\mathrm{Hz}$ square-wave pulse at the recorded EMS intensity, with a pulse width of $100\,\mu\mathrm{s}$, was supplied as the electrical stimulus to the users.

![01963e88-d534-77ba-a20f-2fb9d83806d9_2_1142_152_288_224_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_2_1142_152_288_224_0.jpg)

Figure 4: EMS electrode placement on rhomboid muscles for auto-correction using EMS feedback

## 4 METHODS

The goal of our study was to evaluate the overall effectiveness and user perception of an automatic detection and correction feedback system (EMS) that would neither disrupt the task nor require additional cognitive load from the user to self-correct their posture. Hence, we compared it against traditional audio and visual feedback mechanisms that required self-correction by the user: the visual and audio feedback modalities required users to self-correct their posture based on visual notifications and audio sounds, respectively. We also identified two common causes of slouching in day-to-day activities, computer-related workplace tasks and mobile gaming [17, 39], and investigated the effectiveness and user perception of our automatic slouching detection and correction prototype across these common causes of slouching.
Our objective was to determine whether our automatic posture detection and correction system using EMS would be a viable technique for correcting slouched posture, as compared to the visual and audio feedback channels, while users were engaged in a computer-related workplace task or playing a mobile game.

Table 1: User rankings on posture awareness, devices, and EMS
| User Experience | Application | Mean | S.D. |
| --- | --- | --- | --- |
| Exposure to Posture Alert Devices | Text Entry | 1.61 | 1.16 |
| | Mobile game | 1.89 | 1.09 |
| Exposure to EMS | Text Entry | 2.33 | 1.37 |
| | Mobile game | 2.06 | 1.43 |
| Experienced posture problems | Text Entry | 4.22 | 1.55 |
| | Mobile game | 4.33 | 1.49 |
| Experienced slouching | Text Entry | 5.06 | 1.50 |
| | Mobile game | 5.40 | 0.96 |
Note: User experience rankings based on a 7-point scale, where 1 means never / no experience and 7 means frequently / very experienced

### 4.1 Subjects and Apparatus

We recruited 36 participants (male = 31, female = 5) for the study, with 18 participants for each application: text entry and mobile game. All participants were above the age of 18 years, and the mean participant age was 22.05 years (S.D. = 3.13). All participants were able-bodied and had corrected 20/20 vision. We used three Metawear MMC IMU sensors for monitoring the torso inclination angles and the shoulder roll angles. The EMS was generated with an off-the-shelf TENS unit (TNS SM2MF) and controlled by the openEMSstim package, which activated and modulated the intensity of the electrical stimuli supplied to the muscles. The hardware used for the text entry application was a 14" Intel i7 laptop, and a 2nd-generation iPhone SE was used for the mobile game application. Participants' rankings from the pre-questionnaires of their prior exposure to posture alert devices and EMS, and of their experience with posture problems and slouching, are reported in Table 1. Participants ranked their exposure and experience on a 7-point scale, with 1 meaning never / no experience and 7 meaning frequently / very experienced.
We compared the performance of EMS auto-correction against the visual and audio feedback techniques. We also evaluated the average correction response times and user perception of the system in detecting slouched postures and subsequently either self-correcting based on the visual or audio alert feedback or automatically correcting posture using EMS. The text entry application required users to complete a text entry task, and the mobile game application required users to play the battle royale game "PlayerUnknown's Battlegrounds (PUBG) Mobile"$^{1}$ on a smartphone. We selected PUBG Mobile based on its popularity (400 million players), level of engagement, and demographics (primarily students and office workers aged 15-35 years, who may be prone to long work or gaming hours). In both applications, users were required to complete all three modalities:

- Modality 1: Visual alert feedback and self-correction

- Modality 2: Audio alert feedback and self-correction

- Modality 3: EMS feedback and automatic correction

The order of the modalities introduced to the users was counterbalanced to include all permutations and to minimize learning effects. In each application, all participants were required to complete all three modalities to complete the study. The independent variable in the study was the modality (with three levels), and the dependent variables were the average correction response times and user-perception parameters such as overall experience, effectiveness of slouching detection, effectiveness of correction feedback, user engagement and task disruption, and modality comfort. Each study session lasted approximately 75 minutes, and participants were compensated \$15 for their participation.
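As a small illustration of the counterbalancing arithmetic (the cyclic assignment scheme here is an invented example; the paper states only that all permutations were included): three modalities give $3! = 6$ orderings, so the 18 participants per application can cover each ordering exactly three times.

```python
# Sketch of full counterbalancing: 3 modalities -> 6 orderings,
# assigned cyclically so 18 participants cover each ordering 3 times.
from itertools import permutations
from collections import Counter

modalities = ("visual", "audio", "ems")
orders = list(permutations(modalities))               # all 3! = 6 orderings
assignment = {p: orders[p % len(orders)] for p in range(18)}

counts = Counter(assignment.values())                 # 3 participants per ordering
```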
### 4.3 Research Hypotheses

While slouching detection and alert feedback systems have been designed and tested by researchers, slouching correction response times and user perception of such systems have not been measured or reported. As such, we expected that there may be significant differences in average correction response times and user-perception parameters across the three modalities that would influence the user experience. Our study was therefore designed to help determine the effects of self-correction using visual and audio feedback, and of automatic correction using EMS, on user experience across the two applications. We had five research hypotheses, each with two parts (a and b), for investigating the user perception of our approach in the text entry and mobile game contexts.

- H1: In the text entry (a) or the mobile game (b), the average correction response time to slouching feedback will be faster with EMS feedback compared to the visual and audio alert feedback.

- H2: In the text entry (a) or the mobile game (b), the user-perceived accuracy of slouching posture correction with EMS feedback will be greater than with visual and audio alert feedback.

- H3: In the text entry (a) or the mobile game (b), comfort with EMS feedback will not be significantly different compared to visual and audio alert feedback.

- H4: In the text entry (a) or the mobile game (b), task disruption with EMS feedback will not be significantly different compared to visual and audio alert feedback.

- H5: In the text entry (a) or the mobile game (b), automatic correction using EMS feedback delivers a better user experience compared to visual and audio alert feedback.

### 4.4 COVID-19 Considerations

Due to the ongoing COVID-19 pandemic, we wanted to ensure the safety of the participants and researchers. Following our institution's guidelines, all individuals were required to wear face masks at all times.
Between participants, we sanitized all devices and surfaces that the participants and researchers would be in contact with, to ensure safety during the study. We also provided each individual with face masks, hand sanitizer, cleaning wipes, and latex gloves to reduce the risk of contracting the disease. Though we cleaned all surfaces between participants, we additionally allowed each individual to clean devices as desired.

### 4.5 Experimental Procedures

The participants were supplied with a consent form detailing the nature of the experiment, safety, risks, required compliance, and compensation. The participants were required to review the consent form and provide verbal consent before the study session started. The participants were then asked to complete a pre-questionnaire that collected basic information about their knowledge of posture-related issues at the workplace, intervention technology, and EMS.

---

${}^{1}$ https://www.pubg.com/

---

![01963e88-d534-77ba-a20f-2fb9d83806d9_4_152_149_721_401_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_4_152_149_721_401_0.jpg)

Figure 5: Text entry study showing 50-50 split screen with a PDF document (zoom set to 40%) on the left and the Word document (zoom set to page width) on the right. Participants were required to read from the PDF document and type into the Word document.

Next, the participants were prepared for the study by placing adhesive IMU sensors on the deltoids and the center of the collar bone above the chest (as shown in Figure 3) for data collection and detection of the slouching position. Adhesive EMS electrodes were placed on the rhomboid muscles prior to the EMS feedback session for correcting slouching (as shown in Figure 4).
After sensor set-up, the participants were seated in an upright position in front of the laptop or smartphone, with their hands placed on the keyboard or holding the smartphone. Participants were then calibrated for the upright and slouched positions. First, the IMU sensors were corrected for their offset values in the user's upright position; then users were asked to emulate a slouched position by inclining their torso and rolling their shoulders forward. The upright and slouched posture angles were thus recorded.

Prior to the EMS feedback session, the participants were calibrated once for the EMS intensity that generated the physiological response of sitting upright. The participants were fitted with two pairs of EMS electrodes on their rhomboid muscles. The EMS intensity calibration was done manually for each participant: moderators incremented the intensity until an involuntary muscular contraction causing posture correction was achieved. During calibration, participants were asked to slouch while moderators manually incremented the EMS intensity. As EMS also produces a tactile or haptic effect even at low intensities, participants were asked not to respond to this effect, ensuring that the haptic/tactile component of EMS did not contribute to the automatic correction process in any way. Moderators additionally asked participants to verbally respond to the following questions during calibration, to ensure rhomboid muscular contraction and participant comfort: 1) when they initially felt the stimulation (haptic sensation), 2) when the intensity was generating an involuntary muscular contraction and/or when they experienced the pulling force towards the upright posture, and 3) when any pain was experienced.
For each participant, once involuntary muscular contraction was confirmed verbally by the participant and visually verified by the moderators, the optimal EMS intensity generating an involuntary muscular contraction to correct the slouched posture was recorded and selected for the EMS part of the study.

All of the above steps were identical for the text entry and mobile gaming applications. In the text entry task, users were asked to read from a PDF document and type into a Word document. The PDF and Word documents were presented in a 50-50 split screen.

![01963e88-d534-77ba-a20f-2fb9d83806d9_4_926_149_719_342_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_4_926_149_719_342_0.jpg)

Figure 6: Mobile game study showing the lobby area of PUBG Mobile prior to the start of the game

![01963e88-d534-77ba-a20f-2fb9d83806d9_4_928_615_708_555_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_4_928_615_708_555_0.jpg)

Figure 7: Text entry study, visual feedback: showing a Windows 10 pop-up visual notification on the bottom right of the screen (A) to correct posture when slouching is detected, (B) after posture has been corrected

For the purpose of the study, the PDF zoom was set to 40% to promote or cause slouching while reading (illustrated in Figure 5). In the mobile game task, users were asked to play PUBG Mobile (illustrated in Figure 6). In both applications, the user's posture was monitored for slouching and upright positions during the study. The study comprised three parts: visual, audio, and EMS feedback, each 15 minutes in duration. Upon completion of each part, participants were required to complete a post-questionnaire about their experience. All participants were required to finish all three parts to complete the study.
#### 4.5.1 Visual feedback and self-correction

Text Entry Application: When slouching was detected by the system based on the IMU sensor data, a Windows 10 visual pop-up notification, "Please correct your posture," was displayed on the bottom right corner of the monitor (illustrated in Figure 7a), and the users were required to sit upright and self-correct their slouched posture until a second visual pop-up notification, "Posture corrected," was displayed (illustrated in Figure 7b). The response times for correcting the slouched posture were recorded.

Mobile Game Application: When slouching was detected by the system based on the IMU sensor data, an SMS was sent from the C# application to the smartphone with the message "Posture alert: Please correct your posture" and displayed as a drop-down badge notification on the smartphone (illustrated in Figure 8a). After receiving the visual alert notification, the users were required to sit upright and self-correct their slouched posture until another SMS containing the message "Posture corrected" was displayed (illustrated in Figure 8b). The response times for correcting the slouched posture were recorded.

![01963e88-d534-77ba-a20f-2fb9d83806d9_5_160_160_699_792_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_5_160_160_699_792_0.jpg)

Figure 8: Mobile game study, visual feedback: showing visual notification badges dropping down from the top of the display (A) to correct posture when slouching is detected, (B) after posture has been corrected

#### 4.5.2 Audio feedback and self-correction

Text Entry Application: When slouching was detected by the system, an audio notification, "Please correct your posture," was played, and the users were required to sit upright and self-correct their slouched posture until another audio notification, "Posture corrected," was played. The response times for correcting the slouched posture were recorded.
Mobile Game Application: When slouching was detected by the system, an audio notification bell sound was played, and the users were required to sit upright and self-correct their slouched posture until another notification bell was played. The response times for correcting the slouched posture were recorded.

#### 4.5.3 EMS feedback and auto-correction

Text Entry and Mobile Game Applications: When slouching was detected by the system, the EMS was activated to apply the recorded EMS intensity to the rhomboid muscles and invoke an involuntary muscle contraction. This contraction produces a pulling force in the direction opposite to the slouched posture, generating the physiological response of sitting upright by correcting the torso inclination and shoulder roll caused by slouching. Figure 1 (A) and (C) illustrate the slouched posture during the mobile game and text entry studies, respectively; Figure 1 (B) and (D) illustrate the corrected posture after EMS has been applied in the mobile game and text entry studies, respectively. The EMS was deactivated once the upright position was detected. The response times for correcting the slouched posture were recorded.

Table 2: Average slouch angles in degrees
| Slouch angle | Application | Mean | S.D. |
| --- | --- | --- | --- |
| Torso inclination angle | Text Entry | 21.00° | 3.88° |
| | Mobile game | 18.24° | 2.80° |
| Shoulder roll angle | Text Entry | 15.10° | 3.00° |
| | Mobile game | 13.84° | 2.22° |
On completion of the study, participants were required to complete a comparative post-questionnaire regarding their experience with all three modalities. All data from the sensors and the EMS was automatically recorded in the system for further analysis and reporting.

## 5 RESULTS

The mean frequency of slouching in the text entry condition was 7.72, 10, and 8.72 for the audio, visual, and EMS feedback modalities, respectively, and 7.05, 9.11, and 8.38, respectively, in the mobile game condition. The average torso inclination and shoulder roll angles recorded for the slouched posture during the calibration process, and used for slouching detection, are shown in Table 2. For text entry, the mean torso inclination angle was $21^{\circ}$ (S.D. = $3.88^{\circ}$), while the mean shoulder roll angle was $15.1^{\circ}$ (S.D. = $3^{\circ}$). For the mobile game, the mean torso inclination angle was $18.24^{\circ}$ (S.D. = $2.8^{\circ}$), while the mean shoulder roll angle was $13.84^{\circ}$ (S.D. = $2.22^{\circ}$). For the text entry application, the mean electrical stimulation intensity required to correct slouched posture was 39.72 mA (S.D. = 13.17 mA), while for the mobile game task it was 47.22 mA (S.D. = 11.08 mA). To address H1, one-way repeated-measures ANOVAs were performed on the average correction response times across all feedback types in the text entry application and the mobile game task separately. To address H2 through H5, non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of effectiveness of correction feedback, comfort, task disruption, and overall experience. Wilcoxon signed-rank tests were performed where significant differences were found.
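The omnibus-then-post-hoc procedure for H2-H5 can be sketched with SciPy on synthetic rankings (illustrative only; the 7-point rankings below are invented stand-ins, not the study data):

```python
# Friedman omnibus test across the three feedback types, followed by
# Bonferroni-corrected Wilcoxon signed-rank post-hoc tests if significant.
from itertools import combinations
from scipy import stats

ranks = {  # 18 hypothetical participants per feedback type
    "visual": [5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5, 5, 4, 6, 5, 5, 6],
    "audio":  [5, 5, 6, 5, 5, 6, 4, 5, 5, 6, 5, 5, 6, 5, 5, 4, 6, 5],
    "ems":    [7, 6, 7, 7, 6, 7, 7, 6, 7, 7, 7, 6, 7, 7, 6, 7, 7, 7],
}

chi2, p = stats.friedmanchisquare(*ranks.values())
print(f"Friedman: chi2 = {chi2:.3f}, p = {p:.4f}")

if p < 0.05:                               # omnibus test significant
    pairs = list(combinations(ranks, 2))
    alpha = 0.05 / len(pairs)              # Bonferroni: 0.05 / 3 ~= 0.017
    for a, b in pairs:
        w, p_pair = stats.wilcoxon(ranks[a], ranks[b])
        verdict = "significant" if p_pair < alpha else "n.s."
        print(f"{a} vs {b}: p = {p_pair:.4f} ({verdict} at {alpha:.3f})")
```

Dividing the significance level by the number of pairwise comparisons is what yields the $p < 0.017$ threshold reported for the post-hoc tests below.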
The results are consolidated in Table 3 below.

For H1(a), a one-way repeated-measures ANOVA was conducted on the influence of correction feedback type on the average correction response times for correcting posture after slouching was detected and correction feedback was presented to the user. The correction feedback type consisted of three levels (visual, audio, and EMS). The main effect of correction feedback type was significant at the .05 level, $F(2,34) = 5.382$, $p < .05$, indicating a significant difference between visual feedback ($M = 3.86$, S.D. = 1.27), audio feedback ($M = 3.9$, S.D. = 1.28), and EMS feedback ($M = 2.89$, S.D. = 1.74). For H1(b), the same one-way repeated-measures ANOVA was conducted for the mobile game application. The main effect of correction feedback type was again significant, $F(2,34) = 20.66$, $p < .001$, indicating a significant difference between visual feedback ($M = 5.98$, S.D. = 2.4), audio feedback ($M = 4.44$, S.D. = 0.75), and EMS feedback ($M = 2.70$, S.D. = 1.04). For the text entry application, average correction response times were faster for EMS feedback than for visual feedback ($t_{17} = -0.961$, $p < 0.05$), but no significant differences were found between the EMS and audio feedback types, or between the visual and audio feedback types.
For the mobile game application, average correction response times were faster for audio feedback than for visual feedback ($t_{17} = -1.538$, $p < 0.05$); EMS feedback was faster than visual feedback ($t_{17} = -3.276$, $p < 0.01$) and also faster than audio feedback ($t_{17} = -1.737$, $p < 0.001$). The post-hoc analysis between the three feedback types shows that hypothesis H1(a) tested false, in that the average correction response times were faster for EMS feedback than for visual feedback but not for audio feedback. H1(b) tested true, in that the average correction response times were faster for EMS feedback than for both the visual and audio feedback types, as illustrated in Figure 9 (A) and (B).

![01963e88-d534-77ba-a20f-2fb9d83806d9_6_160_159_700_746_0.jpg](images/01963e88-d534-77ba-a20f-2fb9d83806d9_6_160_159_700_746_0.jpg)

Figure 9: Average correction response times (in seconds) across (A) text entry and (B) mobile game for all correction feedback types: (1) visual, (2) audio, (3) EMS. Error bars: 95% CI.

To address H2(a) and (b), non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of the accuracy of the correction feedback type in the text entry and mobile game applications separately. The test rendered $\chi^2 = 3.591$, $p = 0.166$, which was not significant ($p > .05$) for the text entry application, while for the mobile game application the test rendered $\chi^2 = 7.259$, $p = 0.027$, which was significant ($p < .05$). A post-hoc analysis with Wilcoxon signed-rank tests was conducted for the mobile game application with a Bonferroni correction applied, resulting in a significance level of $p < 0.017$.
Median perceived accuracy of slouching correction for the visual, audio, and EMS feedback was 6, 6, and 7, respectively. There was a statistically significant difference between the visual and EMS correction feedback types ($Z = -2.591$, $p = 0.010$) and between the EMS and audio correction feedback types ($Z = -2.585$, $p = 0.010$). However, there was no significant difference between the audio and visual correction feedback types ($Z = -0.942$, $p = 0.346$). Therefore, H2(a) tested false, indicating that users perceived all three feedback types as equally accurate in the text entry application, whereas H2(b) tested true, indicating that users perceived the EMS correction feedback as more accurate than the visual and audio feedback in the mobile game application.

Table 3: Friedman test results on the user rankings for H2-H5
| User perception | Application | $\chi^2$ | $p$ |
| --- | --- | --- | --- |
| Accuracy of Correction Feedback | Text Entry | 3.592 | 0.166 |
| | Mobile game | 7.259 | 0.027* |
| Comfort | Text Entry | 1.345 | 0.510 |
| | Mobile game | 4.550 | 0.103 |
| Task Disruption | Text Entry | 0.092 | 0.955 |
| | Mobile game | 5.607 | 0.061 |
| Overall Experience | Text Entry | 0.407 | 0.816 |
| | Mobile game | 0.400 | 0.819 |
Note: * indicates a significant difference, $p < 0.05$

For H3(a) and (b), non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of comfort across all feedback types in the text entry and mobile game applications separately. The test rendered $\chi^2 = 1.345$, $p = 0.510$, which was not significant ($p > .05$) for the text entry application, while for the mobile game application the test rendered $\chi^2 = 4.550$, $p = 0.103$, which was also not significant ($p > .05$). Therefore, both H3(a) and (b) tested true, indicating that users perceived all three feedback types as equally comfortable in the text entry and mobile game applications.

For H4(a) and (b), non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of task disruption across all feedback types in the text entry and mobile game applications separately. The test rendered $\chi^2 = 0.092$, $p = 0.955$, which was not significant ($p > .05$) for the text entry application, while for the mobile game application the test rendered $\chi^2 = 5.607$, $p = 0.061$, which was not significant ($p > .05$). Therefore, both H4(a) and (b) tested true, indicating that users perceived the disruption of EMS correction feedback as no worse than that of the other two feedback types in the text entry and mobile game applications.

For H5(a) and (b), non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of overall experience across all feedback types in the text entry and mobile game applications separately.
The test rendered $\chi^2 = 0.407$, $p = 0.816$, which was not significant ($p > .05$) for the text entry application, while for the mobile game application the test rendered $\chi^2 = 0.400$, $p = 0.819$, which was not significant ($p > .05$). Therefore, both H5(a) and (b) tested false, indicating that users perceived the overall experience across all three feedback types as equally good in the text entry and mobile game applications.

For the text entry application, on a 7-point scale measuring how much users shared responsibility with the auto-correction utilizing EMS (where 1 means not at all and 7 means completely), users reported a mean shared responsibility of 2.77 (S.D. = 1.7), while in the mobile game condition users reported that they helped/aided auto-correction with a mean shared responsibility of 2.5 (S.D. = 0.95). On a 7-point scale of how interesting the EMS concept was for posture correction (where 1 means not at all and 7 means completely), the mean ranking was 6.58 (S.D. = 1.01). Seventy-five percent of the study population (27 out of 36 users) reported that they would purchase EMS feedback for slouched posture correction if it were commercially available.

From the post-questionnaires, participants' responses to the open-ended "comments on experience with EMS" showed that the EMS feedback felt "more natural" and "not easily ignorable," and was considered better than the audio and visual modalities because those caused "over/under correction" of posture. Additionally, participants reported that "the system accurately initiated the stimulus when slouched and stopped after posture was corrected" and that EMS "would enable me to not worry about my posture during highly engaging tasks." One user responded that EMS is an "unobtrusive and discrete method of auto-correcting posture" and that EMS was the "least disruptive."
Further, users reported "I cannot listen to some one while i am trying to read and type." and "the visual notifications were annoying and distracting when i was typing.", indicating that the audio and visual feedback placed a cognitive load on the user. Other user comments included "this can be a good training device but EMS requires getting used to", "it actively and immediately corrected my slouched posture", "training device for maintaining proper posture", "the tingling sensation feels weird but good", and "this can seriously help people with posture problems."

## 6 Discussion

The goal of these analyses was to evaluate the performance and user perception of EMS feedback and automatic correction of slouched posture. Five main research questions were addressed, evaluating whether significant differences existed between the visual, audio, and EMS feedback across both application types. The results of these analyses indicate the potential of our physiological feedback loop as a viable alternative technique for poor posture detection and correction. The EMS feedback and auto-correction was perceived as an effective feedback mechanism that minimized correction time, worked across multiple applications without disrupting the task, and was comfortable yet could not be ignored.

Our automatic slouching correction using EMS feedback outperformed the visual and audio feedback types on average correction response times. This may be because visual and audio feedback place an additional cognitive load on the user while they are engaged in their task and rely completely on the user's willingness to self-correct their posture. The EMS feedback, however, does not require the user's attention to correct their posture, which made our automatic correction significantly faster than the other two feedback types.
Users also perceived that EMS feedback corrected their posture more accurately than the other two feedback types, which required self-correction. Users reported that self-correction with the visual and audio feedback types caused them to over-correct their posture, since their awareness of it was minimal while engaged in the task at hand, whereas EMS feedback did not require the user's attention and accurately activated when slouched posture was detected and deactivated after posture had been corrected. The user rankings on accuracy indicated that EMS feedback was perceived to be more accurate in the mobile game application than in the text entry application. This interesting finding may be due to factors such as the nature of the two applications, the complexity of the task, the users' connection to the device, and the varied range of motion involved in auto-correction using EMS feedback across the two applications.

Additionally, it was interesting to note that EMS feedback and auto-correction were perceived as equally comfortable and no more disruptive than traditional visual and audio alert feedback, but with the added advantage of automatic correction. This may be because EMS feedback relied entirely on the user's physiology and delivered somatosensory feedback that discreetly enabled posture awareness without disruption. The EMS feedback also allowed the audio and visual faculties to remain engaged in the task rather than being distracted by audio and visual feedback that demanded attention and disrupted the workflow or game play.

Further, users also exhibited and reported a shared responsibility in aiding the auto-correction feedback mechanism. This may be due to the fact that the system increased awareness of their posture, together with the sensory confirmation presented by the EMS feedback loop through activation and deactivation when slouched or upright posture was detected, respectively.
It could also be that they learned the workings of the system or perceived that they were involved in the posture correction. Users also reported that they aided the auto-correction process slightly as they progressed through the task and gradually started utilizing it as a training device. With respect to sensitivity, some users were more sensitive to the EMS than others; as shown in the results section, the EMS intensity varied across the study population. This could be due to factors such as different body types, muscle physiology, and activity levels. EMS is a minimally invasive technique but could be intrusive and disruptive if not applied at the correct locations or if improperly calibrated. Our system was perceived as no more disruptive than the alternative feedback types due to the careful calibration of the EMS intensity during the calibration phase, guided by user feedback on comfort and the generation of the desired physiological response. In general, EMS delivered an overall experience as good as the other two feedback types and shows promising potential as a viable alternative for maintaining good posture and developing good postural habits.

Finally, users perceived EMS as an interesting concept for automatic posture correction while engaged in their tasks. Twenty out of 36 users reported that EMS feedback and auto-correction was their most preferred feedback type, while 75% of the study population was willing to purchase the automatic correction system using EMS feedback if it were a commercially available product. Therefore, our autonomous system could be a valuable alternative or addition to existing environment, health, and safety (EHS) protocols at workplaces for enhancing productivity and worker health, and in preventive health care.
## 7 LIMITATIONS AND FUTURE WORK

Limitations of our system include the placement of the sensors and electrodes; we plan to integrate them into wearable clothing and devise an auto-calibration system that can be customized to each individual's comfort. Another limitation of the current study was the gender imbalance of the participants (31 male, 5 female); we plan to conduct a further study with a more balanced male-to-female ratio.

Our future work includes the development of an instrumented wearable garment that embeds the sensors and electrodes and houses the EMS feedback unit, along with a dedicated smartphone application. Additionally, we plan to test the automatic detection and correction of slouching under different physical conditions such as standing, walking, and carrying different loads on the back. We also plan to extend the concept of posture correction using EMS to other workplace postures such as wrist extension, text neck, and asymmetric weight distribution on the legs.

## 8 CONCLUSION

We have demonstrated that our physiological-feedback-loop-based automatic slouching detection and correction with EMS is a viable approach to supporting posture correction. Our auto-correction system utilizing EMS feedback achieved significantly faster posture correction response times than self-correction under visual and audio feedback. Users also perceived EMS feedback to be more accurate than, as comfortable as, and no more disruptive than the alternative techniques it was tested against in both the text entry and mobile game applications. Therefore, automatic slouching detection and correction utilizing EMS shows promising results and can be developed into an alternative method for posture correction.
Our approach could prove useful in preventive healthcare to avoid workplace-related RSI and be particularly beneficial to people involved in highly engaging tasks such as gaming, diagnostic monitoring, and defense control tasks.

## References

[1] F. Abyarjoo, O. Nonnarit, S. Tangnimitchok, F. Ortega, A. Barreto, et al. PostureMonitor: real-time IMU wearable technology to foster poise and health. In International Conference of Design, User Experience, and Usability, pp. 543-552. Springer, 2015.

[2] A. Alsuwaidi, A. Alzarouni, D. Bazazeh, N. Almoosa, K. Khalaf, and R. Shubair. Wearable posture monitoring system with vibration feedback. arXiv preprint arXiv:1810.00189, 2018.

[3] M. Bazzarelli, N. G. Durdle, E. Lou, and V. J. Raso. A wearable computer for physiotherapeutic scoliosis treatment. IEEE Transactions on Instrumentation and Measurement, 52(1):126-129, 2003.

[4] J. Bell and M. Stigant. Development of a fibre optic goniometer system to measure lumbar and hip movement to detect activities and their lumbar postures. Journal of Medical Engineering & Technology, 31(5):361-366, 2007.

[5] M. Bortole, A. Venkatakrishnan, F. Zhu, J. C. Moreno, G. E. Francisco, J. L. Pons, and J. L. Contreras-Vidal. The H2 robotic exoskeleton for gait rehabilitation after stroke: early findings from a clinical study. Journal of Neuroengineering and Rehabilitation, 12(1):54, 2015.

[6] A. Colley, A. Leinonen, M.-T. Forsman, and J. Häkkilä. EMS Painter: co-creating visual art using electrical muscle stimulation. In Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 266-270, 2018.

[7] F. Daiber, F. Kosmalla, F. Wiehr, and A. Krüger. FootStriker: a wearable EMS-based foot strike assistant for running. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, pp. 421-424. ACM, 2017.

[8] M. A. Davis.
Where the United States spends its spine dollars: expenditures on different ambulatory services for the management of back and neck conditions. Spine, 37(19):1693, 2012.

[9] C. De Marchis, T. S. Monteiro, C. Simon-Martinez, S. Conforto, and A. Gharabaghi. Multi-contact functional electrical stimulation for hand opening: electrophysiologically driven identification of the optimal stimulation site. Journal of Neuroengineering and Rehabilitation, 13(1):22, 2016.

[10] R. Deyo. Back Pain Patient Outcomes Assessment Team (BOAT). US Department of Health & Human Services, Agency of Healthcare Research, 1994.

[11] F. Farbiz, Z. H. Yu, C. Manders, and W. Ahmad. An electrical muscle stimulation haptic feedback for mixed reality tennis game. In ACM SIGGRAPH 2007 Posters, p. 140. ACM, 2007.

[12] D. Giansanti, M. Dozza, L. Chiari, G. Maccioni, and A. Cappello. Energetic assessment of trunk postural modifications induced by a wearable audio-biofeedback system. Medical Engineering & Physics, 31(1):48-54, 2009.

[13] A. Gopalai, S. A. Senanayake, and K. H. Lim. Intelligent vibrotactile biofeedback system for real-time postural correction on perturbed surfaces. In 2012 12th International Conference on Intelligent Systems Design and Applications (ISDA), pp. 973-978. IEEE, 2012.

[14] S. Hanagata and Y. Kakehi. Paralogue: a remote conversation system using a hand avatar which postures are controlled with electrical muscle stimulation. In Proceedings of the 9th Augmented Human International Conference, pp. 1-3, 2018.

[15] M. Hassib, M. Pfeiffer, S. Schneegass, M. Rohs, and F. Alt. Emotion Actuator: embodied emotional feedback through electroencephalography and electrical muscle stimulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 6133-6146, 2017.

[16] A. Hermanis, R. Cacurs, K. Nesenbergs, M. Greitans, E. Syundyukov, and L. Selavo.
Wearable sensor grid architecture for body posture and surface detection and rehabilitation. In Proceedings of the 14th International Conference on Information Processing in Sensor Networks, pp. 414-415, 2015.

[17] N. S. M. Kamil and S. Z. M. Dawal. Effect of postural angle on back muscle activities in aging female workers performing computer tasks. Journal of Physical Therapy Science, 27(6):1967-1970, 2015.

[18] S. Kasahara, J. Nishida, and P. Lopes. Preemptive action: accelerating human reaction using electrical muscle stimulation without compromising agency. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-15, 2019.

[19] B. J. Keeney, D. Fulton-Kehoe, J. A. Turner, T. M. Wickizer, K. C. G. Chan, and G. M. Franklin. Early predictors of lumbar spine surgery after occupational back injury: results from a prospective study of workers in Washington State. Spine, 38(11):953, 2013.

[20] M. Kono, Y. Ishiguro, T. Miyaki, and J. Rekimoto. Design and study of a multi-channel electrical muscle stimulation toolkit for human augmentation. In Proceedings of the 9th Augmented Human International Conference, pp. 1-8, 2018.

[21] M. Kono, T. Miyaki, and J. Rekimoto. In-pulse: inducing fear and pain in virtual experiences. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, p. 40. ACM, 2018.

[22] K. Leung, D. Reilly, K. Hartman, S. Stein, and E. Westecott. Limber: DIY wearables for reducing risk of office injury. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, pp. 85-86, 2012.

[23] W.-Y. Lin, W.-C. Chou, T.-H. Tsai, C.-C. Lin, and M.-Y. Lee. Development of a wearable instrumented vest for posture monitoring and system usability verification based on the technology acceptance model. Sensors, 16(12):2172, 2016.

[24] P. Lopes. Interacting with wearable computers by means of functional electrical muscle stimulation.
In The First Biannual Neuroadaptive Technology Conference, p. 118, 2017.

[25] P. Lopes and P. Baudisch. Demonstrating interactive systems based on electrical muscle stimulation. In Adjunct Publication of the 30th Annual ACM Symposium on User Interface Software and Technology, pp. 47-49, 2017.

[26] P. Lopes and P. Baudisch. Immense power in a tiny package: wearables based on electrical muscle stimulation. IEEE Pervasive Computing, 16(3):12-16, 2017.

[27] P. Lopes and P. Baudisch. Interactive systems based on electrical muscle stimulation. Computer, 50(10):28-35, 2017.

[28] P. Lopes, A. Ion, and P. Baudisch. Impacto: simulating physical impact by combining tactile stimulation with electrical muscle stimulation. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pp. 11-19, 2015.

[29] P. Lopes, A. Ion, and R. Kovacs. Using your own muscles: realistic physical experiences in VR. XRDS: Crossroads, The ACM Magazine for Students, 22(1):30-35, 2015.

[30] P. Lopes, A. Ion, W. Mueller, D. Hoffmann, P. Jonell, and P. Baudisch. Proprioceptive interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 939-948, 2015.

[31] P. Lopes, P. Jonell, and P. Baudisch. Affordance++: allowing objects to communicate dynamic use. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2515-2524, 2015.

[32] P. Lopes, S. You, L.-P. Cheng, S. Marwecki, and P. Baudisch. Providing haptics to walls & heavy objects in virtual reality by means of electrical muscle stimulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 1471-1482, 2017.

[33] P. Lopes, S. You, A. Ion, and P. Baudisch. Adding force feedback to mixed reality experiences and games using electrical muscle stimulation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2018.

[34] P. Lopes, D. Yüksel, F.
Guimbretière, and P. Baudisch. Muscle-plotter: an interactive system based on electrical muscle stimulation that produces spatial output. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 207-217, 2016.

[35] B. Millington. 'Quantify the invisible': notes toward a future of posture. Critical Public Health, 26(4):405-417, 2016.

[36] J. Nishida, S. Kasahara, and P. Lopes. Demonstrating preemptive reaction: accelerating human reaction using electrical muscle stimulation without compromising agency. In ACM SIGGRAPH 2019 Emerging Technologies, pp. 1-2, 2019.

[37] J. Nishida, K. Takahashi, and K. Suzuki. A wearable stimulation device for sharing and augmenting kinesthetic feedback. In Proceedings of the 6th Augmented Human International Conference, pp. 211-212. ACM, 2015.

[38] K. O'Sullivan, S. Verschueren, S. Pans, D. Smets, K. Dekelver, and W. Dankaerts. Validation of a novel spinal posture monitor: comparison with digital videofluoroscopy. European Spine Journal, 21(12):2633-2639, 2012.

[39] J.-H. Park, S.-Y. Kang, S.-G. Lee, and H.-S. Jeon. The effects of smart phone gaming duration on muscle activation and spinal posture: pilot study. Physiotherapy Theory and Practice, 33(8):661-669, 2017.

[40] A. Petropoulos, D. Sikeridis, and T. Antonakopoulos. SPoMo: IMU-based real-time sitting posture monitoring. In 2017 IEEE 7th International Conference on Consumer Electronics-Berlin (ICCE-Berlin), pp. 5-9. IEEE, 2017.

[41] M. Pfeiffer, T. Dünte, S. Schneegass, F. Alt, and M. Rohs. Cruise control for pedestrians: controlling walking direction using electrical muscle stimulation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2505-2514. ACM, 2015.

[42] M. Popovic, A. Curt, T. Keller, and V. Dietz. Functional electrical stimulation for grasping and walking: indications and limitations. Spinal Cord, 39(8):403-412, 2001.

[43] A. Reeve and A. Dilley.
Effects of posture on the thickness of transversus abdominis in pain-free subjects. Manual Therapy, 14(6):679-684, 2009.

[44] D. C. Ribeiro, S. Milosavljevic, and J. H. Abbott. Effectiveness of a lumbopelvic monitor and feedback device to change postural behaviour: a protocol for the ELF cluster randomised controlled trial. BMJ Open, 7(1):e015568, 2017.

[45] B. Riebold, H. Nahrstaedt, C. Schultheiss, R. O. Seidl, and T. Schauer. Multisensor classification system for triggering FES in order to support voluntary swallowing. European Journal of Translational Myology, 26(4), 2016.

[46] D. I. Rubin. Epidemiology and risk factors for spine pain. Neurologic Clinics, 25(2):353-371, 2007.

[47] E. Sardini, M. Serpelloni, and V. Pasqui. Daylong sitting posture measurement with a new wearable system for at-home body movement monitoring. In 2015 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) Proceedings, pp. 652-657. IEEE, 2015.

[48] S. Schneegass, A. Schmidt, and M. Pfeiffer. Creating user interfaces with electrical muscle stimulation. Interactions, 24(1):74-77, 2016.

[49] P. Strojnik, A. Kralj, and I. Ursic. Programmed six-channel electrical stimulator for complex stimulation of leg muscles during walking. IEEE Transactions on Biomedical Engineering, (2):112-116, 1979.

[50] E. Tamaki, T. Miyaki, and J. Rekimoto. PossessedHand: techniques for controlling human hands using electrical muscles stimuli. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 543-552. ACM, 2011.

[51] S. Valdivia, R. Blanco, A. Uribe, L. Penuela, D. Rojas, and B. Kapralos. A spinal column exergame for occupational health purposes. In International Conference on Games and Learning Alliance, pp. 83-92. Springer, 2017.

[52] Q. Wang, M. Toeters, W. Chen, A. Timmermans, and P. Markopoulos. Zishi: a smart garment for posture monitoring.
In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 3792-3795, 2016.

[53] W. Y. Wong and M. S. Wong. Trunk posture monitoring with inertial sensors. European Spine Journal, 17(5):743-753, 2008.

[54] J. Xu, T. Bao, U. H. Lee, C. Kinnaird, W. Carender, Y. Huang, K. H. Sienko, and P. B. Shull. Configurable, wearable sensing and vibrotactile feedback system for real-time postural balance and gait training: proof-of-concept. Journal of Neuroengineering and Rehabilitation, 14(1):102, 2017.

[55] X. Yan, H. Li, A. R. Li, and H. Zhang. Wearable IMU-based real-time motion warning system for construction workers' musculoskeletal disorders prevention. Automation in Construction, 74:2-11, 2017.

§ AUTOMATIC SLOUCHING DETECTION AND CORRECTION UTILIZING ELECTRICAL MUSCLE STIMULATION

Category: Research

Figure 1: Improper posture can have long-term health ramifications. Presented here are images of slouched and corrected posture using Electrical Muscle Stimulation: (A) Mobile Gaming - Slouched posture, (B) Mobile Gaming - Corrected posture, (C) Text Entry - Slouched posture, (D) Text Entry - Corrected posture

§ ABSTRACT

Habitually poor posture can lead to repetitive strain injuries that lower an individual's quality of life and productivity.
Slouching over computer screens and smartphones is one common example that leads to soreness and stiffness in the neck, shoulders, and upper and lower back regions. To help cultivate good postural habits, researchers have proposed slouch detection systems that alert users when their posture requires attention. However, such notifications are disruptive and can be easily ignored. We address these issues with a new physiological feedback system that uses inertial measurement unit sensors to detect slouching and electrical muscle stimulation to automatically correct posture. In a user study involving 36 participants, we compare our automatic approach against two alternative feedback systems across two unique contexts: text entry and gaming. We find that our approach was perceived as more accurate and interesting, and that it outperforms the alternative techniques in the gaming scenario but not in the text entry scenario.

Index Terms: Human-centered computing; Human-computer interaction (HCI); Wearable computing; Preventive healthcare; Posture correction; Slouching; Electrical muscle stimulation

§ 1 INTRODUCTION

The alignment of our body parts, bolstered by the correct muscular tension against gravity, plays a crucial role in maintaining good and healthy posture. Poor posture in working environments leads to Repetitive Strain Injuries (RSI) and long-term musculoskeletal disorders [43], which are becoming increasingly prevalent in the working-age population. Improper occupational standards, poor workstation arrangements, and gaming in unnatural seated positions are often the biggest factors contributing to RSI at the workplace and at home. Poor posture has also been linked to health deterioration and low productivity [2]. Repetitive processes, such as the use of computer systems and smartphones, present a high risk of RSI; examples include wrist extension, neck cradling, forward neck, slouching, and uneven weight distribution on the legs.
In this paper, we address slouching, one of these RSIs. The RSI associated with slouching, if not detected, analyzed, and corrected at an early stage, may lead to the development of poor posture habits which induce intense pain, trigger-point pain, and muscle tightness in the chest, neck, shoulders, and back regions.

In the United States, nearly $90 billion [8] is spent every year on the treatment of RSI and lower back pain arising from poor workplace postures, primarily slouching [10]. Recently, a study showed that slouching affects the transversus abdominis muscle [43], which is responsible for stabilizing the torso in an upright position while standing and sitting. The muscle dysfunction, or dystrophy, caused by slouching is directly associated with lower back pain. Lower back pain and injuries are among the noted root causes of disability in the world: it is estimated that around 80% of the world population will experience lower back pain at some point in their lives [19, 46]. Current intervention technologies only deal with the detection of slouching and rely completely on the user's willingness to self-correct their posture when a feedback alert is presented. As a result, there is a dire need for a wearable intervention technology capable of automatically detecting slouched posture and subsequently correcting it, to help users maintain good posture during their work and gaming activities.

Electrical muscle stimulation (EMS) can be applied to cause involuntary muscular contractions and generate a physiological response [9, 42, 49]. In this work, we integrated EMS with a slouch detection system to automatically correct habitually slouched posture and restore correct posture through these involuntary muscular contractions.
The main contributions of this work include the development of a wearable intervention prototype that can autonomously detect slouching and correct posture through a physiological feedback loop utilizing EMS. We also evaluate the performance of our approach in breaking the habit of slouching through automatic detection and subsequent correction of slouched posture, and in training and developing good postural habits.

§ 2 RELATED WORK

Although there is a consensus that posture can be improved through ergonomic solutions such as adjusting desk and chair heights, monitor viewing angles, and keyboard and mouse positions [2], there exist only a small number of reliable techniques for continued posture monitoring and the detection of poor posture, especially slouching. With increases in computational power and sensor technology, and advances in wearable technology, posture monitoring has attracted increased attention for developing detection- and alert-based wearables that allow users to self-correct their posture based on feedback from the system [13, 16, 23]. This development of sensors and their integration into wearables has also enabled real-time monitoring and live feedback systems that are independent of the location of the work environment. Despite these efforts, and owing to the novelty of wearable intervention technologies, researchers are currently focused on developing more accurate monitoring techniques and predictive algorithms for better detection of poor posture. As a result, aspects such as the integration of embedded sensors into wearables, aesthetics, usability, wearability, and user comfort are often neglected. Most slouch detection techniques employ a variety of sensors, such as IMUs, force-sensitive resistors, electromagnetic inclinometers [3], fiber-optic sensors [4], cameras [51], and smart garments [1, 12], for monitoring, detecting, and alerting users.
§ 2.1 POSTURE DETECTION WITH REAL TIME FEEDBACK

Slouching posture detection with real-time feedback mainly employs three types of feedback: visual, audio, and haptic. Information about the user's posture, and the need to self-correct it, is conveyed through visual notifications on computer monitors or smartphones, voice or auditory alarms, and vibration alerts, respectively. Researchers have integrated audio feedback with strain gauges [38], instrumented helmets [55], accelerometers [12], gyroscopes [53], and IMUs [1] for detecting poor posture and developing good posture habits. Similarly, real-time visual feedback approaches utilizing a set of IMUs for posture monitoring were developed [16, 23] to deliver visual feedback via smartphone applications.

Real-time visual and audio feedback was integrated with wearable posture detection systems such as Zishi [52], SPoMo [40], and Spineangel [44]. Zishi, an instrumented vest with IMU sensors on the upper and lower spine, delivered alerts through the visual and audio channels of a smartphone application. SPoMo utilized a set of IMUs for monitoring spinal position in seated posture, while Spineangel was developed as a wearable belt with IMUs to investigate the relationship between poor posture and lower back pain. Building upon this, Limber [22] was designed as a minimally disruptive method for detecting poor posture and alerting users in an office-style workplace. Their system incorporated IMUs (accelerometers and strain gauges) on the shoulders, spine, and neck to detect poor posture while sitting, and implemented a positive and negative feedback system based on correct and incorrect postures, respectively. Researchers also developed hybrid posture detection techniques utilizing electromagnetic technology and accelerometers [3] and inductive sensors [47], providing real-time vibrotactile feedback to users for self-correction.
Additionally, researchers integrated haptic feedback with IMUs for postural balance [54] and gait training [13]. Xu et al. [54] devised a system that utilized eight IMUs placed on either side of the torso to monitor trunk tilt and provide vibrotactile feedback. Advancing upon this, Gopalai et al. [13] utilized a wireless IMU attached to the trunk and a wobble board to provide real-time biofeedback via vibrational alerts for postural control. Further, a comparative study on posture detection aimed at improving spinal posture concluded that an IMU-based system performed more accurately than a vision-based system [51]. Also, a qualitative assessment of commercially available wearables for posture detection and correction through real-time haptic and visual feedback (LUMO Lift, LUMO Back, and Prana) concluded that haptic feedback based on real-time monitoring enabled a shared responsibility: the system detects poor posture and delivers real-time alerts, and the user corrects themselves [35].

§ 2.2 ELECTRICAL MUSCLE STIMULATION

EMS is a non-invasive technique for delivering electrical stimuli to muscles, nerves, and joints via surface electrodes positioned on the skin to produce an acute or chronic therapeutic effect. Traditionally, electrical stimulation was utilized for therapeutic pain management to alleviate chronic muscle strain and acute muscle spasms, and in rehabilitation to regain muscle strength [5, 9] and normal movement after injury or surgery [49]. EMS has also been utilized to apply cyclical, moderate-intensity electrical impulses to selected muscles for generating physiological responses.
It has been used to generate functional movements that resemble voluntary muscle contractions, such as evoking hand opening [9], to stimulate reflexes for swallowing disorders [45], to enable neuro-prosthesis control [42], and to restore functions that have been lost through injury, surgery, or muscle disuse.

§ 2.3 EMS IN HUMAN COMPUTER INTERACTION

EMS has found new interest in human-computer interaction (HCI) for applications in gaming, augmented reality, and virtual reality (VR) training [20, 24-27, 48]. This presents an opportunity for developing novel interaction techniques using adaptive wearable interfaces. Researchers have applied EMS technology to interactive applications such as activity training, immersive feedback technologies, and spatial user interfaces. On account of its ability to invoke involuntary muscular contractions, EMS has been utilized in activity training to enable users to acquire and develop new skills such as playing musical instruments [50], learning object affordances [31], and developing preemptive reflex reactions [18, 36]. Additionally, EMS-based feedback systems integrated with augmented, virtual, and mixed reality technologies have been developed to provide users with additional immersion and engagement. EMS was utilized in a wearable stimulation device for sharing and augmenting kinesthetic feedback [37] to facilitate virtual experiences of hand tremors in Parkinson's disease. Impacto [28] was designed and developed to provide the haptic sensation of hitting and being hit in VR. EMS was also utilized to increase realism in VR applications by actuating emotions, such as inducing fear and pain in In-pulse [21] and transferring emotional states between users in Emotion Actuator [15]. Lopes et al. [29, 32] developed realistic physical experiences in VR by providing haptic feedback for walls and obstacles.
They also developed force feedback for mobile devices [34] by employing EMS to invoke involuntary muscular contractions that tilt the device sideways. Further, they extended EMS-based force feedback to mixed reality gaming to add physical forces to virtual objects [33]. Similarly, Farbiz et al. [11] utilized EMS to simulate impact in an augmented reality tennis game to deliver a more immersive experience. Further, the integration of input and output interfaces with EMS technology has allowed the development of physiological feedback loops and enabled a novel communication technique. EMS has been employed in actuated navigation [41], actuated sketching [34], and the delivery of discrete notifications and conversations [14]. EMS has also been utilized to enable unconscious motor learning such as heel striking [7] and to co-create visual artworks [6]. Pose-IO, developed by Lopes et al. [30], allowed users to interact without the visual and audio senses, relying entirely on the proprioceptive sense.

The above-mentioned interactive applications demonstrate that EMS provides more defined, implicit, and strong sensations than visual, audio, and haptic feedback mechanisms. These applications also exhibit miniaturization of the feedback hardware compared to the actuated motors in exoskeletons. The current literature suggests that postural correction utilizing EMS has not been fully explored, which presented an opportunity to develop a novel wearable intervention technology for the detection and subsequent correction of slouched posture. In comparison, prior research on posture detection always relied on the user's willingness to self-correct their posture upon being presented with feedback, while our autonomous system automatically corrects slouched posture upon detection.
Figure 2: Physiological Feedback Loop: Automatic Slouching Detection and Correction System

Figure 3: Wireless IMU sensor placement for posture monitoring and detecting slouched posture: (A) Side view showing sensor placement on left deltoid, (B) Front view showing sensor placement below the center of the collar bone above the chest, (C) Side view showing sensor placement on right deltoid

§ 3 AUTOMATIC SLOUCHING DETECTION AND CORRECTION UTILIZING EMS

We developed a physiological-feedback-loop-based wearable intervention prototype relying on IMU sensors and EMS (illustrated in Figure 2). The prototype utilized three Metawear MMC wireless sensors for measuring angular changes and the openEMSstim package [26] for presenting the EMS feedback. We used the Metawear C# SDK to develop a user interface and to integrate the EMS hardware, completing the physiological feedback loop.

The change in posture was calculated from the angular information obtained from the IMU sensors. As slouching is mainly characterized by torso inclination and forward rolling of the shoulders [17], IMU 1 was placed at the center of the collar bone above the chest (Figure 3 (B)) and IMUs 2 and 3 were placed on the center of each deltoid (Figure 3 (A) and (C)). The user's torso inclination angle was calculated from the pitch of IMU 1, and the shoulder roll angles were calculated from the roll of IMUs 2 and 3. Our system detected slouching when the user's current torso inclination and shoulder roll angles both reached and remained past a threshold level for a period of 5 seconds. The threshold level was preset at $-3^{\circ}$ relative to the torso inclination and shoulder roll angles recorded in the slouched position during calibration.
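As a concrete illustration of the pitch and roll computation above, the standard accelerometer tilt formulas can recover both angles from a single quasi-static sample. This is only a sketch: the Metawear SDK can also supply fused orientation directly, and the function name, axis convention, and units here are our assumptions, not the authors' implementation.

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate pitch and roll (degrees) from one accelerometer sample
    (in g), assuming the sensor is near-static so gravity dominates.
    Axis convention (x forward, y left, z up) is an assumption."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

Under this convention, the pitch of the chest-mounted IMU 1 would track torso inclination, while the roll of IMUs 2 and 3 on the deltoids would track forward shoulder rolling.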
$-3^{\circ}$ was chosen to overcome measurement errors without increasing false positives, and the 5-second timer duration ensured that random movements did not lead to false-positive slouch detection. These design choices were validated during our pre-study trials. The threshold angle of $-3^{\circ}$ was used to initiate the 5-second timer, and the slouch angles detected were recorded at the end of the timer, when the feedback was presented. The purpose of the timer was to ensure that false positives due to participant behavior did not trigger the feedback response. When slouching was detected, the automatic correction feedback was presented by applying electrical stimulus to the rhomboid muscles (illustrated in Figure 4), generating a pulling force opposite to the slouched posture and thereby a physiological response that brings the user back to the upright, correct position. Two pairs of electrodes were utilized for contraction of the rhomboid muscles, which pulls the shoulder blades back, unrolling the shoulders and bringing the torso back to the upright posture. IMU and EMS calibration play a crucial role in the effectiveness of the system. The calibration process included correcting each IMU's offset value in the user's upright position and recording the angular change of the slouched posture with respect to the upright position. During EMS calibration, the intensity was manually incremented to a level that was optimal for generating involuntary muscular contraction while avoiding any pain. This EMS intensity, which provided the pulling force necessary for correcting the slouched posture and restoring the upright position, was recorded and utilized during the experiment. The TENS device was able to deliver intensities between 0 and 100 mA.
A continuous 75 Hz square-wave pulse at the recorded EMS intensity and a pulse width of 100 µs was supplied as the electrical stimulus to the users.

Figure 4: EMS electrode placement on rhomboid muscles for auto-correction using EMS feedback

§ 4 METHODS

The goal of our study was to evaluate the overall effectiveness and user perception of an automatic detection and correction feedback system (EMS) that would not disrupt the task or impose additional cognitive load on the user to self-correct their posture. Hence, we compared it against traditional audio and visual feedback mechanisms that required self-correction by the user. The visual and audio feedback modalities required users to self-correct their posture based on visual notifications and audio sound notifications, respectively. We also identified two common causes of slouching in day-to-day activities, computer-related workplace tasks and mobile gaming [17, 39], and investigated the effectiveness and user perception of our automatic slouching detection and correction prototype across these common causes of slouching. Our objective was to determine whether our automatic posture detection and correction system using EMS would be a viable technique for correcting slouched posture compared to the visual and audio feedback channels while users were engaged in a computer-related workplace task or playing a mobile game.
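The detection logic of Section 3 (torso pitch and shoulder roll within 3° of the calibrated slouch angles, sustained for 5 seconds before EMS is triggered) can be sketched in Python. This is an illustrative sketch only, not the prototype's actual C# implementation; the tilt formula, the calibration dictionary, and the `SlouchDetector` class are our own stand-ins:

```python
import math

SLOUCH_MARGIN_DEG = 3.0   # trigger 3 degrees short of the calibrated slouch angle
CONFIRM_SECONDS = 5.0     # timer that filters out transient movements

def pitch_deg(ax, ay, az):
    """Torso inclination from chest-IMU accelerometer axes (standard tilt formula)."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

def is_slouched(torso_pitch, shoulder_rolls, calib):
    """True when the torso pitch AND both shoulder roll angles are within
    SLOUCH_MARGIN_DEG of the angles recorded during slouch calibration."""
    torso_ok = torso_pitch >= calib["slouch_pitch"] - SLOUCH_MARGIN_DEG
    rolls_ok = all(r >= calib["slouch_roll"] - SLOUCH_MARGIN_DEG
                   for r in shoulder_rolls)
    return torso_ok and rolls_ok

class SlouchDetector:
    """Confirms slouching only after it persists for CONFIRM_SECONDS;
    the caller would then keep the EMS stimulus on until upright posture returns."""
    def __init__(self):
        self._since = None

    def update(self, slouched, now):
        if not slouched:
            self._since = None
            return False
        if self._since is None:
            self._since = now
        return now - self._since >= CONFIRM_SECONDS

# Example with the mean text-entry calibration angles reported in Table 2.
calib = {"slouch_pitch": 21.0, "slouch_roll": 15.1}
det = SlouchDetector()
for t in (0.0, 2.5, 5.0):
    print(t, det.update(is_slouched(20.0, [14.0, 14.5], calib), t))
# -> 0.0 False / 2.5 False / 5.0 True
```

The timer state resets whenever an upright sample arrives, which is what prevents brief random movements from triggering feedback.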
Table 1: User ranking on posture awareness, devices, and EMS

| User experience | Application | Mean | S.D. |
| --- | --- | --- | --- |
| Exposure to posture alert devices | Text entry | 1.61 | 1.16 |
| | Mobile game | 1.89 | 1.09 |
| Exposure to EMS | Text entry | 2.33 | 1.37 |
| | Mobile game | 2.06 | 1.43 |
| Experienced posture problems | Text entry | 4.22 | 1.55 |
| | Mobile game | 4.33 | 1.49 |
| Experienced slouching | Text entry | 5.06 | 1.50 |
| | Mobile game | 5.4 | 0.96 |

Note: User experience ranking based on a 7-point scale where 1 means never / no experience and 7 means frequently / very experienced.

§ 4.1 SUBJECTS AND APPARATUS

We recruited 36 participants (male = 31, female = 5) for the study, with 18 participants for each application: text entry and mobile game. All participants were above the age of 18 years, and the mean age of participants was 22.05 years (S.D. = 3.13). All participants were able-bodied and had corrected 20/20 vision. We used three Metawear MMC IMU sensors for monitoring the torso inclination angles and the shoulder roll angles. The EMS was generated with an off-the-shelf TENS unit (TNS SM2MF) and controlled by the openEMSstim package for activating and modulating the intensity of the electrical stimuli supplied to the muscles. The hardware used for the text entry application was a 14" Intel i7 laptop, and a 2nd-generation iPhone SE was used for the mobile game application. From the pre-questionnaires, participants' rankings of their prior exposure to posture alert devices and EMS, and their experience with posture problems and slouching, are illustrated in Table 1. Participants ranked their exposure and experience on a 7-point scale with 1 meaning never / no experience and 7 meaning frequently / very experienced.

§ 4.2 EXPERIMENTAL DESIGN

A 2 × 3 mixed-subjects experiment with 36 participants was conducted to investigate the performance and feasibility of our approach.
The within-subjects factor was feedback type and the between-subjects factor was application type. We tested our system across two applications, text entry and mobile game, and across three feedback types: visual, audio, and EMS. The visual and audio feedback types required self-correction by the users, while the EMS feedback was an automatic feedback type that could correct the user's posture when a slouched posture was detected. We compared the performance of EMS auto-correction against the visual and audio feedback techniques. The average correction response times and user perception of the system in detecting slouched postures and subsequently either self-correcting based on visual or audio alert feedback or being automatically corrected using EMS were also evaluated. The text entry application required the users to complete a text entry task, and the mobile game application required the users to play a battle royale game, "PlayerUnknown's Battlegrounds (PUBG) Mobile${}^{1}$," on a smartphone. We selected PUBG Mobile based on popularity (400 million players), level of engagement, and demographics (primarily students and office workers aged 15-35 years who may be prone to long work or gaming hours). In both applications, the users were required to complete all three modalities:

 * Modality 1: Visual alert feedback and self-correction

 * Modality 2: Audio alert feedback and self-correction

 * Modality 3: EMS feedback and automatic correction

The order of the modalities introduced to the users was counterbalanced to include all permutations and to minimize learning effects. In each application, all participants were required to complete all three modalities to complete the study.
The independent variable in the study was the feedback modality (three levels), and the dependent variables were the average correction response times and user perception parameters such as overall experience, effectiveness of slouching detection, effectiveness of correction feedback, user engagement and task disruption, and modality comfort. Each study session lasted approximately 75 minutes and the participants were compensated \$15 for their participation.

§ 4.3 RESEARCH HYPOTHESES

While slouching detection and alert feedback systems have been designed and tested by researchers, we note that slouching posture correction response times and user perception of such systems have not been measured or reported. As such, we expect that there may be significant differences in average correction response times and user perception parameters across the three modalities that would influence the user experience. Our study was therefore designed to help determine the effects of self-correction using visual and audio feedback, and of automatic correction using EMS, on user experience across the two applications. We have five research hypotheses, each with two parts (a and b), for investigating the user perception of our approach in the text entry and mobile game contexts.

 * H1: In the text entry (a) or the mobile game (b), the average correction response time to slouching feedback will be faster for EMS feedback than for the visual and audio alert feedback.

 * H2: In the text entry (a) or the mobile game (b), the user-perceived accuracy of slouching posture correction with EMS feedback will be greater than with visual and audio alert feedback.

 * H3: In the text entry (a) or the mobile game (b), comfort with EMS feedback will not be significantly different compared to visual and audio alert feedback.
 * H4: In the text entry (a) or the mobile game (b), task disruption with EMS feedback will not be significantly different compared to visual and audio alert feedback.

 * H5: In the text entry (a) or the mobile game (b), automatic correction using EMS feedback delivers a better user experience compared to visual and audio alert feedback.

§ 4.4 COVID-19 CONSIDERATIONS

Due to the ongoing COVID-19 pandemic, we wanted to ensure the safety of the participants and researchers. Following our institution's guidelines, all individuals were required to wear face masks at all times. Between participants, we sanitized all devices and surfaces that the participants and researchers would be in contact with. We also provided each individual with face masks, hand sanitizer, cleaning wipes, and latex gloves to reduce the risk of contracting the disease. Though we cleaned all surfaces between participants, we allowed each individual to clean devices as desired.

§ 4.5 EXPERIMENTAL PROCEDURES

The participants were supplied with a consent form detailing the nature of the experiment, safety, risks, required compliance, and information about compensation. The participants were required to review the consent form and provide verbal consent for the study session to start. The participants were then asked to complete a pre-questionnaire that collected basic information about their knowledge of posture-related issues at the workplace, intervention technology, and EMS.

${}^{1}$ https://www.pubg.com/

Figure 5: Text entry study showing a 50-50 split screen with a PDF document (zoom set to 40%) on the left and the Word document (zoom set to page width) on the right. Participants were required to read from the PDF document and type into the Word document.
Next, the participants were prepared for the study by placing adhesive IMU sensors on the deltoids and at the center of the collar bone above the chest (as shown in Figure 3) for data collection and detecting the slouching position. Adhesive EMS electrodes were placed on the rhomboid muscles prior to the EMS feedback session for correcting slouching (as shown in Figure 4). After sensor set-up, the participants were seated in an upright position in front of a laptop computer or smartphone with their hands placed on the keyboard or holding the smartphone. Participants were then calibrated for the upright and slouched positions. First, the IMU sensors were corrected for the offset value in the user's upright position; then users were required to emulate a slouched position by inclining their torso and rolling their shoulders forward. The upright and slouched posture angles were thus recorded.

Prior to the EMS feedback session, the participants were calibrated once for the EMS intensity needed to generate the physiological response of sitting upright. The participants were fitted with two pairs of EMS electrodes on their rhomboid muscles. The EMS intensity calibration was performed manually for each participant: moderators incremented the intensity until an involuntary muscular contraction causing posture correction was achieved. During calibration, participants were asked to slouch while moderators manually incremented the EMS intensity. As EMS also produces a tactile or haptic effect even at low intensities, participants were asked not to respond to it, ensuring that the haptic/tactile component of EMS did not contribute to the automatic correction process in any way.
Moderators additionally asked participants to respond verbally to the following questions during calibration to ensure rhomboid muscular contraction and participant comfort: 1) when they initially felt the stimulation (haptic sensation), 2) when the intensity was generating an involuntary muscular contraction and/or when they were experiencing the pulling force towards the upright posture, and 3) when any pain was experienced. For each participant, once involuntary muscular contraction was confirmed verbally by the participant and visually verified by the moderators, the optimal EMS intensity that generated an involuntary muscular contraction correcting the slouched posture was recorded and selected for the EMS part of the study.

All of the above steps were identical for both the text entry and the mobile gaming applications. In the text entry task, users were asked to read from a PDF document and type into a Word document. The PDF and Word documents were presented in a 50-50 split screen.

Figure 6: Mobile game study showing the lobby area of PUBG Mobile prior to the start of the game

Figure 7: Text entry study, visual feedback: Windows 10 pop-up visual notification on the bottom right of the screen (A) to correct posture when slouching is detected, (B) after posture has been corrected

For the purpose of conducting the study, the PDF zoom was set to 40% to promote or cause slouching while reading (illustrated in Figure 5). In the mobile game task, users were asked to play PUBG Mobile (illustrated in Figure 6). In both applications, the user's posture was monitored for slouching and upright positions during the study. The study comprised three parts: visual, audio, and EMS feedback, each 15 minutes in duration. Upon completion of each part, participants were required to complete a post-questionnaire about their experience.
All participants were required to finish all three parts to complete the study.

§ 4.5.1 VISUAL FEEDBACK AND SELF-CORRECTION

Text Entry Application: When slouching was detected by the system based on the IMU sensor data, a Windows 10 visual pop-up notification, "Please correct your posture," was displayed in the bottom right corner of the monitor (illustrated in Figure 7a), and the users were required to sit upright and self-correct their slouched posture until a second visual pop-up notification, "Posture corrected," was displayed (illustrated in Figure 7b). The response times for correcting the slouched posture were recorded.

Mobile Game Application: When slouching was detected by the system based on the IMU sensor data, an SMS was sent from the C# application to the smartphone with the message "Posture alert: Please correct your posture," displayed as a drop-down badge notification on the smartphone (illustrated in Figure 8a). After receiving the visual alert notification, the users were required to sit upright and self-correct their slouched posture until another SMS containing the message "Posture corrected" was displayed (illustrated in Figure 8b). The response times for correcting the slouched posture were recorded.

Figure 8: Mobile game study, visual feedback: visual notification badges drop down from the top of the display (A) to correct posture when slouching is detected, (B) after posture has been corrected

§ 4.5.2 AUDIO FEEDBACK AND SELF-CORRECTION

Text Entry Application: When slouching was detected by the system, an audio notification, "Please correct your posture," was played, and the users were required to sit upright and self-correct their slouched posture until another audio notification, "Posture corrected," was presented. The response times for correcting the slouched posture were recorded.
Mobile Game Application: When slouching was detected by the system, an audio notification bell sound was played, and the users were required to sit upright and self-correct their slouched posture until another notification bell was played. The response times for correcting the slouched posture were recorded.

§ 4.5.3 EMS FEEDBACK AND AUTO-CORRECTION

Text Entry and Mobile Game Applications: When slouching was detected by the system, the EMS was activated to apply the recorded EMS intensity to the rhomboid muscles, invoking an involuntary muscle contraction. This muscle contraction produces a pulling force in the direction opposite to the slouched posture, generating the physiological response of sitting upright by correcting the torso inclination and shoulder roll caused by slouching. Figure 1(A) and (C) illustrate the slouched posture during the mobile game and text entry studies, respectively; Figure 1(B) and (D) illustrate the corrected posture after EMS has been applied. The EMS was deactivated once the upright position was detected. The response times for correcting the slouched posture were recorded.

Table 2: Average slouch angles in degrees

| Slouch angle | Application | Mean | S.D. |
| --- | --- | --- | --- |
| Torso inclination angle | Text entry | 21.00° | 3.88° |
| | Mobile game | 18.24° | 2.80° |
| Shoulder roll angle | Text entry | 15.10° | 3.00° |
| | Mobile game | 13.84° | 2.22° |

On completion of the study, participants were required to complete a comparative post-questionnaire regarding their experience with all three modalities. All data from the sensors and the EMS was automatically recorded in the system for further analysis and reporting.
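Since the correction response time is measured the same way in every modality, from feedback onset until the upright posture is first detected, it can be expressed as a small helper. The function below is an illustrative sketch of that measurement over a fixed-rate posture stream, not code from our C# application:

```python
def correction_response_time(posture_samples, sample_period_s):
    """Seconds from feedback onset until the first upright sample.

    posture_samples: booleans sampled at a fixed rate, starting at the
    moment the feedback was presented (True = upright posture detected).
    Returns None if the posture was never corrected in the window.
    """
    for i, upright in enumerate(posture_samples):
        if upright:
            return i * sample_period_s
    return None

# Example: upright first detected on the third sample at 2 Hz.
print(correction_response_time([False, False, True], 0.5))  # -> 1.0
```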
§ 5 RESULTS

The mean frequency of slouching in the text entry condition was 7.72, 10, and 8.72 for the audio, visual, and EMS feedback modalities, respectively, and 7.05, 9.11, and 8.38 for the audio, visual, and EMS feedback modalities, respectively, in the mobile game condition. The average torso inclination and shoulder roll angles recorded for the slouched posture during the calibration process and utilized for the detection of slouching are illustrated in Table 2. For text entry, the mean torso inclination angle was 21° (S.D. = 3.88°), while the mean shoulder roll angle was 15.1° (S.D. = 3°). For the mobile game, the mean torso inclination angle was 18.24° (S.D. = 2.8°), while the mean shoulder roll angle was 13.84° (S.D. = 2.22°). For the text entry application, the mean electrical stimulation intensity required to correct slouched posture was 39.72 mA (S.D. = 13.17 mA), while for the mobile game task the mean was 47.22 mA (S.D. = 11.08 mA). To address H1, one-way repeated-measures ANOVAs were performed on the average correction response times for slouching correction across all feedback types in the text entry application and the mobile game task separately. To address H2 through H5, non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of effectiveness of correction feedback, comfort, task disruption, and overall experience. Wilcoxon signed-rank tests were performed where significant differences were found. The results are consolidated and presented in Table 3.
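The non-parametric portion of this analysis pipeline (a Friedman test followed, where the omnibus test is significant, by Wilcoxon signed-rank post-hocs at a Bonferroni-corrected threshold of .05/3 ≈ .017) can be sketched with SciPy. The rankings below are made-up placeholder values, not the study's data:

```python
from scipy import stats

# Hypothetical 7-point accuracy rankings for 18 participants across the
# three feedback types (placeholder values, not the study's data).
visual = [6, 5, 6, 7, 5, 6, 4, 6, 5, 6, 7, 5, 6, 6, 5, 7, 6, 5]
audio  = [5, 6, 6, 6, 5, 7, 5, 6, 6, 5, 6, 6, 5, 7, 6, 6, 5, 6]
ems    = [7, 7, 6, 7, 6, 7, 6, 7, 7, 6, 7, 7, 6, 7, 7, 7, 6, 7]

# Friedman test of differences among repeated measures.
chi2, p = stats.friedmanchisquare(visual, audio, ems)
print(f"Friedman: chi2 = {chi2:.3f}, p = {p:.4f}")

# Wilcoxon signed-rank post-hocs, run only when the omnibus test is
# significant, at a Bonferroni-corrected level of .05 / 3 pairs.
alpha = 0.05 / 3
if p < 0.05:
    pairs = {"visual vs audio": (visual, audio),
             "visual vs EMS": (visual, ems),
             "audio vs EMS": (audio, ems)}
    for name, (a, b) in pairs.items():
        w, p_pair = stats.wilcoxon(a, b)
        print(f"{name}: p = {p_pair:.4f}, significant: {p_pair < alpha}")
```

The repeated-measures ANOVAs used for H1 follow the same per-application structure, with response times in place of rankings.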
For H1(a), a one-way repeated-measures ANOVA was conducted on the influence of correction feedback type on the average correction response times taken to correct posture after slouching was detected and correction feedback was presented to the user. The correction feedback type consisted of three levels (visual, audio, and EMS). All effects were statistically significant at the .05 significance level. The main effect for the correction feedback type yielded an F ratio of F(2, 34) = 5.382, p < .05, indicating a significant difference between visual feedback (M = 3.86, S.D. = 1.27), audio feedback (M = 3.9, S.D. = 1.28), and EMS feedback (M = 2.89, S.D. = 1.74). For H1(b), the same one-way repeated-measures ANOVA yielded an F ratio of F(2, 34) = 20.66, p < .001, indicating a significant difference between visual feedback (M = 5.98, S.D. = 2.4), audio feedback (M = 4.44, S.D. = 0.75), and EMS feedback (M = 2.70, S.D. = 1.04). For the text entry application, average correction response times were faster for EMS feedback than for visual feedback (t(17) = -0.961, p < 0.05), but no significant differences were found between the EMS and audio feedback types, or between the visual and audio feedback types.
For the mobile game application, average correction response times were faster for audio feedback than for visual feedback (t(17) = -1.538, p < 0.05), and EMS feedback was faster than visual feedback (t(17) = -3.276, p < 0.01) and faster than audio feedback (t(17) = -1.737, p < 0.001). The post-hoc analysis between the three feedback types shows that hypothesis H1(a) tested false, in that the average correction response times were faster with the EMS feedback type compared to the visual feedback but not the audio feedback. In the case of H1(b), the hypothesis tested true, in that the average correction response times were faster with the EMS feedback type compared to both the visual and audio feedback types, as illustrated in Figure 9 (A) and (B).

Figure 9: Average correction response times (in seconds) across (A) text entry and (B) mobile game for all correction feedback types: (1) visual, (2) audio, (3) EMS. Error bars: 95% CI.

To address H2(a) and (b), non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of the accuracy of the correction feedback type in the text entry and mobile game applications separately. The test rendered χ² = 3.591, p = 0.166, which was insignificant (p > .05) for the text entry application, while for the mobile game application the test rendered χ² = 7.259, p = 0.027, which was significant (p < .05). A post-hoc analysis with Wilcoxon signed-rank tests was conducted for the mobile game application with a Bonferroni correction applied, resulting in a significance level set at p < 0.017. The median perceived accuracy of slouching correction for the visual, audio, and EMS feedback was 6, 6, and 7, respectively.
There was a statistically significant difference between the visual and the EMS correction feedback types (Z = -2.591, p = 0.010) and between the EMS and audio correction feedback types (Z = -2.585, p = 0.010). However, there was no significant difference between the audio and visual correction feedback types (Z = -0.942, p = 0.346). Therefore, H2(a) tested false, indicating that users perceived all three feedback types as equally accurate in the text entry application, whereas H2(b) tested true, indicating that users perceived the EMS correction feedback to be more accurate than the visual and audio feedback in the mobile game application.

Table 3: Friedman test results on the user rankings for H2-H5

| User perception | Application | χ² | p |
| --- | --- | --- | --- |
| Accuracy of correction feedback | Text entry | 3.592 | 0.166 |
| | Mobile game | 7.259 | 0.027* |
| Comfort | Text entry | 1.345 | 0.510 |
| | Mobile game | 4.550 | 0.103 |
| Task disruption | Text entry | 0.092 | 0.955 |
| | Mobile game | 5.607 | 0.061 |
| Overall experience | Text entry | 0.407 | 0.816 |
| | Mobile game | 0.400 | 0.819 |

Note: * indicates a significant difference, p < 0.05.

For H3(a) and (b), non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of comfort across all feedback types in the text entry and mobile game applications separately. The test rendered χ² = 1.345 and p = 0.510, which was insignificant (p > .05) for the text entry application, while for the mobile game application the test rendered χ² = 4.550 and p = 0.103, which was insignificant (p > .05).
Therefore, both H3(a) and (b) tested true, indicating that users perceived all three feedback types as equally comfortable in both the text entry and the mobile game applications.

For H4(a) and (b), non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of task disruption across all feedback types in the text entry and mobile game applications separately. The test rendered χ² = 0.092 and p = 0.955, which was insignificant (p > .05) for the text entry application, while for the mobile game application the test rendered χ² = 5.607 and p = 0.061, which was insignificant (p > .05). Therefore, both H4(a) and (b) tested true, indicating that users perceived the EMS correction feedback's disruption to be no worse than that of the other two feedback types in both applications.

For H5(a) and (b), non-parametric Friedman tests of differences among repeated measures were conducted on the users' rankings of overall experience across all feedback types in the text entry and mobile game applications separately. The test rendered χ² = 0.407 and p = 0.816, which was insignificant (p > .05) for the text entry application, while for the mobile game application the test rendered χ² = 0.400 and p = 0.819, which was insignificant (p > .05). Therefore, both H5(a) and (b) tested false, indicating that users perceived the overall experience across all feedback types as equally good in both the text entry and the mobile game applications.
For the text entry application, on a 7-point scale of how much users shared responsibility with the auto-correction utilizing EMS, where 1 means not at all and 7 means completely, users reported a mean shared responsibility of 2.77 (S.D. = 1.7), while for the mobile game condition users reported that they helped or aided the auto-correction with a mean shared responsibility of 2.5 (S.D. = 0.95). On a 7-point scale of how interesting the EMS concept was for posture correction, where 1 means not at all and 7 means completely, the mean ranking was 6.58 (S.D. = 1.01). Seventy-five percent of the study population (27 out of 36 users) reported that they would purchase EMS feedback for slouched posture correction if it were commercially available.

From the post-questionnaires, participants' responses to the open-ended "comments on experience with EMS" showed that EMS feedback felt "more natural" and "not easily ignorable," and better than the audio and visual modalities, which cause "over/under correction" of posture. Participants also reported of the EMS feedback that "the system accurately initiated the stimulus when slouched and stopped after posture was corrected" and that EMS "would enable me to not worry about my posture during highly engaging tasks." One user responded that EMS is an "unobtrusive and discrete method of auto-correcting posture" and was the "least disruptive." Further, users reported "I cannot listen to some one while i am trying to read and type" and "the visual notifications were annoying and distracting when i was typing," indicating that the audio and visual feedback placed a cognitive load on the user.
Other user comments included "this can be a good training device but EMS requires getting used to," "it actively and immediately corrected my slouched posture," "training device for maintaining proper posture," "the tingling sensation feels weird but good," and "this can seriously help people with posture problems."

§ 6 DISCUSSION

The goal of the current analyses was to evaluate the performance and user perception of the EMS feedback and automatic correction of slouched posture. Five main research questions were addressed, evaluating whether significant differences could be found between the visual, audio, and EMS feedback across both application types. The results of these analyses indicate the potential of our physiological feedback loop as a viable alternative technique for poor posture detection and correction. The EMS feedback and auto-correction was perceived as an effective feedback mechanism that minimized correction time, performed across multiple applications without disrupting the task, and was comfortable yet could not be ignored.

Our automatic slouching correction using the EMS feedback system outperformed the visual and audio feedback types in terms of average correction response times. This may be because visual and audio feedback place an additional cognitive load on users while they are engaged in their task and rely completely on the user's willingness to self-correct their posture. The EMS feedback, however, does not require the user's attention to correct their posture, making our automatic correction using EMS feedback significantly faster than the other two feedback types.

Users also perceived that EMS feedback corrected their posture more accurately than the other two feedback types, which required self-correction. Users reported that self-correction in the visual and audio feedback types caused them to over-correct their posture, as their awareness of it was minimal while they were engaged in the task at hand.
EMS feedback, in contrast, did not require the user's attention and accurately activated when a slouched posture was detected and deactivated after the posture had been corrected. The user rankings on the accuracy of EMS feedback indicated that EMS feedback was perceived as more accurate in the mobile game application than in the text entry application. This interesting finding may be due to several factors, such as the nature of the two applications, the complexity of the tasks, the users' connection to the device, and the varied range of motion involved in auto-correction using EMS feedback across the two applications.

Additionally, it was interesting to note that EMS feedback and auto-correction were perceived as equally comfortable and no more disruptive than traditional visual and audio alert feedback, but with the added advantage of automatic correction. This may be because the EMS feedback relied entirely on the user's physiology and delivered somatosensory feedback that discreetly enabled posture awareness without disruption. The EMS feedback also allowed the audio and visual faculties to remain engaged in the task rather than be distracted by audio and visual feedback that required attention and disrupted workflow or game play.

Further, users exhibited and reported a shared responsibility in aiding the auto-correction feedback mechanism. This may be due to the fact that the system increased awareness of their posture, together with the sensory confirmation presented by the EMS feedback loop through activation and deactivation when slouched or upright posture was detected, respectively. It could also be that they learned the workings of the system or perceived that they were involved in the posture correction. Users also reported that they helped the auto-correction process slightly as they progressed through the duration of the task and gradually began to use it as a training device.
With respect to sensitivity of the EMS feedback, some users were more sensitive to the EMS than others. As shown in the results section, the EMS intensity varied across the study population. This could be due to factors such as different body types, muscle physiology, and activity levels. EMS feedback is a minimally invasive technique but could be intrusive and disruptive if not applied at the correct locations or if improperly calibrated. Our system was perceived as no more disruptive than the alternative feedback types because the EMS intensity was carefully calibrated using user feedback on comfort level and the generation of the desired physiological response during the calibration phase. In general, EMS delivered an overall experience as good as the other two feedback types and shows promising potential as a viable alternative for maintaining good posture or developing good postural habits. +

Finally, users perceived EMS as an interesting concept for automatic posture correction while they were engaged in their tasks. Twenty out of 36 users reported that EMS feedback and auto-correction was their most preferred feedback type, while 75% of the study population was willing to purchase the automatic correction using EMS feedback were it a commercially available product. Therefore, our autonomous system could be a valuable alternative or addition to existing environment, health, and safety (EHS) protocols at workplaces for enhancing productivity, worker health, and preventive health care. +

§ 7 LIMITATIONS AND FUTURE WORK +

Limitations include the placement of the sensors and electrodes; we plan to integrate them into wearable clothing and devise an auto-calibration system that can be customized to each individual's comfort. Another limitation of the current study was the gender imbalance of the participants: the proportion of male users was considerably higher than that of female users (male = 31, female = 5).
We plan to conduct a further study with a more balanced male-to-female ratio. +

Our future work includes the development of an instrumented wearable garment that embeds the sensors and electrodes and houses the EMS feedback unit, along with a dedicated smartphone application. Additionally, we plan to test the automatic detection and correction of slouching under different physical conditions such as standing, walking, and carrying different loads on the back. We also plan to extend the concept of posture correction using EMS to other workplace postures such as wrist extension, text neck, and asymmetric weight distribution on the legs. +

§ 8 CONCLUSION +

In conclusion, we have demonstrated that our physiological-feedback-loop-based automatic slouching detection and correction with EMS is a viable approach to supporting posture correction. Our auto-correction system utilizing EMS feedback demonstrated significantly faster posture correction response times than self-correction with visual and audio feedback. Users also perceived EMS feedback to be more accurate, just as comfortable, and no more disruptive than the alternative techniques it was tested against in both the text entry and mobile game applications. Therefore, automatic slouching detection and correction utilizing EMS shows promising results and can be developed into an alternative method for posture correction. Our approach could prove useful in preventive healthcare to avoid workplace-related RSI and be particularly beneficial to people involved in highly engaging tasks such as gaming, diagnostic monitoring, and defense control tasks.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/I0HZJde1BxQ/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/I0HZJde1BxQ/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..01d53caac05bc2467ed7f91496b4bf43e2acd94f --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/I0HZJde1BxQ/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,281 @@ +# Konjak: Live Visualization in Deep Neural Network Programming as a Learning Tool +

Anonymous Authors ${}^{ * }$ +

Author's Affiliation +

## Abstract +

Visualization in deep neural network (DNN) development could play a key role in helping novice programmers inspect and understand a network structure. However, such visualizations are usually available only after the DNN program has been implemented. We propose combining a code editor with a live visualization of the DNN structure to assist machine learning novices during DNN code development. The user specifies operation blocks and input data size in the code editor, and the system continuously updates the network visualization. The visualization is also editable: the user can directly use drag-and-drop operations to build a network. To our knowledge, we are the first to tightly combine a text-based programming editor with live and editable visualization as an educational tool in DNN programming, helping learners understand the concept of shape consistency. This paper describes the system's design rationale and presents an exploratory user study to evaluate its effectiveness as a learning environment.
+

Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods +

## 1 INTRODUCTION +

In recent years, deep neural networks (DNNs) have surged in many classical machine learning tasks, such as image classification [16, 18, 23, 30], object detection [15], text generation [11], and rendering [21]. As a sub-domain of machine learning, DNNs show impressive performance on most of the tasks above, even surpassing human-level accuracy. DNNs' powerful ability to extract insights from data makes them one of the most popular tools for researchers and practitioners. +

For non-expert machine learning users, such as software engineers, medical doctors, and artists, DNNs are attractive for applications in their own domains. Libraries like Tensorflow $\left\lbrack {1,2}\right\rbrack$ and Pytorch $\left\lbrack {29}\right\rbrack$ provide high-level APIs that enable more approachable model building without losing the flexibility expert users need to customize details of their models. In general, DNNs are sequences of mathematical functions (a.k.a. layers) that process data in the form of multi-dimensional arrays (a.k.a. tensors), and the arguments of these functions determine the legal tensor shapes that can be processed. The programmer needs to consider the alignment between layer arguments and the assumed input data shape at an early stage of DNN programming. However, this is not an easy task for a novice machine learning user. We describe in detail the shape inconsistency error that this misalignment leads to in Sect. 3. +

By observing expert machine learning users and reflecting on our own experience, we noticed that the network diagram plays a crucial role in DNN programming practice. DNN developers and researchers often draw node-link diagrams on a whiteboard for DNN structure communication [36] as well as for scholarly communication.
In current DNN programming practice, visualization is optional and only available after the training phase. These visualization tools enable network validation only at a very late stage of the DNN modeling procedure, even though visualization could guide the user during code editing. Tools such as A Neural Network Playground [31] introduce manipulable visualization into DNN education to help explain the mechanism but do not show the corresponding text-based code. This prevents novice users from improving their DNN programming skills, which usually involve text-based programming. +

![01963eac-3b85-7c06-84fd-6b04c273b3b9_0_934_456_711_460_0.jpg](images/01963eac-3b85-7c06-84fd-6b04c273b3b9_0_934_456_711_460_0.jpg) +

Figure 1: Konjak shows live visualization next to the text-based code editor, providing continuous feedback on the Deep Neural Network structure to the programmers. The visualization also supports direct manipulation to edit the corresponding text code. +

We propose Konjak, a system that augments a standard text-based code editor with an always-on, editable live network structure visualization to help machine learning novices learn DNN programming, as shown in Fig. 1. Konjak enables higher liveness than the current practice of DL system development: the programmer can bidirectionally check and edit the DNN between the synchronized code panel and visualization panel. We provide an automated tensor shape checker to help users, especially novices, tackle shape inconsistency errors where and when they occur. By comparing the code with the adjacent DNN visualization and repeatedly editing either of them, the programmer can qualitatively improve their DNN programming skills. We contribute to Human-Computer Interaction (HCI) as follows: +

- A literature study on DNN visualization based on figures from machine learning academic papers and existing visualization tools. From it, we summarize visual principles for the DNN programming environment.
+

- A novel DNN programming environment for teaching non-expert programmers about the DNN modeling and programming paradigm. The programmer can edit the neural network in the editable live visualization and check tensor shapes in real-time. +

- An exploratory user study that shows Konjak helps in two aspects: novice machine learning users can become familiar with the DNN programming paradigm and fix layer-tensor alignment during programming, and experienced DNN programming trainers can teach these skills by demonstrating text-based editing and the corresponding effect on the visualization, or vice versa. +

--- +

*e-mail: Author's email +

--- +

## 2 RELATED WORK +

### 2.1 Deep Neural Network Bug and Repairing +

In practical DL system development, developers mainly use modern DL frameworks like Tensorflow [1], Chainer [35], Pytorch [29], and Keras [10] to programmatically build their models. These embedded domain-specific languages (DSLs) provide packaged functions and layers for DNN programming and are kept updated to support the newest statistical functions proposed in the machine learning community. Many studies have investigated the challenges a programmer may face in DL system development. Amershi et al. surveyed software engineers from Microsoft teams and found that the more experience a programmer has with machine learning software engineering, the more they consider the use of "AI tools" a challenge [4]. Cai et al. investigated software engineers' motivations, hurdles, and desires in shifting to machine learning engineering and found that the "implementation challenge" cannot be ignored [7]. Zhang et al. and Islam et al. looked deeper into the DL programming process by analyzing posts from StackOverflow and repositories on GitHub [19, 39]. According to them, "Program Crash" is a common category of bugs in deep learning systems, and within this category, "Shape Inconsistency" is one of the most questioned bug types.
"Shape Inconsistency" refers to runtime errors caused by mismatched multi-dimensional arrays between operations and layers [39]. To ensure that the shape of an array does not differ from the developer's mental model, the programmer may repeatedly insert print statements, or visualize the model only after finishing edits to the network definition file, then repeat the process until they are satisfied with the network's structure. Konjak is designed based on our observation of this specific type of bug. With "Level 3" liveness in DNN programming (as explained in the following subsection), the novice programmer can more efficiently master the mechanics of DNN development. +

### 2.2 Coding-free DNN Development Tools +

In response to the growing desire to become a more professional programmer in data science $\left\lbrack {4,7,{24}}\right\rbrack$ , Konjak plays the role of a novel DL programming learning environment. AutoML [13] provides a commercial online service that allows users to upload their data and have it automatically analyzed without touching the complicated machine learning algorithms and programs. It is end-to-end: the only thing the user needs to do is prepare the desired input and output data pairs, and the algorithm will pick a proper model and train it automatically. The pipeline is thus simple enough for end-users with no coding experience at all, but it was not designed with teaching the machine learning programming paradigm in mind. Neural Network Console [32] by Sony and DL-IDE [33] by IBM provide the user with a fully graphical interface for DNN modeling using a block-based visual programming language, without exposing code to the user. In Tanimoto's classification [34], both utilize "Level 2" liveness in DNN modeling: the network structure diagram is editable and executable but not always responsive to the user's edits.
However, these coding-free DNN development tools are limited in capability and flexibility; much more can be done by coding directly with the frameworks. According to the study by Qian et al. [37], in data science model building, experts prefer graphical tools for communication and education, while non-experts prefer a code editor where they can start from existing code. To fill this gap, Konjak provides both modalities in its user interface and is designed primarily for novices' DL programming education. We retain the code editor and experiment with "Level 3" liveness in modern DNN modeling, since the visualization and the code update in near real-time whenever the user edits the model through either interface. +

### 2.3 Live Programming Environment +

Our system falls into the category of live programming environments, a concept with a long history $\left\lbrack {{27},{34}}\right\rbrack$ . The core of these techniques is liveness in the programming experience: the under-development program continuously provides the programmer with immediate feedback, so that an evaluation phase can be integrated with the code-editing phase to some degree [27]. By reducing the latency between writing code and checking its behavior [34], the programmer gains a better comprehension of the changes they are making to the program. A number of live programming environments have been implemented for different domains, each emphasizing different features. For example, TextAlive [22] allows editing computer graphics animation algorithms whose rendering results are shown next to the code editor.
Omnicode [20] provides an always-on visualization that presents all numerical values throughout the entire execution history to give the user a better program understanding; Projection Boxes [26] are interaction techniques that enable on-the-fly configuration of such always-on visualizations; Sketch-n-Sketch [17] contributes an output-oriented programming interface to bidirectionally (code $\Leftrightarrow$ screen) create and manipulate graphical designs (scalable vector graphics [SVG]); Plotshop [5, 6] augments a text-based code editor with an interactive scatter plot editor that lets the user author synthetic 2D point datasets for machine learning algorithm testing more intuitively. Skyline [38] is similar to Konjak in concept and background but focuses on in-editor DNN computation performance profiling. +

## 3 BACKGROUND ON DNNS PROGRAMMING +

As stated in Sect. 2, many previous works have focused on obstacles that a user might meet while writing DL programs and utilizing DL in their systems $\left\lbrack {4,7,{19},{39}}\right\rbrack$ . In this paper, we focus especially on model structure comprehension and tensor shape inconsistency during DNN programming. This section briefly introduces the background of DNN programming practice and common bugs in DL development. +

a) DNN programming using DL libraries: Writing a program to define a DNN amounts to assembling a series of mathematical functions into a sequence. The finished function sequence is called a network, and each mathematical function is identified as a layer in the network, which may have weights. In practice, DL libraries provide common layers and tools to build up a network, which has greatly eased the programmer's burden in the network programming phase.
Once the network is assembled, the programmer writes scripts to define the training process, where the DNN is created as an instance, also called a model, whose weights iteratively learn from batches of input data. At their heart, DNN models receive data for specific tasks and output predictions. In each iteration of the training process, the model makes a prediction on the input data batch, and based on the errors of that prediction, the model's weights are updated to reduce the prediction error. In this process, layers receive and emit data in the form of multi-dimensional arrays; e.g., for an image input into the network, the tensor is usually of shape (batch size, channel size, height, width). The multi-dimensional array is commonly called an activation or tensor. After training is finished, the programmer evaluates the performance of the trained model and iteratively improves it by tuning arguments and the network structure. Optionally, the programmer may deploy the model into an actual system. +

b) Common bugs in DNN programming: Bugs in DNNs can be classified as explicit or implicit. Explicit bugs crash the program and abort the training or evaluation process. Implicit bugs do not produce any errors during program execution but cause symptoms like abnormal training or low prediction accuracy. In this work, we focus on explicit bugs that crash the program. According to Zhang et al.'s study of DL-related questions collected from StackOverflow [39], the most common explicit bugs that cause program crashes in DNN programs are: 1) Shape inconsistency: Layers in a network are defined with several arguments and can only receive tensors of a specific shape.
If a layer's arguments mismatch the input tensor shape (e.g., in an image classification task, a configuration that would produce a tensor with non-integer height or width), the execution is aborted with an error; 2) Numerical error: Data in DNNs is mainly represented as floating-point values [12], and an inconsistent numerical type can easily raise an undesired error, e.g., when a float64 tensor is input to a layer with float32 weights; 3) CPU/GPU incompatibility: The GPU plays a vital role in accelerating DNN training and evaluation, but given a trained model and its published code, execution may fail on a CPU-only machine because the code is GPU-only. These bugs are the most frequently asked about in DL application-related questions and do not occur in conventional non-DL applications. +

![01963eac-3b85-7c06-84fd-6b04c273b3b9_2_145_224_1517_658_0.jpg](images/01963eac-3b85-7c06-84fd-6b04c273b3b9_2_145_224_1517_658_0.jpg) +

## 4 A LITERATURE STUDY ON DNN VISUALIZATION +

We address novices' learning of DNN programming by introducing a synchronized visualization bound to text-based programming. Prior to designing the system, we first investigated common network drawing practices for DNNs so that our interface can reflect the programmer's mental model of the under-development model. We follow the procedure for creating more effective domain-specific visualizations for communication proposed by [3]. We collected hand-drawn DNN structure diagrams from DL papers to build a database. The papers were picked from Paperswithcode.com [28] by visiting leaderboards in three computer vision areas (i.e., image classification, object detection, and semantic segmentation). Besides referring to DL papers, we also gathered insights from existing DNN model visualizers, which automatically render a trained model matrix or a manually input network definition into a visual representation.
+

We analyzed the DNN visualization database (including diagrams from papers and those synthesized by tools) and categorized the visualizations in terms of visual encoding. In the database, it is common to draw a network structure as a node-link diagram. We noticed two branches in the design decision of what the node represents (see Table 1): some diagrams use nodes to represent tensors, with the adjacent links representing the layers that process them; other diagrams, on the contrary, emphasize layers as nodes, in which case links indicate the tensors' flow through the network. +

Another important choice in visualization design is how to draw the node, especially in diagrams where tensors are emphasized as nodes. In our observation, three answers are given to this choice. One is to draw the node as a 3D-box, the style adopted in the network structure diagram of AlexNet [23], the paper that opened DL's new age in 2012. Tensor data in a DNN can be a 1-D vector or an n-D array, and each dimension's size affects the network's correctness; drawing tensors as 3D-boxes gives the user an instinctive representation of the tensor's concrete shape. Another style is to draw the node as stacked sheets, where tensors are represented as stacked rectangles: the size of each rectangle encodes the tensor's map size (width and height), and the number of rectangles encodes the tensor's channel number. This style was adopted by LeCun et al. in 1998 for the famous DNN structure LeNet. A very limited number of diagrams choose a flow-like style to represent the flow of tensors, encoding tensors as "rivers" whose width represents the tensor's channel number; however, the tensor map's width and height information is omitted in this drawing style. On the other hand, in diagrams where a node represents a layer, the node is usually drawn as a plain rectangle.
+

As stated in the previous section, "Shape Inconsistency" is the main problem we want to solve in a novice's DNN programming learning. Therefore, for the first choice (what the node represents), we pick tensors as the main nodes of the structure graph. For the second choice (how to draw the node), we choose the 3D-box style to show the tensor's shape as much as possible. Nevertheless, the layers' information should still be present in the visualization; thus, in our design, we dedicate a separate panel of the interface to listing information about all the layers defined by the user in code. +

![01963eac-3b85-7c06-84fd-6b04c273b3b9_3_165_150_701_418_0.jpg](images/01963eac-3b85-7c06-84fd-6b04c273b3b9_3_165_150_701_418_0.jpg) +

Figure 2: Screenshot of the code editor at the left half of the user interface: (a) the code editor, (b) the program error log panel, and (c) the inline shape consistency indicator. +

## 5 KONJAK +

We implemented a prototype system, called Konjak, to show the feasibility and effectiveness of live visualization for teaching novices DNN programming and accelerating their progress toward DL software engineering. It is implemented as a web-based application written in JavaScript. We use Python as the user-facing language and Chainer [35] as the DNN API. We chose Chainer because it was one of the most popular DNN libraries at the time of implementation; meanwhile, its pioneering dynamic eager-execution design deeply inspired later DNN libraries' API designs $\left\lbrack {2,{29}}\right\rbrack$ . Keeping the educational purpose in mind, we now visit every component of Konjak's interface and explain its design motivation and functions. The user interface consists of two tightly interlinked main components: the code editor and the live visualization. +

### 5.1 Code Editor +

The left half of the screen is the code editor component, where the user writes a program as in a standard general-purpose programming workflow (see Fig. 2).
This consists of three child components: a) text-based code editor, b) problem panel, and c) inline shape check and highlight. +

a) Text-based code editor: We utilize CodeMirror [14] as the backbone of the text-based code editor in the browser. The user writes a DNN structure program in this area. Note that although the prototype only supports Chainer, without loss of generality the interface can be transferred to other popular DNN libraries like PyTorch due to their similar API design. A DNN structure in Chainer usually starts by inheriting the built-in class `chainer.Chain`, a class for defining a neural network composed of several layers (`chainer.links` or `chainer.functions`). In the actual code structure, the user defines the layers to use in the network in the function `__init__` by assigning `chainer.links` instances to attributes. For example, the code `self.c0 = L.Convolution2D(3, 16, 3, 1, 1)` assigns a 2D convolution layer instance to an attribute `c0`, with the convolution layer's arguments read as "use a $3 \times 3$ kernel to convolve a 3-channel tensor with stride 1, and output a 16-channel tensor; in the convolution, the input tensor map is spatially padded with width 1 for each channel". +

The other necessary component in the code is the function `__call__`, where the user gradually connects the layers defined in `__init__` to the input data. When the network definition is done, the user is supposed to give a shape definition of the input data in a comment; e.g., in line 23, `x = (32, 128)` means `x` is a tensor of shape ${32} \times {128} \times {128}$ . In our current prototype, we assume that the input data's width and height are equal for simplicity. Konjak provides a live programming environment that always parses the under-development program and updates its visualization.
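Such parsing can be sketched with Python's standard `ast` module (a simplified illustration of the idea, not Konjak's actual implementation; the helper name and the reduced output format are our own assumptions):

```python
import ast
import json

# A tiny Chainer-style network definition, as a string to be parsed.
SOURCE = '''
class Net:
    def __init__(self):
        self.c0 = L.Convolution2D(3, 16, 3, 1, 1)
        self.l0 = L.Linear(16, 10)
'''

def extract_layers(source):
    """Collect `self.<name> = L.<Layer>(...)` assignments with line numbers."""
    layers = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Call):
            target, call = node.targets[0], node.value
            if isinstance(target, ast.Attribute) and isinstance(call.func, ast.Attribute):
                layers.append({
                    "attr": target.attr,       # e.g. "c0"
                    "layer": call.func.attr,   # e.g. "Convolution2D"
                    "args": [a.value for a in call.args
                             if isinstance(a, ast.Constant)],
                    "line": node.lineno,
                })
    return layers

# A simplified stand-in for the JSON the server would send to the browser.
print(json.dumps(extract_layers(SOURCE), indent=2))
```

A real implementation would also need to walk `__call__` to recover the layer-to-tensor connections, but the same AST traversal pattern applies.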
Every time the user stops typing for 0.5 seconds, the browser sends the program to a server to have it checked for further visual feedback. +

![01963eac-3b85-7c06-84fd-6b04c273b3b9_3_932_149_713_459_0.jpg](images/01963eac-3b85-7c06-84fd-6b04c273b3b9_3_932_149_713_459_0.jpg) +

Figure 3: Screenshot of the live visualization at the right half of the user interface: (d) the layer card bar listing all layers defined in the program, and (e) the interactive graph visualization panel for visualizing network structure and tensor shape. +

b) Problem panel: After the program is sent to the server, its syntax is first checked by the library Pylint. If no errors are found, the server parses the received program into an AST and sends a simplified AST back to the browser in JSON format. Konjak renders the received JSON into the visualization in the user interface and prints any error message in the problem panel, which can be a syntax error detected by Pylint or a non-syntax error (tensor shape inconsistency). The state of the program is encoded in the panel's background color to notify users: red for error and green for success. The line number is included in the message to help users locate errors. +

c) Inline shape check and highlight: Synchronized with the problem panel, an in-situ tensor shape inconsistency indicator is activated on the lines where the `__call__` function is defined. Applying exactly the same color encoding as the problem panel, the line where the inconsistency happens becomes red, and the indicator stops showing color after that line. Otherwise, all lines in the `__call__` function show green. +

### 5.2 Live Visualization +

The live visualization (see Fig. 3), which lies in the right half of the user interface, is designed to help novices check the DNN structure live and interactively connect layers to tensors.
Following the design principles described in the last section, we use two sub-panels to visualize the neural network: d) the layer card bar and e) the graph visualization panel. +

d) Layer card bar: We retain a bar that lists all layers defined in the function `__init__`. In this bar, all the layer instances assigned to the network's attributes are drawn as separate cards, and Konjak places the cards vertically in the narrow bar, following the order in which the layer instances are created in the program. These layer cards are responsive to the program in the code editor: when the user adds or deletes lines in the program to create or delete layer instances, the corresponding layer cards simultaneously appear or disappear. Taking the characteristics of different types of DNN layers into consideration, we present a brief and explanatory layer card design. +

Fig. 4 shows some examples of layer cards in Konjak. In the top 2/3 of the card, we draw different sketches to represent different types of neural network layers. These brief sketches also indicate the layer's effect on the tensor shape; e.g., a linear layer flattens a tensor into a fixed-length vector, and a max-pooling layer down-samples a tensor by picking the maximum value in each tile, reducing the original data volume. For layers that only receive and emit specific data sizes, we show the layer's parameter information in the bottom 1/3 of the card. At the left end of the cards, we encode the layer category in a colored strip (orange for convolution, purple for normalization, ash for linear, red for activation, dark blue for dropout, and green for pooling). Every time the user edits parameters that create a layer instance in the code editor, the corresponding layer card updates its drawing and information.
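The category-to-color encoding of the card strips can be captured in a small lookup table (colors as listed above; the layer-name-to-category mapping and helper name below are illustrative assumptions, not Konjak's actual code):

```python
# Strip colors per layer category, as described in the text.
CATEGORY_COLOR = {
    "convolution":   "orange",
    "normalization": "purple",
    "linear":        "ash",
    "activation":    "red",
    "dropout":       "dark blue",
    "pooling":       "green",
}

# Hypothetical mapping from a few Chainer-style layer names to categories.
LAYER_CATEGORY = {
    "Convolution2D":      "convolution",
    "BatchNormalization": "normalization",
    "Linear":             "linear",
    "ReLU":               "activation",
    "Dropout":            "dropout",
    "MaxPooling2D":       "pooling",
}

def strip_color(layer_name):
    """Color of the strip at the left end of a layer card."""
    return CATEGORY_COLOR[LAYER_CATEGORY[layer_name]]

print(strip_color("Convolution2D"))  # orange
```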
+

![01963eac-3b85-7c06-84fd-6b04c273b3b9_4_157_149_711_452_0.jpg](images/01963eac-3b85-7c06-84fd-6b04c273b3b9_4_157_149_711_452_0.jpg) +

Figure 4: Examples of layer card designs. +

![01963eac-3b85-7c06-84fd-6b04c273b3b9_4_151_1150_717_513_0.jpg](images/01963eac-3b85-7c06-84fd-6b04c273b3b9_4_151_1150_717_513_0.jpg) +

Figure 5: Drag-and-drop function in Konjak's live visualization. (a) The user drags a layer card and drops it on the tensor node they want to connect the layer to; the graph visualization then updates to reflect the connection, while (b) a new code line (red frame) is synthesized simultaneously in the code editor. +

e) Graph visualization panel: This is the heart of Konjak, and it interacts with all other components in the user interface. Like the layer card bar, it provides the user a synchronized visual representation of the network structure in the form of a node-link graph. To keep the user's image of the tensor shapes clear, we draw 3D tensor nodes as 3D boxes and 2D tensor nodes as 2D bars, with the tensor shape printed next to the box (Channel $\times$ Height $\times$ Width). The color encoding of the cube or 2D bar is consistent with the layer card, but indicates the type of the layer that produced the tensor. As a special type of tensor, we encode input data as grey cubes. An $h$ or $x$ at the side of a tensor node means the tensor is assigned to a variable in the program. +

The graph visualization panel interacts with the layer card bar through a drag-and-drop feature. The user can click and drag a layer card, then drop it on a tensor node with a variable name to connect the layer to that tensor node. After the drag-and-drop operation, a new node appears as the next tensor, and the variable information updates simultaneously. This operation also affects the code editor by synthesizing new code lines at the relevant line numbers from the visualization.
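The code synthesis triggered by a drop could be sketched as follows (a deliberately simplified stand-in for Konjak's line/offset bookkeeping; the helper name and insertion heuristic are hypothetical):

```python
def synthesize_line(lines, attr, in_var, out_var):
    """Insert `out_var = self.<attr>(in_var)` after the last assignment to
    in_var -- a toy version of dropping a layer card on a tensor node."""
    idx = max(i for i, line in enumerate(lines)
              if line.strip().startswith(in_var + " ="))
    lines.insert(idx + 1, f"    {out_var} = self.{attr}({in_var})")
    return lines

code = [
    "def __call__(self, x):",
    "    h = self.c0(x)",
]
# Simulate dropping the card for layer `p0` onto the tensor node `h`:
synthesize_line(code, "p0", "h", "h")
print("\n".join(code))
```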
We implement this feature by binding the visualization elements to the relevant line numbers and offset information in the program. When a drag-and-drop operation is performed, Konjak appends the synthesized code after the relevant last line; the browser then sends the updated program back to the server and re-renders the visualization based on the server's response. The editable visualization, together with the synchronized update from code editing to visualization, builds a bidirectional DNN editing experience for novices. To tighten the binding between program and visualization, moving the cursor in the code editor triggers highlights in the relevant visual parts. In reverse, when the user's mouse hovers over a link (layer) or node (tensor) in the graph visualization panel, the corresponding layer card changes its background color and the relevant code lines become bold.

## 6 USAGE SCENARIOS

In the context of a DNN programming curriculum and a novice's first DNN programming exploration, we present two usage scenarios as examples of how Konjak can be used.

### 6.1 Convolution layer setting playground

Compared to other layers, which either do not affect the input tensor's shape or simply halve the original tensor's size, the convolution layer's output tensor shape is influenced by many parameters in its definition. The output tensor shape of a convolution layer is:

$$
{C}_{o} = K
$$

$$
{H}_{o} = \frac{\left( {H}_{i} - F + 2P\right) }{S} + 1
$$

$$
{W}_{o} = \frac{\left( {W}_{i} - F + 2P\right) }{S} + 1 \tag{1}
$$

where the input tensor is of size $\left( {{C}_{i},{H}_{i},{W}_{i}}\right)$ , the output tensor is of size $\left( {{C}_{o},{H}_{o},{W}_{o}}\right)$ , and the convolution layer's arguments are defined as L.Convolution2D(in_channels=Ci, out_channels=K, ksize=F, stride=S, pad=P).
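Equation (1) is easy to check numerically. The helper below (our illustration, assuming square kernels and the floor division typical of real frameworks) computes the output shape from the same arguments:

```python
def conv_output_shape(c_in, h_in, w_in, out_channels, ksize, stride=1, pad=0):
    """Output (C_o, H_o, W_o) of a 2D convolution, per Equation (1)."""
    h_out = (h_in - ksize + 2 * pad) // stride + 1
    w_out = (w_in - ksize + 2 * pad) // stride + 1
    return (out_channels, h_out, w_out)

# A 5x5 convolution with 6 output channels on a (3, 32, 32) image:
print(conv_output_shape(3, 32, 32, out_channels=6, ksize=5))  # (6, 28, 28)
```

Note that the input channel count `c_in` does not appear in the output shape at all; it only has to match the layer's `in_channels` argument.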
Some specific parameter settings are commonly used in neural network implementations, such as (ksize $= 3$ , stride $= 1$ , pad $= 1$ ) to keep the input tensor size and (ksize=4, stride=2, pad=1) to halve it. Konjak can act as a playground in which a novice easily experiments with different convolution layer parameter settings, always with instant visual feedback to help them understand what each parameter is doing.

![01963eac-3b85-7c06-84fd-6b04c273b3b9_4_934_1676_705_353_0.jpg](images/01963eac-3b85-7c06-84fd-6b04c273b3b9_4_934_1676_705_353_0.jpg)

Figure 6: Konjak enables DL programming learners to solve shape consistency problems iteratively at a much lower trial-and-error cost.

### 6.2 Solving shape inconsistency

Live visualization greatly reduces novices' trial-and-error cost when solving bugs related to a tensor's shape. Consider a scenario in which a novice is re-implementing a network structure to fit their own application's needs. In Step 1, the user starts by copying and pasting a template DNN program, then defines the layers that will be used in the network. Layer parameters need not be filled in precisely at the beginning, because Konjak's visualization is synchronized: even when a parameter is chosen improperly, our system can tell the user where it is wrong. With layers defined using randomly picked initial parameters in the code editor, the user can use the drag-and-drop function to quickly connect the network in Step 2. The user then notices the error message shown in the problem panel and the inline highlight, and locates the layer that causes the shape error. In Step 3, the user alternately checks the live visualization, goes back to the layer definitions to modify the parameters, and repeats this process until the tensor shapes shown in the visualization are satisfactory.
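The shape check that drives this iteration can be imitated in a few lines of plain Python. The sketch below (our illustration of the idea, not Konjak's implementation) propagates a (C, H, W) shape through a list of layer specifications and reports the first inconsistency, just as the problem panel would:

```python
def check_shapes(input_shape, layers):
    """Propagate a (C, H, W) shape through layer specs; report the first mismatch."""
    shape = input_shape
    for i, layer in enumerate(layers):
        if layer["type"] == "conv":
            c, h, w = shape
            if layer["in_channels"] not in (None, c):
                return f"layer {i}: conv expects {layer['in_channels']} channels, got {c}"
            f, s, p = layer["ksize"], layer["stride"], layer["pad"]
            shape = (layer["out_channels"],
                     (h - f + 2 * p) // s + 1,
                     (w - f + 2 * p) // s + 1)
        elif layer["type"] == "linear":
            flat = 1
            for d in shape:  # a linear layer implicitly flattens its input
                flat *= d
            if layer["in_size"] not in (None, flat):
                return f"layer {i}: linear expects {layer['in_size']} inputs, got {flat}"
            shape = (layer["out_size"],)
    return f"ok: output shape {shape}"

net = [
    {"type": "conv", "in_channels": 3, "out_channels": 6,
     "ksize": 5, "stride": 1, "pad": 0},
    {"type": "linear", "in_size": 400, "out_size": 10},  # wrong in_size on purpose
]
print(check_shapes((3, 32, 32), net))  # layer 1: linear expects 400 inputs, got 4704
```

Fixing `in_size` to the reported 4704 makes the check pass, which mirrors the edit-and-recheck loop of Step 3.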
Compared to the current practice of using print statements or a visualization tool after the code-editing phase, the trial-and-error cost here is reduced to less than one second.

## 7 USER STUDY

As an educational tool that helps novices learn DNN programming, we hope Konjak's liveness and visualization can 1) help form a proper mental model of DNN structure and 2) shorten the input/feedback cycle in the learning stage. To evaluate these points in an educational context, we ran an exploratory first-use study to get feedback from both DL novices (learners) and experienced DL developers (trainers). We invited 12 participants (nine males and three females), aged 22 to 28, with programming experience ranging from two to 10 years. P1 - P4 are Deep Learning engineers who mainly act in the role of trainers, and P5 - P12 are lab internal members identified as novices in DNN programming (with nearly no knowledge, or only basic knowledge, of DNNs). Note that we determined the number of participants (sample size) based on the professional standards for this type of study within the HCI community [8].

### 7.1 Procedures

The study lasted 60 minutes per participant. We started with a 15-minute warm-up session on Konjak's user interface and features, as well as the basics of DNN programming. During this session, the participant was allowed to actually use Konjak to get familiar with the system. Next, we presented two tasks that are common in DL programming learning situations and were mentioned in Sect. 6. We designed the tasks by talking to expert machine learning users. Each task lasted 15 minutes. After the task phase, we conducted an interview and questionnaire in the remaining 15 minutes. Note that all the participants worked on the same tasks. P5 - P12 are the main target participants, simulating the DNN programming learning situation, and P1 - P4 were mainly recruited to provide additional reviews of our system from the perspective of an educator.
The tasks are:

- Task 1: Given the desired input and output tensor sizes, explore proper layer parameter settings to complete the tensor shape transformation.

- Task 2: Given a DNN structure diagram cropped from a research paper (LeNet [25]), try to implement it in Konjak.

Table 2: Summary of post-study questionnaire results.
| # | Items in Questionnaire | $\mu$ | $\sigma$ |
| --- | --- | --- | --- |
| 1 | The live visualization diagram reflected my mental model towards the network structure well. | 4.42 | 0.51 |
| 2 | Highlight features (code2visualization, visualization2code) helped me to locate code/visual element from each other. | 4.00 | 1.04 |
| 3 | The design to divide visualization panel into graph visualization panel and layer card bar is intuitive for Chainer's code structure. | 4.67 | 0.49 |
| 4 | Using node to represent tensors in visualization, and the shape check feature, helped me in exploring a shape-consistent solution in DNN implementation. | 4.58 | 0.67 |
| 5 | Konjak's network and layer card drawing are confusing for me to understand. | 1.92 | 0.90 |
| 6 | The consistency between code editor and visualization, helped me observing DNN program more conveniently. | 4.50 | 0.67 |
### 7.2 Result

All but two (P8 and P11) of our participants finished the two simulated learning tasks using Konjak within the given time. Table 2 shows the post-study questionnaire results, summarized from six 5-point Likert-scale questions (from Strongly Disagree, 1, to Strongly Agree, 5). These questions covered the core concept of utilizing live programming in the DNN educational context as well as detailed design points of the visualization. Because the sketches in the layer cards and the network graph that show each layer's functionality mainly originate from our own understanding, we included a question in the questionnaire to probe our concern that the drawings may confuse novices. We also surveyed participants' willingness to use live visualization to train other novices in DNN programming in the future. Here we summarize their feedback in three aspects:

Assist DNN implementation and debugging: Some feedback describes how the live visualization helped participants implement DNN programs and debug parameters. Compared to the traditional practice of checking tensor shapes with built-in print statements, Konjak enables a shorter route to debugging the DNN program. P6 said: "With the highlight feature and real-time visualization, I can write the program while making sure whether previous lines are correct or not." Exactly as we intended in designing Konjak, P7 expressed that the system indeed reduces shape inconsistency in DNN implementation: "Konjak's interactive feedback and tensor shape inconsistency error messages were very useful to create a DNN structure without worrying about tensor shape inconsistency too much."

We recorded how the participants used Konjak in the study.
When reviewing the video, we noticed P10 whispering during Task 2: "The input image is (3,32,32), and the first feature map on the diagram is (6,28,28), so I should first fill in out_channels $= 6\ldots$ Then what about the parameters ksize and stride? It seems that stride=2 will make the output map size decrease too much, so I keep it 1 first... And ksize=1, this will make the width 32... (Change the value to 4) Oops, it (map size) is still not small enough... (Re-input 5) Okay, now it's the same as the diagram. So the next layer is a max-pooling layer..." Observing his behavior in Konjak, we found that it exactly matches the shape-inconsistency usage scenario we presented in the last section.

A skilled DNN programmer also provided positive feedback about the live visualization aiding DNN implementation. P1 noted the similarity between Konjak's programming experience and web development: "This reminds me of my web developing experience, where I put a code editor on the left half and browser on the right half. Since I use a hot-reloader, which automatically refreshes the browser whenever I change the code base, I have the tendency to focus on coding and use the visualizer only for reference." In this sense, we may potentially extend Konjak's features beyond an educational purpose.

Aid DNN programming education: Eight participants (P5 - P12) are novices or beginners in DNN programming, and Konjak was highly regarded by them. P8, a CS student who had learned DL before but had nearly forgotten all of the knowledge, appreciated the live visualization: "Although I almost forgot DNN, this structure helped me understand how the layers and structures work. I want to use it when I use DNN in the future." P10, identified as a DL learner, said: "(Konjak) is especially helpful in implementing a DNN while referring to a research paper."
All of our participants agreed that, if possible, they would use a live visualization like Konjak to teach other beginners DNN programming in the future. From the perspective of a skilled DL engineer, P1 described a situation where he would use a system like Konjak to teach a novice DNN programming: "If I am going to teach someone Chainer, I probably will use this UI because after understanding how it works, the synchronization between code editor and graph is very helpful to teach the student. I will focus on teaching the student how to write code on the left panel, but occasionally if they make any mistake or don't understand what is going on with the code, then I will remind them to play with the visualization to understand what is going wrong so he can clarify his question and return to coding."

We ran this study like a get-started lesson for our novice participants, and for the trainer participants, it may have seemed like a chance to think about how to teach the DNN programming paradigm to a newbie. Nevertheless, the programming paradigm is only one side of DL programming. According to the study by Cai et al. [7], other obstacles, such as the required mathematics knowledge, still keep novices from diving in.

Implementation issues: Some feedback concerned interface details that affected participants' programming experience. P5 complained about switching between the code editor and the live visualization: "It bothered me a little if I have to watch the visualization but edit the network on another side. I'd like to modify the network parts right in the visualization panel." P9 suggested another style of drawing the 3D boxes in the graph visualization panel: "Live visualization to show DNN structure is easy to understand. But the drawing of tensor nodes confused me at first. Maybe you can draw it like stacked slices, which can be a more intuitive representation of channels."
## 8 LIMITATION AND FUTURE WORK

Konjak's current prototype originates from the motivation to support novices in the learning stage. For this reason, only a limited set of layers and structures can be displayed in the live visualization. The prototype supports only convolutional neural networks (CNNs) for tasks such as image classification, object detection, and image segmentation; other structures, such as recurrent neural networks (RNNs) and generative adversarial networks (GANs), are out of scope. Fig. 4 covers nearly all the layers supported in the current prototype, nine layers in total. Besides the layers drawn in the figure, two activation layers (Sigmoid and Softmax) and one normalization layer (Local_response_normalization) are supported in the implementation. These six layer categories are typical because they are the main components of some classical DNNs, such as VGG16/19 [30], AlexNet [23], and ResNet [16]. In each category, we implemented at most three layers, because layers in the same category share similar APIs and visual representations. In DNN programming learning, the DNN models that novices explore may be relatively simple, but once programmers become skilled, more complicated networks (e.g., with skip links or multiple tensor flows) and customized layers are common in practical development. In our user study, the skilled machine learning programmer participants agreed that a live visualization that always shows tensor shapes could be quite helpful even in their daily DNN programming. Therefore, extending Konjak's concept to more situations is worth considering. Features such as sub-nets or customized layer visualization are in our future plans. Konjak's current implementation is based on static analysis, which limits the system's application in an expert's practical development.
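One conceivable way to lift the static-analysis limitation is dynamic tracing: run the real forward pass and record tensor shapes as they are produced. The sketch below is entirely our illustration (a stand-in `FakeTensor` replaces real arrays) of how Python's `sys.settrace` hook could capture per-variable shapes:

```python
import sys

class FakeTensor:
    """Stand-in for a real tensor; only carries a shape."""
    def __init__(self, shape):
        self.shape = shape

recorded = {}

def shape_tracer(frame, event, arg):
    # On every trace event inside `forward`, snapshot each local tensor's shape.
    if frame.f_code.co_name == "forward":
        for name, value in frame.f_locals.items():
            if isinstance(value, FakeTensor):
                recorded[name] = value.shape
    return shape_tracer

def forward(x):
    h = FakeTensor((6, 28, 28))   # pretend: conv applied to x
    h = FakeTensor((6, 14, 14))   # pretend: pooling applied to h
    return h

sys.settrace(shape_tracer)
forward(FakeTensor((3, 32, 32)))
sys.settrace(None)
print(recorded)  # {'x': (3, 32, 32), 'h': (6, 14, 14)}
```

Because the tracer sees the final `return` event, `recorded` holds the last shape each variable took, which is exactly what a per-line shape annotation would need.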
We believe that instrumenting a Python interpreter and tracing its execution memory by executing the actual code would greatly improve the system's extensibility and make it ready for practical development.

In our user study evaluating Konjak as an educational tool, we designed the study like a lesson that trains novices in solving shape inconsistency problems, and collected their first-use impressions of Konjak. We did not compare Konjak with the print statement or with coding-free DNN network modelers. As far as we know, Konjak is the first work to explore a live programming environment for text-based DNN programming. We hope this work can become a first step toward accelerating research on improving the DL development experience.

Also, our participants identified some implementation issues in Konjak's prototype. We believe they can be overcome with engineering effort.

## 9 CONCLUSION

We have proposed a system called Konjak that augments a text-based code editor with a synchronized, editable, and representative live visualization to support novices in DNN programming as an educational tool. We revisited DNN structure diagram designs from machine learning research papers and existing DNN visualizers and extracted design principles, especially for the educational context and the live programming environment. The system provides bidirectional editing between the code editor and the live visualization, plus instant tensor shape checking to avoid the common shape inconsistency error in DNN programs. An exploratory user study was conducted to evaluate Konjak in an educational situation.

## ACKNOWLEDGEMENTS

Removed for review.

## REFERENCES

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M.
Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (OSDI), pp. 265-283. USENIX Association, USA, 2015.

[2] A. Agrawal, A. N. Modi, A. Passos, A. Lavoie, A. Agarwal, A. Shankar, I. Ganichev, J. Levenberg, M. Hong, R. Monga, et al. Tensorflow eager: A multi-stage, python-embedded dsl for machine learning. arXiv preprint arXiv:1903.01855, 2019.

[3] M. Agrawala, W. Li, and F. Berthouzoz. Design principles for visual communication. Communications of the ACM, 54(4):60-69, 2011. doi: 10.1145/1924421.1924439

[4] S. Amershi, A. Begel, C. Bird, R. DeLine, H. Gall, E. Kamar, N. Nagappan, B. Nushi, and T. Zimmermann. Software engineering for machine learning: A case study. In Proceedings of IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), pp. 291-300. IEEE, Montreal, QC, Canada, 2019. doi: 10.1109/ICSE-SEIP.2019.00042

[5] K. Asai, T. Fukusato, and T. Igarashi. Plotshop: An interactive system for designing a 2d data distribution on a scatter plot. In The Adjunct Publication of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST '19, pp. 19-20. ACM, New York, NY, USA, 2019. doi: 10.1145/3332167.3357101

[6] K. Asai, T. Fukusato, and T. Igarashi. Integrated development environment with interactive scatter plot for examining statistical modeling. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2020), pp. 328:1-328:7. ACM, New York, NY, USA, 2020. doi: 10.1145/3313831.3376455

[7] C. J. Cai and P. J. Guo. Software developers learning machine learning: Motivations, hurdles, and desires.
In Proceedings of IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 25-34. IEEE, Memphis, TN, USA, 2019. doi: 10.1109/VLHCC.2019.8818751

[8] K. Caine. Local standards for sample size at chi. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pp. 981-992. ACM, New York, NY, USA, 2016. doi: 10.1145/2858036.2858498

[9] Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng. Dual path networks. In Proceedings of Advances in Neural Information Processing Systems, pp. 4467-4475. Curran Associates, Inc., Long Beach, USA, 2017. https://papers.nips.cc/paper/7033-dual-path-networks.pdf.

[10] F. Chollet. Keras. https://github.com/fchollet/keras, 2015.

[11] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

[12] D. Goldberg. What every computer scientist should know about floating-point arithmetic. ACM Comput. Surv., 23(1):5-48, Mar. 1991. doi: 10.1145/103162.103163

[13] Google Cloud. Automl, custom machine learning models. https://cloud.google.com/automl, 2018.

[14] M. Haverbeke. Codemirror, 2020. https://codemirror.net/.

[15] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2961-2969, 2017.

[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

[17] B. Hempel, J. Lubin, and R. Chugh. Sketch-n-sketch: Output-directed programming for svg. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, pp. 281-292. ACM, New York, NY, USA, 2019. doi: 10.1145/3332165.3347925

[18] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708, 2017.

[19] M. J. Islam, R. Pan, G. Nguyen, and H. Rajan. Repairing deep neural networks: Fix patterns and challenges. In Proceedings of the 42nd International Conference on Software Engineering (ICSE'20), pp. 1-12. ACM and IEEE Computer Society, Seoul, South Korea, 2020.

[20] H. Kang and P. J. Guo. Omnicode: A novice-oriented live programming environment with always-on run-time value visualizations. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST), pp. 737-745. ACM, New York, NY, USA, 2017. doi: 10.1145/3126594.3126632

[21] H. Kato, Y. Ushiku, and T. Harada. Neural 3d mesh renderer. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

[22] J. Kato, T. Nakano, and M. Goto. Textalive: Integrated design environment for kinetic typography. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 3403-3412. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702140

[23] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105. Curran Associates, Inc., Lake Tahoe, Nevada, USA, 2012. doi: 10.1145/3065386

[24] S. Kross and P. J. Guo. Practitioners teaching data science in industry and academia: Expectations, workflows, and challenges. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-14. ACM, New York, NY, USA, 2019. doi: 10.1145/3290605.3300493

[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. doi: 10.1109/5.726791

[26] S. Lerner. Projection boxes: On-the-fly reconfigurable visualization for live programming.
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, pp. 1-7. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376494

[27] Live Prog Blog. A history of live programming. http://liveprogramming.github.io/liveblog/2013/01/a-history-of-live-programming/, 2013.

[28] A. ML. Papers with code: the latest in machine learning. https://paperswithcode.com/, 2020.

[29] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds., Proceedings of Advances in Neural Information Processing Systems 32 (NeurIPS), pp. 8024-8035. Curran Associates, Inc., Vancouver, Canada, 2019.

[30] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[31] D. Smilkov, S. Carter, D. Sculley, F. B. Viégas, and M. Wattenberg. Direct-manipulation visualization of deep networks. arXiv preprint arXiv:1708.03788, 2017.

[32] Sony Network Communications Inc. Neural network console. https://dl.sony.com, 2018.

[33] S. G. Tamilselvam, N. Panwar, S. Khare, R. Aralikatte, A. Sankaran, and S. Mani. A visual programming paradigm for abstract deep learning model development. In Proceedings of the 10th Indian Conference on Human-Computer Interaction, pp. 1-11. ACM, New York, NY, USA, 2019. doi: 10.1145/3364183.3364202

[34] S. L. Tanimoto. A perspective on the evolution of live programming. In Proceedings of the 1st International Workshop on Live Programming (LIVE), pp. 31-34. IEEE, San Francisco, CA, USA, 2013. doi: 10.1109/LIVE.2013.6617346

[35] S. Tokui, R. Okuta, T. Akiba, Y. Niitani, T. Ogawa, S.
Saito, S. Suzuki, K. Uenishi, B. Vogel, and H. Yamazaki Vincent. Chainer: A deep learning framework for accelerating the research cycle. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), pp. 2002-2011. ACM, New York, NY, USA, 2019. doi: 10.1145/3292500.3330756

[36] K. Wongsuphasawat, D. Smilkov, J. Wexler, J. Wilson, D. Mane, D. Fritz, D. Krishnan, F. B. Viégas, and M. Wattenberg. Visualizing dataflow graphs of deep learning models in tensorflow. IEEE Transactions on Visualization and Computer Graphics, 24(1):1-12, 2017. doi: 10.1109/TVCG.2017.2744878

[37] Q. Yang, J. Suh, N.-C. Chen, and G. Ramos. Grounding interactive machine learning tool design in how non-experts actually build models. In Proceedings of the 2018 Designing Interactive Systems Conference, DIS '18, pp. 573-584. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3196709.3196729

[38] G. X. Yu, T. Grossman, and G. Pekhimenko. Skyline: Interactive in-editor computational performance profiling for deep neural network training. In Proceedings of the 33rd ACM Symposium on User Interface Software and Technology (UIST'20), 2020.

[39] T. Zhang, C. Gao, L. Ma, M. Lyu, and M. Kim. An empirical study of common challenges in developing deep learning applications. In Proceedings of IEEE 30th International Symposium on Software Reliability Engineering (ISSRE), pp. 104-115. IEEE, Berlin, Germany, 2019. doi: 10.1109/ISSRE.
2019.00020 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/I0HZJde1BxQ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/I0HZJde1BxQ/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..ccf3072a4ba5a797716e9b3514d6df1620b05eef --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/I0HZJde1BxQ/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,211 @@

§ KONJAK: LIVE VISUALIZATION IN DEEP NEURAL NETWORK PROGRAMMING AS A LEARNING TOOL

Anonymous Authors*

Author's Affiliation

§ ABSTRACT

Visualization in deep neural network (DNN) development could play a key role in helping novice programmers inspect and understand a network structure. However, these visualizations are usually available only after the DNN program has been implemented. We propose combining a code editor with a live visualization of the DNN structure to assist machine learning novices during DNN code development. The user assigns operation blocks and input data sizes in the code editor, and the system continuously updates the network visualization. The visualization is also editable: the user can directly use drag-and-drop operations to build a network. To our knowledge, we are the first to tightly combine a text-based programming editor with a live and editable visualization as an educational tool for DNN programming, which can help learners understand the concept of shape consistency. This paper describes the system's design rationale and presents an exploratory user study evaluating its effectiveness as a learning environment.
Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods

§ 1 INTRODUCTION

In recent years, deep neural networks (DNNs) have surged in popularity across many classical machine learning tasks, such as image classification [16, 18, 23, 30], object detection [15], text generation [11], and rendering [21]. As a sub-domain of machine learning, DNNs show impressive performance in most of the tasks above, even surpassing human-level accuracy. DNNs' powerful ability to dig insights out of data makes them one of the most popular tools for researchers and practitioners.

For non-expert machine learning users, such as software engineers, medical doctors, and artists, DNNs are attractive for applications in their domains. Libraries like Tensorflow [1, 2] and Pytorch [29] provide high-level APIs that enable more approachable model building without losing the flexibility expert users need to customize details of their models. In general, DNNs are sequences of mathematical functions (a.k.a. layers) that process data in the form of multi-dimensional arrays (a.k.a. tensors), and the arguments of these functions determine the legal tensor shapes that can be processed. The programmer needs to consider the alignment between layer arguments and the assumed input data shape at an early stage of DNN programming. However, this is not an easy task for a novice machine learning user. We describe in detail the shape inconsistency error that this misalignment leads to in Sect. 3.

By observing expert machine learning users and rethinking our own experience, we noticed that the network diagram plays a crucial role in DNN programming practice. DNN developers and researchers often draw node-link diagrams on a whiteboard for DNN structure communication [36] as well as scholarly communication.
In current DNN programming practice, visualization is optional and only available after the training phase. These visualization tools enable network validation at a very late stage of the DNN modeling procedure, even though visualization could guide the user during code editing. Tools such as A Neural Network Playground [31] introduce manipulable visualization into DNN education to help explain the mechanism, but do not show the corresponding text-based code. This prevents novice users from improving their DNN programming skills, which usually involve text-based programming.

Figure 1: Konjak shows live visualization next to the text-based code editor, providing continuous feedback on the Deep Neural Network structure to the programmers. The visualization also supports direct manipulation to edit the corresponding text code.

We propose Konjak, a system that augments a standard text-based code editor with an always-on, editable, live network structure visualization to help machine learning novices learn DNN programming, as shown in Fig. 1. Konjak enables higher liveness than the current practice of DL system development: programmers can bidirectionally check and edit the DNN through the synchronized code panel and visualization panel. We provide an automated tensor shape checker to help users, especially novices, tackle shape inconsistency errors where and when they occur. By comparing the code with the adjacent DNN visualization and repeatedly editing either of them, the programmer can qualitatively improve their DNN programming skill. We contribute to Human-Computer Interaction (HCI) as follows:

* A literature study on DNN visualization based on figures from machine learning academic papers and existing visualization tools. From it, we summarize visual principles for the DNN programming environment.

* A novel DNN programming environment for teaching non-expert programmers about DNN modeling and the programming paradigm.
The programmers can edit the neural network in the editable live visualization and check tensor shapes in real time.

* An exploratory user study showing that Konjak helps in two ways: novice machine learning users could get hands-on with the DNN programming paradigm and fix layer-tensor alignment during programming, while experienced DNN programming trainers could teach these skills to learners by demonstrating text-based editing and its effect on the visualization, or vice versa.

*e-mail: Author's email

§ 2 RELATED WORK

§ 2.1 DEEP NEURAL NETWORK BUGS AND REPAIRING

In practical DL system development, developers mainly use modern DL frameworks such as Tensorflow [1], Chainer [35], Pytorch [29], and Keras [10] to programmatically build their models. These embedded domain-specific languages (DSLs) provide packaged functions and layers for DNN programming and are kept up to date to support the newest statistical functions proposed in the machine learning community. Many studies have researched the challenges a programmer may face in DL system development. Amershi et al. surveyed software engineers from Microsoft teams and found that the more experience programmers have with machine learning software engineering, the more they consider the use of "AI tools" a challenge [4]. Cai et al. investigated software engineers' motivations, hurdles, and desires in shifting to machine learning engineering and found that the "implementation challenge" cannot be ignored [7]. Zhang et al. and Islam et al. looked deeper into the DL programming process by analyzing posts from StackOverflow and repositories on GitHub [19, 39]. According to them, "Program Crash" is a common category of bugs in deep learning systems, and among this type, "Shape Inconsistency" is one of the most questioned bug types. "Shape Inconsistency" refers to runtime errors caused by mismatched multi-dimensional arrays between operations and layers [39].
To ensure that array shapes do not differ from the developer's intended mental model, the programmer may repeatedly insert print statements, or visualize the model only after editing of the network definition file is over, repeating this process until they are satisfied with the network's structure. Konjak is designed based on our observation of this specific type of bug. With "Level 3" liveness in DNN programming (as explained in the following subsection), the novice programmer can more efficiently master the mechanics of DNN development. + +§ 2.2 CODING-FREE DNN DEVELOPMENT TOOLS + +In response to the growing desire to become a more professional programmer in data science $\left\lbrack {4,7,{24}}\right\rbrack$ , Konjak serves as a novel learning environment for DL programming. AutoML [13] provides a commercial online service that allows users to upload their data and have it automatically analyzed without touching the complicated machine learning algorithms and programs. It is end-to-end: the only thing the user needs to do is prepare the desired input-output data pairs, and the service will pick a proper model and train it automatically. The pipeline is thus simple enough for end-users with no coding experience at all, but it was not designed with teaching the machine learning programming paradigm in mind. Neural Network Console [32] by Sony and DL-IDE [33] by IBM provide the user with a fully graphical interface for DNN modeling using a block-based visual programming language, without exposing code to the user. In Tanimoto's classification [34], both offer "Level 2" liveness in DNN modeling, meaning the network structure diagram is editable and executable but not always responsive to the user's edits. However, these coding-free DNN development tools are limited in capability and flexibility; much more can be achieved by coding directly with the frameworks.
According to the study by Qian et al. [37], in data science model building, experts prefer graphical tools for communication and education, while non-experts prefer a code editor where they can start from existing code. To fill this gap by providing both modes in one user interface, Konjak is initially designed for novices' DL programming education. We retain the code editor and experiment with "Level 3" liveness in modern DNN modeling, since the visualization and code update in nearly real time whenever the user edits the model in the interface. + +§ 2.3 LIVE PROGRAMMING ENVIRONMENT + +Our system belongs to the category of live programming environments, a concept with a long history $\left\lbrack {{27},{34}}\right\rbrack$ . The core of these techniques is liveness in the programming experience: the under-development program continuously provides the programmer with immediate feedback, so that an evaluation phase can be integrated with the code-editing phase to some degree [27]. By reducing the latency between writing code and checking its behavior [34], the programmer gains a better understanding of the effect their edits have on the program. A number of live programming environments have been implemented for different domains, each emphasizing domain-specific features. For example, TextAlive [22] allows editing computer graphics animation algorithms whose rendering results are shown next to the code editor.
Omnicode [20] provides an always-on visualization that presents all numerical values throughout the whole execution history to give the user a better understanding of the program; Projection Boxes [26] are interaction techniques enabling on-the-fly configuration of such always-on visualizations; Sketch-n-Sketch [17] contributes an output-oriented programming interface to bidirectionally (code $\Leftrightarrow$ screen) create and manipulate graphical designs (scalable vector graphics [SVG]); Plotshop [5, 6] augments a text-based code editor with an interactive scatter plot editor so that the user can more intuitively author synthetic $2\mathrm{D}$ point datasets for testing machine learning algorithms. Skyline [38] is similar to Konjak in concept and background but focuses on in-editor DNN computation performance profiling. + +§ 3 BACKGROUND ON DNN PROGRAMMING + +As stated in Sect. 2, many previous works have focused on obstacles that a user might meet while writing DL programs and using DL in their systems $\left\lbrack {4,7,{19},{39}}\right\rbrack$ . In this paper, we focus in particular on model structure comprehension and tensor shape inconsistency during DNN programming. This section briefly introduces the background of DNN programming practice and common bugs in DL development. + +a) DNN programming using DL libraries: Writing a program to define a DNN amounts to assembling a series of mathematical functions into a sequence. The finished function sequence is called a network, and each mathematical function is identified as a layer in the network, which may have weights. In practice, DL libraries provide common layers and tools for building up a network, which greatly eases the programmer's burden in the network programming phase.
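This layer-assembly step can be sketched framework-agnostically in plain Python; the `compose` helper below is our own illustration (not part of any DL library), with scalar functions standing in for real tensor-valued layers:

```python
def compose(*layers):
    """Chain a sequence of layer functions into one network function."""
    def network(x):
        for layer in layers:
            x = layer(x)  # each layer transforms the running activation
        return x
    return network

# Toy "layers": scalar stand-ins for real tensor-valued operations.
double = lambda x: x * 2   # stand-in for, e.g., a weighted layer
shift = lambda x: x + 1    # stand-in for, e.g., an activation function
net = compose(double, shift)
```

Real libraries package this composition (plus weights and automatic differentiation) behind classes such as Chainer's `chainer.Chain`.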
Once the network is assembled, the programmer writes scripts to define the training process, in which the DNN is created as an instance, also called a model, whose weights iteratively learn from batches of input data. At their core, DNN models receive task-specific data and output predictions. In each iteration of the training process, the model makes a prediction on the input data batch, and based on the prediction errors, the model's weights are updated to reduce those errors. In this process, layers receive and emit data in the form of multi-dimensional arrays; e.g., for an image input to the network, the array is usually of shape (batch size, channel size, height, width). Such a multi-dimensional array is commonly called an activation or tensor. After training is finished, the programmer evaluates the performance of the trained model and iteratively improves it by adjusting arguments and the network structure. Optionally, the programmer may deploy the model into an actual system. + +b) Common bugs in DNN programming: Bugs in DNNs can be classified as explicit or implicit. Explicit bugs crash the program and abort the training or evaluation process. Implicit bugs do not produce any errors during program execution but cause symptoms like abnormal training or low prediction accuracy. In this work, we focus on explicit bugs that crash the program. According to Zhang et al.'s study of DL-related questions collected from StackOverflow [39], the most common explicit bugs that cause program crashes in DNN programs are: 1) Shape inconsistency: Layers in a network are defined with several arguments and can only receive tensors of a specific shape.
If a layer's arguments mismatch the input tensor shape (e.g., in an image classification task, a layer would produce a tensor with non-integer height or width), execution is aborted with an error; 2) Numerical error: data in DNNs is mainly represented as floating-point values [12], and inconsistent numerical types easily raise undesired errors, e.g., when a float64 tensor is input to a layer with float32 weights; 3) CPU/GPU incompatibility: GPUs play a vital role in accelerating DNN training and evaluation iterations, but given a trained model and its published code, execution may fail on a CPU-only machine because the code is GPU-only. These bugs are the most frequently asked about in DL application-related questions and do not occur in conventional non-DL applications. + + < g r a p h i c s > + +§ 4 A LITERATURE STUDY ON DNN VISUALIZATION + +We address novices' learning of DNN programming by introducing a synchronized visualization bound to text-based programming. Prior to designing the system, we first investigated common network drawing practices for DNNs so that our interface could reflect the programmer's mental model of the model under development. We follow the procedure for creating more effective domain-specific visualizations for communication proposed by [3]. We collected hand-drawn DNN structure diagrams from DL papers to build a database. The papers were picked from Paperswithcode.com [28] by visiting leaderboards in three computer vision areas (i.e., image classification, object detection, and semantic segmentation). Besides DL papers, we also drew insights from existing DNN model visualizers, which automatically render a trained model matrix or a manually input network definition into a visual representation. + +We analyzed the DNN visualization database (including diagrams from papers and those synthesized by tools) and categorized the visualizations in terms of visual encoding.
In the DNN visualization database, it is common to draw the network structure as a node-link diagram. We noticed two branches in the design decision of what a node represents (see Table 1): some diagrams use nodes to represent tensors, with the adjacent links representing the layers that process them; other diagrams, on the contrary, emphasize layers as nodes, in which case links indicate the tensors' flow through the network. + +Another important design choice is how to draw the nodes, especially in diagrams where tensors are emphasized as nodes. In our observation, three answers are given to this choice. One is to draw the node as a 3D box, the style adopted in the network structure diagram of AlexNet [23], the paper that opened DL's new age in 2012. Tensor data in a DNN can be a 1-D vector or an n-D array, and each dimension's size affects the network's correctness; drawing tensors as 3D boxes gives the user an instinctive representation of the tensor's concrete shape. The second style is to draw the node as stacked sheets. Here tensors are represented as stacked rectangles: the size of each rectangle encodes the tensor's map size (width and height), and the number of rectangles encodes the tensor's channel count. This style was adopted by LeCun et al. for the famous 1998 DNN structure LeNet. A very limited number of diagrams choose a flow-like style to represent the flow of tensors. This style encodes tensors as "rivers," whose width represents the tensor's channel count; however, the tensor map's width and height information is omitted. On the other hand, in diagrams where a node represents a layer, the node is usually drawn as a plain rectangle. + +As stated in the previous section, "Shape Inconsistency" is the main problem we want to solve in novices' DNN programming learning.
Therefore, for the first choice (what a node represents), we pick tensors as the nodes that make up the structure graph. For the second choice (how to draw a node), we choose the 3D box to show the tensor's shape as fully as possible. Nevertheless, the layers' information should still be present in the visualization; thus, in our design, we devote a separate panel of the interface to listing all the layers the user defines in code. + + < g r a p h i c s > + +Figure 2: Screenshot of the code editor at the left half of the user interface: (a) the code editor, (b) the program error log panel, and (c) the inline shape consistency indicator. + +§ 5 KONJAK + +We implemented a prototype system, called Konjak, to show the feasibility and effectiveness of live visualization for teaching novices DNN programming and accelerating their progress toward DL software engineering. It is implemented as a web-based application written in JavaScript. We use Python as the user-facing language and Chainer [35] as the DNN API. We chose Chainer because it was one of the most popular DNN libraries at the time of implementation; moreover, its pioneering dynamic eager-execution design has deeply inspired later DNN libraries' API design $\left\lbrack {2,{29}}\right\rbrack$ . Keeping the educational purpose in mind, we now visit every component of Konjak's interface and explain its design motivation and functions. The user interface consists of two tightly interlinked main components: the code editor and the live visualization. + +§ 5.1 CODE EDITOR + +The left half of the screen holds the code editor component, where the user writes a program as in a standard general-purpose programming workflow (see Fig. 2). It consists of three child components: a) the text-based code editor, b) the problem panel, and c) the inline shape check and highlight. + +a) Text-based code editor: We utilize CodeMirror [14] as the backbone of the text-based code editor in the browser.
The user writes a DNN structure program in this area. Note that although the prototype only supports Chainer, without loss of generality the interface can be transferred to other popular DNN libraries like PyTorch due to their similar API design. A DNN structure in Chainer usually starts by inheriting from the built-in class `chainer.Chain`, a class for defining a neural network composed of several layers (`chainer.links` or `chainer.functions`). In the actual code structure, the user defines the layers to use in the network in the function `__init__` by assigning `chainer.links` instances to attributes. For example, the code `self.c0 = L.Convolution2D(3, 16, 3, 1, 1)` assigns a 2D convolution layer instance to the attribute `c0`, with the convolution layer's arguments read as "use a $3 \times 3$ kernel to convolve a 3-channel tensor with stride 1, and output a 16-channel tensor; in the convolution, the input tensor map is spatially padded with width 1 for each channel". + +The other necessary component in the code is the function `__call__`, where the user gradually connects the layers defined in `__init__` to the input data. When the network definition is done, the user is supposed to give a shape definition of the input data in a comment; e.g., in line 23, `x = (32, 128)` means `x` is a tensor of shape ${32} \times {128} \times {128}$ . In our current prototype, we assume for simplicity that the input data's width and height are equal. Konjak provides a live programming environment that continuously parses the under-development program and updates its visualization: every time the user stops typing for 0.5 seconds, the browser sends the program to a server to have it checked for further visual feedback.
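To make the shape-comment convention concrete, a minimal parser could look like the following. This is our own sketch (Konjak's actual parsing code is not specified at this level of detail); it assumes a `#`-prefixed Python comment and expands `(channels, size)` to `(channels, size, size)` per the square-input simplification above:

```python
import re

def parse_shape_comment(line):
    """Parse a comment like '# x = (32, 128)' into (name, (C, H, W)).

    Assumes square inputs, as in the prototype: '(channels, size)'
    expands to (channels, size, size). Returns None if no match.
    """
    m = re.search(r"#\s*(\w+)\s*=\s*\(\s*(\d+)\s*,\s*(\d+)\s*\)", line)
    if m is None:
        return None
    name, channels, size = m.group(1), int(m.group(2)), int(m.group(3))
    return name, (channels, size, size)
```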
 + + < g r a p h i c s > + +Figure 3: Screenshot of the live visualization at the right half of the user interface: (e) the layer card bar listing all layers defined in the program, and (d) the interactive graph visualization panel for visualizing the network structure and tensor shapes. + +b) Problem panel: After the program is sent to the server, its syntax is first checked by the library Pylint. If no errors are found, the server parses the received program into an AST and sends a simplified AST back to the browser in JSON format. Konjak renders the received JSON into the visualization in the user interface and prints any error message in the problem panel, which can be a syntax error detected by Pylint or a non-syntax error (tensor shape inconsistency). The state of the program is encoded in the panel's background color to notify users: red for error and green for success. The line number is included in the message to help users locate errors. + +c) Inline shape check and highlight: Synchronized with the problem panel, an in-situ tensor shape inconsistency indicator is activated on the lines where the `__call__` function is defined. Applying exactly the same color encoding as the problem panel, the line where the inconsistency occurs turns red, and the indicator stops at that line; otherwise, all lines in the `__call__` function are shown green. + +§ 5.2 LIVE VISUALIZATION + +The live visualization (see Fig. 3), which occupies the right half of the user interface, is designed to help novices check the DNN structure live and interactively connect layers to tensors. Following the design principles described in the last section, we use two sub-panels to visualize the neural network: d) the layer card bar and e) the graph visualization panel. + +d) Layer card bar: We retain a bar that lists all layers defined in the function `__init__`.
In this bar, all the layer instances assigned to the network's attributes are drawn as separate cards, and Konjak places the cards vertically in the narrow bar, following the order in which the layer instances are created in the program. These layer cards are responsive to the program in the code editor: when the user adds or deletes lines to create or delete layer instances, the corresponding layer cards simultaneously appear or disappear. Taking the characteristics of the different layer types in DNNs into consideration, we present a brief and explanatory layer card design. + +Fig. 4 shows some examples of layer cards in Konjak. In the top $2/3$ of the card, we draw different sketches to represent different types of neural network layers. These brief sketches also indicate the layer's effect on the tensor shape; e.g., a linear layer flattens a tensor into a fixed-length vector, and a max-pooling layer down-samples a tensor by picking the maximum value in each tile, reducing the original data volume. We show the layer's parameter information in the bottom $1/3$ for those layers that only receive and emit specific data sizes. At the left end of the cards, we encode the layer categories into a colored strip (orange for convolution, purple for normalization, ash for linear, red for activation, dark blue for dropout, and green for pooling). Every time the user edits parameters that create a layer instance in the code editor, the corresponding layer card updates its drawing and information. + + < g r a p h i c s > + +Figure 4: Examples of layer card designs. + + < g r a p h i c s > + +Figure 5: Drag-and-drop function in Konjak's live visualization. (a) The user drags a layer card and drops it on the tensor node they want to connect the layer to, and the graph visualization updates to reflect the connection; meanwhile, (b) a new code line (red frame) is synthesized simultaneously in the code editor.
 + +e) Graph visualization panel: This is the heart of Konjak, interacting with all other components in the user interface. Like the layer card bar, it mainly provides the user with a synchronized visual representation of the network structure in the form of a node-link graph. To keep the user's mental image of tensor shapes clear, we draw 3D tensor nodes as 3D boxes and 2D tensor nodes as 2D bars, with the tensor shape printed next to the box (Channel $\times$ Height $\times$ Width). The color encoding of the cube or $2\mathrm{D}$ bar is consistent with the layer cards but here indicates the type of the layer that produced the tensor. As a special type of tensor, input data is encoded as grey cubes. An $h$ or $x$ beside a tensor node means that tensor is assigned to a variable in the program. + +The graph visualization panel interacts with the layer card bar through a drag-and-drop feature. The user can click and drag a layer card, then drop it on a tensor node that carries a variable name to connect the layer to that tensor. After the drag-and-drop operation, a new node appears as the next tensor, and the variable information updates simultaneously. This operation also affects the code editor, synthesizing new code lines at the relevant line numbers from the visualization. We implement this feature by binding the visualization elements to the relevant line number and offset information in the program: when a drag-and-drop operation is conducted, Konjak appends the synthesized code after the relevant last line, and the browser then sends the updated program back to the server and re-renders the visualization based on the server's response. The editable visualization, together with the synchronized update from code editing to visualization, builds a bidirectional DNN editing experience for novices.
To make the binding between program and visualization tighter, when the user moves the cursor in the code editor, the cursor's line movement triggers highlights in the relevant visual parts. In reverse, if the user's mouse hovers over a link (layer) or node (tensor) in the graph visualization panel, the corresponding layer card shows a different background color, and the relevant code lines become bold. + +§ 6 USAGE SCENARIOS + +In the context of a DNN programming curriculum and a novice's first DNN programming exploration, we present two usage scenarios as examples of how Konjak can be used. + +§ 6.1 CONVOLUTION LAYER SETTING PLAYGROUND + +Compared to other layers that do not affect the input tensor's shape or simply reduce the tensor's size by half, the convolution layer's output tensor shape is influenced by many of the parameters in its definition. The formula to calculate the output tensor shape of a convolution layer is: + +$$
{C}_{o} = K,\qquad {H}_{o} = \frac{{H}_{i} - F + 2P}{S} + 1,\qquad {W}_{o} = \frac{{W}_{i} - F + 2P}{S} + 1 \tag{1}
$$ + +where the input tensor is of size $\left( {{C}_{i},{H}_{i},{W}_{i}}\right)$ , the output tensor is of size $\left( {{C}_{o},{H}_{o},{W}_{o}}\right)$ , and the convolution layer's arguments are defined as L.Convolution2D(in_channels=$C_i$, out_channels=K, ksize=F, stride=S, pad=P). Some specific parameter settings are commonly used in neural network implementations, like (ksize $= 3$ , stride $= 1$ , pad $= 1$ ) to keep the input tensor size and (ksize=4, stride=2, pad=1) to reduce the size by half. Konjak can act as a playground for a novice to easily experiment with different convolution layer parameter settings, always with instant visual feedback to help them see what each parameter does. + + < g r a p h i c s > + +Figure 6: Konjak enables DL programming learners to solve shape consistency problems iteratively at a much lower trial-and-error cost.
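Eq. 1 translates directly into a small shape checker. The sketch below is our own (the function name is hypothetical, not Konjak's API); note that real DL frameworks typically floor the division, whereas here a non-integer size raises an error to mirror the shape inconsistency bug described in Sect. 3:

```python
def conv2d_output_shape(in_shape, out_channels, ksize, stride=1, pad=0):
    """Output shape of a 2D convolution over a (C, H, W) input, per Eq. 1.

    Raises ValueError when (N - F + 2P) is not divisible by S, i.e.
    when the resulting size would be non-integer (shape inconsistency).
    """
    _, h_in, w_in = in_shape

    def out_size(n):
        numerator = n - ksize + 2 * pad
        if numerator % stride != 0:
            raise ValueError("non-integer output size (shape inconsistency)")
        return numerator // stride + 1

    return (out_channels, out_size(h_in), out_size(w_in))
```

With the two common settings mentioned above, `(ksize=3, stride=1, pad=1)` preserves a 32 x 32 map, and `(ksize=4, stride=2, pad=1)` halves it to 16 x 16.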
 + +§ 6.2 SOLVING SHAPE INCONSISTENCY + +Live visualization largely reduces novices' trial-and-error cost in solving bugs related to tensor shapes. Consider a scenario where a novice is re-implementing a network structure to fit their own application needs. In Step 1, the user starts by copying and pasting a DNN program template, then defines the layers that will be used in the network. Layer parameters do not need to be filled in precisely at the beginning, because Konjak's visualization is synchronized: even when parameters are picked improperly, our system can tell the user where they are wrong. With layers defined with randomly picked initial parameters in the code editor, the user can use the drag-and-drop function to connect the network quickly in Step 2. The user will then notice the error message in the problem panel and the inline highlight, and can locate the layer that causes the shape error. In Step 3, the user alternately checks the live visualization, goes back to the layer definition to modify parameters, and repeats this process until the tensor shapes shown in the visualization are satisfactory. Compared to the current practice of using print statements or a visualization tool after the code editing phase, the trial-and-error cost here is reduced to less than one second. + +§ 7 USER STUDY + +As an educational tool to help novices learn DNN programming, we hope Konjak's liveness and visualization can 1) help form a proper mental model of DNN structure and 2) shorten the input/feedback cycle in the learning stage. To evaluate these points in an educational context, we ran an exploratory first-use study to get feedback from both DL novices (learners) and experienced DL developers (trainers). We invited 12 participants (nine males and three females), aged 22 to 28, with programming experience ranging from two to 10 years.
P1 - P4 are Deep Learning engineers who mainly act in the role of a trainer, and P5 - P12 are internal lab members identified as novices in DNN programming (having nearly no knowledge or only basic knowledge about DNNs). Note that we determined the number of participants (sample size) based on the professional standards for this type of study within the HCI community [8]. + +§ 7.1 PROCEDURES + +The study lasted 60 minutes for each participant. We started with a 15-minute warm-up session on Konjak's user interface and features, as well as the basics of DNN programming. During the session, the participant was allowed to actually try Konjak to get familiar with the system. Next, we presented two tasks that are common in the DL programming learning situation and were described in Sect. 6. We designed the tasks by talking to expert machine learning users. Each task lasted 15 minutes. After the task phase, we conducted an interview and questionnaire in the remaining 15 minutes. Note that all participants worked on the same tasks. P5 - P12 are the main target participants, simulating the DNN programming learning situation, and P1 - P4 were mainly recruited to provide extra reviews of our system from the perspective of an educator. The tasks are: + + * Task 1: Given desired input and output tensor sizes, explore proper layer parameter settings to complete the tensor shape transformation. + + * Task 2: Given a DNN structure diagram cropped from a research paper (LeNet [25]), try to implement it in Konjak. + +Table 2: Summary of post-study questionnaire results. + +| # | Items in Questionnaire | $\mu$ | $\sigma$ |
| --- | --- | --- | --- |
| 1 | The live visualization diagram reflected my mental model towards the network structure well. | 4.42 | 0.51 |
| 2 | Highlight features (code2visualization, visualization2code) helped me to locate code/visual elements from each other. | 4.00 | 1.04 |
| 3 | The design to divide the visualization panel into a graph visualization panel and a layer card bar is intuitive for Chainer's code structure. | 4.67 | 0.49 |
| 4 | Using nodes to represent tensors in the visualization, and the shape check feature, helped me in exploring a shape-consistent solution in DNN implementation. | 4.58 | 0.67 |
| 5 | Konjak's network and layer card drawings are confusing for me to understand. | 1.92 | 0.90 |
| 6 | The consistency between the code editor and the visualization helped me observe the DNN program more conveniently. | 4.50 | 0.67 | + +§ 7.2 RESULT + +All but two (P8 and P11) of our participants finished the two simulated learning tasks using Konjak within the given time. Table 2 shows the post-study questionnaire results, summarized from six 5-point Likert-scale questions (from 1, Strongly Disagree, to 5, Strongly Agree). These questions covered the core concept of utilizing live programming in the DNN educational context as well as detailed design points of the visualization. Regarding the sketches drawn on the layer cards and network graph to show the different layers' functionality, because they mainly originate from our own understanding, we included a question in the questionnaire to survey users' reactions to our concern that the drawings may confuse novices. We also surveyed participants' willingness to use live visualization to train other novices in DNN programming in the future. Here we summarize their feedback in three aspects: + +Assist DNN implementation and debugging: Some feedback described how the live visualization helped participants implement DNN programs and debug parameters. Compared to the traditional practice of checking tensor shapes using print statements, Konjak enables a shorter route in debugging the DNN program.
P6 said: "With the highlight feature and real-time visualization, I can write the program while making sure whether previous lines are correct or not." Echoing our motivation for designing Konjak, P7 expressed that the system indeed reduces shape inconsistency in DNN implementation: "Konjak's interactive feedback and tensor shape inconsistency error messages were very useful to create a DNN structure without worrying about tensor shape inconsistency too much." + +We recorded how the participants used Konjak in the study. When reviewing the video, we noticed P10 thinking aloud in Task 2: "The input image is (3, 32, 32), and the first feature map on the diagram is (6, 28, 28), so I should first fill in out_channels $= 6\ldots$ Then what about the parameters ksize and stride? It seems that stride=2 will make the output map size decrease too much, so I keep it 1 first... And ksize=1, this will make the width 32... (Changes the value to 4) Oops, it (the map size) is still not small enough... (Re-inputs 5) Okay, now it's the same as the diagram. So the next layer is a max-pooling layer..." Observing this behavior in Konjak, we found that it exactly matches the usage scenario of solving shape inconsistency presented in the last section. + +A skilled DNN programmer also provided positive feedback about the live visualization aiding DNN implementation. P1 noted the similarity between Konjak's programming experience and web development: "This reminds me of my web developing experience, where I put a code editor on the left half and browser on the right half. Since I use a hot-reloader, which automatically refreshes the browser whenever I change the code base, I have the tendency to focus on coding and use the visualizer only for reference." In this sense, we may potentially extend Konjak's features beyond an educational purpose.
 + +Aid DNN programming education: Eight participants (P5 - P12) are novices or beginners in DNN programming, and they regarded Konjak highly. P8, a CS student who had learned DL before but had nearly forgotten all of it, greatly appreciated the live visualization: "Although I almost forgot DNN, this structure helped me understand how the layers and structures work. I want to use it when I use DNN in the future." P10, identified as a DL learner, said: "(Konjak) is especially helpful in implementing a DNN while referring to a research paper." + +All of our participants agreed that, given the chance, they would tend to use a live visualization like Konjak to teach other beginners DNN programming in the future. From the perspective of a skilled DL engineer, P1 described the situation where he would use a system like Konjak to teach a novice DNN programming: "If I am going to teach someone Chainer, I probably will use this UI because after understanding how it works, the synchronization between code editor and graph is very helpful to teach the student. I will focus on teaching the student how to write code on the left panel, but occasionally if they make any mistake or don't understand what is going on with the code, then I will remind them to play with the visualization to understand what is going wrong so he can clarify his question and return to coding." + +We ran this study like a getting-started lesson for our novice participants, and for the trainer participants, it could serve as a chance to think about how to teach the DNN programming paradigm to a newcomer. Nevertheless, the programming paradigm is only one side of DL programming. According to the study by Cai et al. [7], other obstacles like mathematics knowledge still keep novices from diving in. + +Implementation issues: Some feedback concerned interface details that affected the participants' programming experience.
P5 complained about switching between the code editor and the live visualization: "It bothered me a little if I have to watch the visualization but edit the network on another side. I'd like to modify the network parts right in the visualization panel." P9 suggested another style for drawing the 3D boxes in the graph visualization panel: "Live visualization to show DNN structure is easy to understand. But the drawing of tensor nodes confused me at first. Maybe you can draw it like stacked slices, which can be a more intuitive representation of channels." + +§ 8 LIMITATION AND FUTURE WORK + +Konjak's current prototype originates from the motivation to support novices in the learning stage. For this reason, only a limited set of layers and structures can be displayed in the live visualization. The prototype only supports convolutional neural networks (CNNs) for tasks like image classification, object detection, and image segmentation, while other structures such as recurrent neural networks (RNNs) and generative adversarial networks (GANs) are out of scope. Fig. 4 covers nearly all the supported layers in the current prototype, nine layers in total. Besides the layers drawn in the figure, two activation layers (Sigmoid and Softmax) and one normalization layer (Local_response_normalization) are supported in the implementation. These six layer categories are typical because they are the main components of some classical DNNs, such as VGG16/19 [30], AlexNet [23], and ResNet [16]. In each category, we implemented at most three layers, because the layers classified into the same category share similar APIs and visual representations. In DNN programming learning, the models that novices explore may be relatively simple, but as programmers become skilled, more complicated networks (e.g., with skip links or multiple tensor flows) and customized layers are common in practical development.
In our user study, the skilled machine learning programmer participants agreed that a live visualization that always shows tensor shapes could be quite helpful even in their daily DNN programming. Extending Konjak's concept to more situations is therefore worth considering. Features like sub-net or customized-layer visualization are in our future plans. Konjak's current implementation is based on static analysis, which limits the system's applicability to an expert's practical development. We believe that instrumenting a Python interpreter and tracing its execution memory while running the actual code would greatly improve the system's extensibility and make it ready for practical development. + +In our user study to evaluate Konjak as an educational tool, we designed the study like a lesson to train novices in solving shape inconsistency problems and collected their first-use impressions of Konjak. We did not compare Konjak with print statements or other coding-free DNN network modelers. As far as we know, Konjak is the first work to explore a live programming environment for text-based DNN programming. We hope this work can become a first step toward accelerating research on improving the DL development experience. + +Our participants also identified some implementation issues in Konjak's prototype. We believe these can be overcome with engineering effort. + +## 9 CONCLUSION + +We have proposed a system called Konjak that augments a text-based code editor with a synchronized, editable, and representative live visualization to support novices in DNN programming as an educational tool. We revisited DNN structure diagram designs from machine learning research papers and existing DNN visualizers and extracted design principles, especially for an educational context and a live programming environment.
The system provides bidirectional editing between the code editor and the live visualization, together with instant tensor shape checking to help avoid the common shape inconsistency errors of DNN programs. An exploratory user study was conducted to evaluate Konjak in an educational situation. + +## ACKNOWLEDGEMENTS + +Removed for review. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/IC54qQMBFJj/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/IC54qQMBFJj/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..e53772922075bb4fc08e372bd63a5a113f52faa4 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/IC54qQMBFJj/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,437 @@ +# Algorithmic Typewriter Art: Can 1000 Words Paint a Picture? + +Roy G. Biv ${}^{ * }$ + +Starbucks Research + +Ed Grimley ${}^{ \dagger }$ + +Grimley Widgets, Inc. + +Martha Stewart ${}^{ \ddagger }$ + +Martha Stewart Enterprises + +Microsoft Research + +![01963eae-e418-7e07-ba44-3a16b02b765f_0_212_384_1368_500_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_0_212_384_1368_500_0.jpg) + +Figure 1: The left side shows 1 layer of typed characters. Moving right, overlapping layers are added up to 12 layers in 4 positions. + +## Abstract + +We develop an optimization-based algorithm for converting input photographs into typewriter art. Taking advantage of the typist's ability to move the paper in the typewriter, the optimization algorithm selects characters for four overlapping, staggered layers of type. By typing the characters as instructed, the typist can reproduce the image on the typewriter.
+ +Compared to text-mode ASCII art, allowing characters to overlap greatly increases tonal range and spatial resolution, at the expense of exponentially increasing the search space. We use a simulated annealing search to find an approximate solution in this high-dimensional search space. Considering only one dimension at a time, we measure the effect of changing a single character in the simulated typed result, repeatedly iterating over all the characters composing the image. + +Both simulated and physical typed results have a high degree of detail, while still being clearly recognizable as type art. The accuracy of the physical typed result is primarily limited by human error and the mechanics of the typewriter. + +Index Terms: Computing methodologies-Non-photorealistic rendering; Image processing + +## 1 INTRODUCTION + +Typewriter art involves producing images with typewritten text. A modern computer graphics practitioner is likely familiar with ASCII art, where an image is formed out of text characters on the screen. Typewriter art offers additional flexibility, insofar as characters are not restricted to a regular grid. Multiple characters can be typed at a single location, a practice called overstriking, and rows and columns of characters can partially overlap with previously typed rows and columns. Further, keys on the typewriter can be struck with varying levels of force, transferring greater or lesser quantities of ink from the typewriter ribbon. Varying the strike force produces a much smoother tonal range than monochrome ASCII art, which is especially important in lighter-tone regions of the image. Overstriking and overlapping improve outcomes in the darker regions of the image as well as improving detail. + +In this paper, we present an algorithm for converting an input image into typewriter art, exploiting overstriking, overlapping, and strike force to add detail and to improve tone matching.
We can directly render an output image, or produce a set of instructions that can be typed to create a physical realization of the image, in keeping with recent trends in computer graphics towards assisting computational fabrication [3]. Of course, we can create digital versions of the images, and by employing a computer font as input, bypass the typewriter entirely should that be desired by the user. + +Manually crafted typewriter art can be extremely detailed, and historically was often created using primarily the period key or other small, geometric shapes. Without restriction to a regular grid, overlapping these small characters yields both fine spatial resolution and a perceptually wide tonal range, with a texture reminiscent of pointillism or stippling. + +Considerable artistic skill was needed to craft these works. However, the typewriter also allowed users with less skill to produce images. "Typewriter mystery games" [15] provided typists with instructions that, when carried out, produced an image; e.g., see Figure 2 [4]. These instructions exploited the backspace key to enable overstriking to create much darker shades. + +In our method, we produce instructions for four separate layers, each offset by half a space horizontally, vertically, or both; formally, the layers have their respective origins at $\left( {0,0}\right) ,\left( {{0.5},0}\right) ,\left( {0,{0.5}}\right)$ , and $\left( {{0.5},{0.5}}\right)$ . Together, over-typing and half-spacing provide nearly full ink coverage. Figure 1 shows a rendered result, where superimposed layers of text cooperate to form a detailed image. + +Our algorithm, outlined in Figure 3, takes as input a target image to be reproduced on the typewriter and a scan of the typewriter's character set. The program selects the characters to be typed for each of four layers, which then overlap to produce the image. Within each layer, the placement of characters is limited to a grid.
The algorithm optimizes by measuring the effect of changing a single character in the simulated typed result, repeatedly iterating over all character positions until no change to any single character can improve the outcome. The selected characters for each layer form the typist's instructions. By following these instructions, the image can be reproduced on the typewriter without demanding any artistic skill. + +--- + +${}^{ * }$ e-mail: roy.g.biv@aol.com + +${}^{ \dagger }$ e-mail: ed.grimley@aol.com + +${}^{ \ddagger }$ e-mail: martha.stewart@marthastewart.com + +--- + +![01963eae-e418-7e07-ba44-3a16b02b765f_1_154_149_706_534_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_1_154_149_706_534_0.jpg) + +Figure 2: Instructions from Bob Neill's Book of Typewriter Art + +With respect to both tone reproduction and shape matching, this technique produces a better approximation of the input image than non-overlapping ASCII art with a similar number of characters. We posed the question "Can 1000 words paint a picture?"; we find that 5790 characters suffice for a fair facsimile of a portrait, though more resolution is required for complex scenes. + +![01963eae-e418-7e07-ba44-3a16b02b765f_1_155_1203_718_282_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_1_155_1203_718_282_0.jpg) + +Figure 3: Simplified process + +This paper makes the following contributions: + +- We exploit overlapping characters to increase the dynamic range and expressivity of text art. + +- We present an optimization algorithm for automatic creation of typewriter art, paying special attention to physical reproducibility on a mechanical typewriter. + +- We propose asymmetric mean squared error, in which positive error (too much ink) is weighted more heavily than negative error (too little ink). Subjectively, optimizing with asymmetric MSE produced the best results.
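The asymmetric MSE proposed above can be sketched in a few lines of numpy. This is a minimal sketch under our own conventions, not the paper's code: images are treated as ink densities in [0, 1], and, following Sect. 3.3, positive error (too much ink) is scaled by $1 + a$ before squaring; the function and parameter names are ours.

```python
import numpy as np

def amse(rendered, target, a=0.2):
    """Asymmetric MSE: error where the simulated result has too much
    ink (rendered > target) is scaled by (1 + a) before squaring, so
    wrongly placed ink costs more than missing ink. Both images are
    ink densities in [0, 1]; a = 0 recovers ordinary MSE."""
    err = rendered - target                       # > 0 where ink is excessive
    err = np.where(err > 0, (1.0 + a) * err, err)
    return float(np.mean(err ** 2))
```

Because later overstrike passes can add ink but never remove it, penalizing excess ink more heavily biases early passes toward under-inking, matching the rationale given in Sect. 4.1.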
+ +## 2 BACKGROUND + +Much recent work in ASCII art has focused on improving shape matching, which can be traced back to the introduction of the Structural Similarity metric (SSIM) [21]. Another metric, created specifically for ASCII art, is the Alignment Insensitive Structural Similarity Metric (AISS) [18]; Xu et al. deform the target image to better match the available character shapes at a given position. Deforming the input image poses issues for the high-fidelity approach we pursue, but the idea of optimizing the alignment of the target image proves useful. + +Conventionally, ASCII art used monospaced fonts. Xu et al. $\left\lbrack {{19},{20}}\right\rbrack$ achieved superior results using proportional-width fonts. Although this approach provides flexibility in the columns of type, the rows of type are still fixed; typewriter art allows both rows and columns to vary. + +Some recent approaches to generating ASCII art involve machine learning with no explicit metric. Akiyama employed a convolutional neural network, trained on manually created structural ASCII art, to produce compelling results for this style [2]. Markus et al. use a decision tree to approximate SSIM comparison for a particular character set, yielding a good approximation at high speed [14]. These trained-model approaches pose a problem for our interest in overlapping characters: only single characters are stored in the model, so overlapping composites of those characters would never be compared with the target image. + +In typewriter art, the characters can in principle be freely positioned, which brings to mind stippling [7]. Computer-generated stippling usually seeks non-overlapping stipples, unlike our case, which encourages overlaps.
Also, while stippling need not be restricted to points $\left\lbrack {6,{10}}\right\rbrack$ , computer-generated stippling methods do not exploit choice of object shape to better approximate an input image, whereas this is a fundamental aspect of ASCII art and typewriter art. + +Typewriter art can be seen as a variant of halftoning [8], where a continuous-tone input image is represented with a small number of discrete tones. Our output images are quantized in space (discrete character placement positions) and tone (individual character selection), with the quantization partly hidden by overlapping characters. We employ optimization directly informed by an error metric, similar to the method of Pang et al. [17]. + +## 3 ALGORITHM + +The aim of this project was to marry the mechanical reproducibility of the "typewriter mystery game" with the improved spatial resolution of freehand typewriter art by employing four overlapping layers, offset both vertically and horizontally. We developed an algorithm that considers the effect of overlapping characters. + +![01963eae-e418-7e07-ba44-3a16b02b765f_1_1224_1446_118_193_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_1_1224_1446_118_193_0.jpg) + +Figure 4: Nine offset characters overlap within one character position + +The algorithm takes as input a target image to be reproduced, and an image of a set of characters (which can be produced by scanning typed characters or from a computer font). The "character set" is chopped into a set of images, one for each character plus one blank character. + +### 3.1 Search technique + +We take an iterative approach, choosing a character for a single position at a time. Each selection answers the question "What character, placed in this position, will best complement the already-chosen characters to most closely match the target image?" + +With greedy search, we find the best character for a particular position independent of the rest.
We then do the same for all character positions in the typed image. This completes a single optimization pass. + +Algorithm 1: Simplified optimization algorithm + +--- + +Create list of character positions [charPos]: \{layer, row, col, charId\}; + +Initialize each charId in [charPos] randomly; + +for i = [1..maxIterations] do + +for pos in [charPos] do + +Create list of character scores [charScores]: \{charId, score\}; + +for char in [charScores] do + +char.score = compare(simulated typed output, target image at this position); + +end + +pos.charId = highest scoring char in [charScores] for this pos; + +end + +end + +Simulate composite typed output; + +Simulate individual typeable layers; + +--- + +Due to the use of overlapping layers, past selections may later become suboptimal: when any character overlapping a position changes, that position must be re-evaluated. For example, if all neighbours of a certain position changed from a dark character to a light one, the selection for that position should possibly be changed to a darker one to compensate. Moreover, because all the characters are connected through overlap, changing a single selection can trigger a cascade of selection changes spanning the image. + +### 3.2 Stopping condition + +A character position is considered stable if none of its neighbours have changed since it was last evaluated; thus, re-evaluating the position would lead to the same selection as before. The search terminates when no further improvement can occur by changing any one character selection. A stable state is reached when a complete optimization pass has occurred with no selection changes. + +It is possible that the algorithm will never reach a stable state, as cascades of selection changes can form cycles. To ensure termination, we limit the search to 500 selections per character position. In practice, this limit was never reached, with 25 iterations being typical.
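Algorithm 1 can be sketched in Python as follows. This is a simplified, greedy-only sketch under assumptions of ours: compositing and comparison are abstracted into a caller-supplied render function, scoring uses plain MSE, and all names are illustrative rather than the paper's implementation.

```python
import numpy as np

def optimize(target, n_glyphs, positions, render, max_iterations=500):
    """Greedy coordinate descent over character selections: for every
    position, try each glyph and keep the one whose simulated typed
    output has the lowest MSE against the target. Stops early once a
    full pass changes nothing (the stable state of Sect. 3.2)."""
    selection = {pos: 0 for pos in positions}     # glyph 0 = blank page
    for _ in range(max_iterations):
        changed = False
        for pos in positions:
            previous = selection[pos]
            errors = []
            for glyph_id in range(n_glyphs):
                selection[pos] = glyph_id
                errors.append(np.mean((render(selection) - target) ** 2))
            selection[pos] = int(np.argmin(errors))
            changed |= selection[pos] != previous
        if not changed:
            break
    return selection
```

Because overlapping layers couple the positions, one change can invalidate earlier choices, which is why the outer loop repeats until a pass is change-free rather than stopping after a single sweep.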
+ +### 3.3 Loss function + +We relied on previous work in image fidelity measurement to provide the error function used to score character selection. We also employed a variation on mean squared error (MSE): asymmetric mean squared error (AMSE), which multiplies positive error by a factor of $1 + a$ before squaring. + +A combined loss function, $\left( {1 - \mathrm{{SSIM}}}\right) \times \mathrm{{AMSE}}$ , and directly maximizing SSIM were also explored. + +### 3.4 Optimized cropping + +Before the iterative search begins, we first generate quick approximations - using a single optimization pass - for each of 64 slightly different crops of the target image. The crop parameters (translation and scale) that maximize SSIM + PSNR are applied to the input before iterative optimization. + +### 3.5 Simulated annealing + +A greedy best-first search evaluates every position along a single dimension, selecting the character which results in the highest similarity to the target image. However, there is no guarantee that this is optimal in other dimensions. The greedy search is quick to converge to a local optimum, where no single character swap will increase the similarity to the target image. + +To combat the lock-in to local optima exhibited by greedy search, we use stochastic simulated annealing (SA) to intelligently expand the search space. SA is modelled after the physical process of heating and cooling metal to reduce defects [9]. At any selection, it is more probable that a high scoring candidate will be chosen than a lower scoring one. The width of this probability distribution is decreased over iterations, as the "temperature" is reduced by a fixed amount after each optimization pass. When the temperature reaches 0, it "reheats" to a percentage of the initial temperature. + +Each time the algorithm visits a character location, it evaluates the candidate characters in a random order. If selecting the character increases the similarity score, it is chosen.
A lower scoring character may also be selected with probability inversely proportional to the delta between the current score and the score resulting from its selection. In other words, a very bad selection is possible, but a better selection is more probable, and increasingly so at each iteration, as the temperature is reduced. + +## 4 RESULTS + +### 4.1 Loss function + +As a loss function, standard MSE works very well in this application. MSE matches tone quality well, and the penalty for larger error at a single pixel gives it a sharper, shape-matching quality than mean absolute error. + +AMSE penalizes positive error (too much ink) to a greater degree than negative error (too little ink). This has two advantages: first, erroneously placed ink is subjectively more noticeable; second, an overlapping character can fill in missing ink later, while erroneously placed ink cannot be removed. In practice, AMSE improves shape matching - both subjectively and as measured by SSIM on the resultant image - while only slightly compromising MSE. + +![01963eae-e418-7e07-ba44-3a16b02b765f_2_925_1210_723_272_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_2_925_1210_723_272_0.jpg) + +Figure 5: SSIM, MSE, AMSE, (1-SSIM)*AMSE $\left( {a = {0.2}}\right)$ [20w] + +As a loss function, SSIM differed markedly from MSE in this application, working much like an edge detector with little respect for tone matching. At small sizes, SSIM can yield a linear, sketch-like style. + +At small sizes, some of the best results come from blending SSIM with AMSE. The blended loss function performs well at sizes of around 20 characters (Figure 5); we use the notation [20w] to indicate a result generated at 20 characters in width with no overstrike, and $\left\lbrack {{20}\mathrm{w},2\mathrm{p}}\right\rbrack$ to indicate 20 characters in width with 2 overstrike passes. SSIM slightly emphasizes lines while leaving some lighter areas bare of ink, effectively increasing contrast.
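The loss variants compared in this section can be sketched with scikit-image, which the implementation uses (Sect. 4.6). The amse helper follows the Sect. 3.3 description, and blended_loss mirrors the $(1 - \mathrm{SSIM}) \times \mathrm{AMSE}$ combination; the function names and exact signatures are our assumptions, not the paper's code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def amse(rendered, target, a=0.2):
    """Asymmetric MSE (Sect. 3.3): excess ink is scaled by (1 + a)."""
    err = rendered - target
    err = np.where(err > 0, (1.0 + a) * err, err)
    return float(np.mean(err ** 2))

def blended_loss(rendered, target, a=0.2):
    """(1 - SSIM) * AMSE: the SSIM factor rewards structural (shape)
    agreement while AMSE keeps tones honest; lower is better.
    Images are floats in [0, 1], hence data_range=1.0."""
    ssim = structural_similarity(rendered, target, data_range=1.0)
    return (1.0 - ssim) * amse(rendered, target, a)
```

Either factor alone reproduces the failure modes described above: pure SSIM behaves like an edge detector, while pure MSE/AMSE favors tone over shape, so the product is a compromise.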
+ +### 4.2 Search technique + +#### 4.2.1 Greedy search sensitivity to initial state + +By default, we begin the search from an initial blank page where every character selection is the blank character (spacebar). To evaluate the effectiveness of greedy search, we performed multiple runs of the algorithm from different initial states, including random states as well as single-pass approximations. The exhibited sensitivity to initial state (shown in Figure 8) indicates that the local optimum reached by a greedy search may still be some distance from the global optimum. + +#### 4.2.2 Simulated Annealing (SA) + +![01963eae-e418-7e07-ba44-3a16b02b765f_3_175_408_646_485_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_3_175_408_646_485_0.jpg) + +Figure 6: Effect of search technique on score and convergence time + +![01963eae-e418-7e07-ba44-3a16b02b765f_3_158_1073_695_505_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_3_158_1073_695_505_0.jpg) + +Figure 7: Effect of different initial states on score + +Despite taking longer to converge (Figure 6), SA consistently finds a lower-error state than greedy search, and is less affected by initial state (Figure 7). The results using SA from different random states (Figure 8) are hard to distinguish. However, all SA results captured the whites of the eyes while some greedy results blurred them out. + +#### 4.2.3 Selection order + +When using a greedy search, the order in which selections are made (which character position, or dimension, is evaluated next) has a predictable influence on the output. This is especially evident when the search is initiated from a blank state. The first selections will tend to be overly dark, as the single character under evaluation takes full responsibility for tone matching over an area that could ultimately include 9 overlapping characters; see Figure 4. The greedy search tends to become trapped in local optima with overly dark initial selections. 
The characters that are selected first contribute disproportionately to the result. Our intuition was that this would lead to a suboptimal result. + +![01963eae-e418-7e07-ba44-3a16b02b765f_3_925_142_723_585_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_3_925_142_723_585_0.jpg) + +Figure 8: Greedy and simulated annealing results from random initial states [20w] + +We experimented with priority-ordered selection - where the priority of a position is determined by the maximal increase in score resulting from selecting the best character at that position - but found that it does not improve the results when simulated annealing is used, and comes at a high computational cost. While a linear selection order resulted in noticeable artifacts, a random selection order was sufficient to avoid these without incurring high computational cost and hence was used for all our results. + +### 4.3 Pre-processing the target image + +Since many source images are colour, the black and white conversion method has a substantial effect on the result (Figure 9), as does edge enhancement and adjustments to the contrast and brightness [11, 13]. We considered these processes out of scope, but have worked to optimize one critical aspect of image pre-processing unique to this application: the alignment of the target image to the typewriter's "character grid". + +![01963eae-e418-7e07-ba44-3a16b02b765f_3_926_1502_720_530_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_3_926_1502_720_530_0.jpg) + +Figure 9: RGB average vs. STRESS algorithm [40w] + +#### 4.3.1 Optimized cropping + +Figure 10 shows two results for the same checkered test pattern, with and without cropping (scaling and shifting the image within the same fixed-size container). The results after cropping via the auto-alignment routine are much sharper. In realistic scenarios - rendering a portrait, say - the alignment still has a strong effect. 
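The auto-alignment routine can be sketched as a brute-force sweep. The helpers apply_crop and quick_pass stand in for the real cropping and single-pass approximation and are assumptions of ours, as is the particular parameter grid; it merely happens to enumerate 64 candidates, matching the count in Sect. 3.4.

```python
import numpy as np
from itertools import product
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def optimize_crop(target, apply_crop, quick_pass,
                  shifts=(0, 1, 2, 3), scales=(1.0, 0.95, 0.9, 0.85)):
    """Score every (dx, dy, scale) candidate crop by running a cheap
    single-pass approximation and measuring SSIM + PSNR against the
    cropped target; return the best parameters (Sect. 3.4).
    4 x-shifts * 4 y-shifts * 4 scales = 64 candidates."""
    best_params, best_score = None, -np.inf
    for dx, dy, scale in product(shifts, shifts, scales):
        cropped = apply_crop(target, dx, dy, scale)
        preview = quick_pass(cropped)             # one optimization pass
        score = (structural_similarity(preview, cropped, data_range=1.0)
                 + peak_signal_noise_ratio(cropped, preview, data_range=1.0))
        if score > best_score:
            best_params, best_score = (dx, dy, scale), score
    return best_params
```

The winning parameters are then applied to the input once, before the full iterative optimization begins, so the expensive search always runs on the best-aligned target.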
+ +![01963eae-e418-7e07-ba44-3a16b02b765f_4_326_371_370_193_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_4_326_371_370_193_0.jpg) + +Figure 10: Effect of optimized cropping on test pattern. Left: no crop; Right: optimized crop [12w] + +In Figure 11, the target image has been slightly scaled and shifted to better align with the "character grid" of the typed result. Note the improvement in shape resolution of the eyes, glasses, hair and nose. At small image sizes, alignment is one of the dominant factors in the quality of the result. + +![01963eae-e418-7e07-ba44-3a16b02b765f_4_309_850_405_280_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_4_309_850_405_280_0.jpg) + +Figure 11: Effect of optimized cropping on portrait [20w] + +### 4.4 Number of "overstrike" characters at a single position and number of total characters + +Despite having partly overlapping typed characters, we have so far discussed selecting only one character at each location. Making use of the backspace key, the typist can "overstrike" multiple characters at the same position within the same layer. This increases the tonal range, especially allowing darker tones, and can to some extent increase shape matching. + +The algorithm incorporates the backspace key by using the simulated result produced by one run of the program as a background layer during compositing operations in a second pass. The most obvious effect is an expanded tonal range in darker regions of the image, along with increased gradient smoothness and shape matching (Figure 12, Figure 13). + +![01963eae-e418-7e07-ba44-3a16b02b765f_4_223_1684_571_235_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_4_223_1684_571_235_0.jpg) + +Figure 12: Top: Simulated typed gradient [30w, 3p]. Bottom: Input gradient. + +In exploring selection order, we learned that it was preferable to distribute ink between several layers, rather than concentrating ink in a few.
By default, adding a second overstrike pass will contribute little ink, as the majority of the ink has already been placed in the first pass. With the AMSE metric, we can increase the penalty for wrongly placed ink (asymmetry) on the first overstrike pass and reduce it on the second pass. A higher asymmetry factor encourages the selection of characters which are an especially good match in some areas, but have too little ink in others, over darker characters which do not match the shape as closely. The second pass, with a lower asymmetry, can fill in the gaps. + +![01963eae-e418-7e07-ba44-3a16b02b765f_4_968_159_633_315_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_4_968_159_633_315_0.jpg) + +Figure 13: Distributing ink between multiple overstrike passes [19w] + +Unsurprisingly, increasing character resolution produces better results. Figure 14 shows two simulations, each with three overstrike passes, at 20 characters wide and 30 characters wide. The higher resolution yielded a marked improvement in detail reproduction. + +![01963eae-e418-7e07-ba44-3a16b02b765f_4_926_976_719_328_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_4_926_976_719_328_0.jpg) + +Figure 14: Left: short row length [20w,3p], Right: longer row length $\left\lbrack {{30}\mathrm{w},3\mathrm{p}}\right\rbrack$ + +Keeping to a fixed maximum number of characters, generally the best results come from dividing the characters between two or three overstrike passes. Darker images especially benefit from overstrike, as seen in Figure 15. The increased tonal range in the shadows more than compensates for the reduction in size, up to around four overstrike passes. With a lighter image, the trade-off favors a larger size with two layers, as overstrike is less needed in light areas.
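The multi-pass scheme just described can be sketched as follows. The compositing rule (translucent ink multiplying the remaining paper brightness) and the run_pass helper are our assumptions; the text only specifies that the previous pass's simulated result serves as a background and that the AMSE asymmetry drops between passes.

```python
import numpy as np

def composite(background, layer):
    """Stack ink densities: each strike darkens whatever paper
    brightness remains, so overlap approaches but never exceeds
    full black. (Translucent-ink model; an assumption of ours.)"""
    return 1.0 - (1.0 - background) * (1.0 - layer)

def overstrike(target, run_pass, asymmetries=(1.5, 0.1)):
    """One optimization run per overstrike pass: feed the composite of
    earlier passes in as a fixed background, and lower the AMSE
    asymmetry factor each pass so later passes fill in the gaps.
    run_pass(target, background, a) is an assumed helper wrapping a
    full optimization pass."""
    result = np.zeros_like(target)                # blank page
    for a in asymmetries:
        layer = run_pass(target, result, a)
        result = composite(result, layer)
    return result
```

Starting with a high asymmetry and relaxing it mirrors the observation above: the first pass under-inks where shapes do not match well, and the cheaper-to-please second pass tops up the tone.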
+ +![01963eae-e418-7e07-ba44-3a16b02b765f_4_926_1659_719_252_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_4_926_1659_719_252_0.jpg) + +Figure 15: Left to right: [45w,1p], [32w,2p], [23w,4p], [16w,8p] + +### 4.5 Character set + +The choice of typewriter greatly influences the outcome, as it forms the character set that is available to the algorithm. + +Figure 16: Characters from Smith-Corona typewriter with dry ribbon, typed hard and soft + +With most mechanical typewriters, the typist can vary their strike force to produce lighter and darker variations of the same character. This has dramatic results: including lighter variations greatly increases tonal range in the midtones. + +Figure 17 shows the results with different typed character sets and the SF Mono computer font. The leftmost images, created with monotone character sets, exhibit limited tonal range and draw more attention to the textual characters. In comparing these two images alone, the computer font does not reproduce the image as effectively. The uneven distribution of ink on the typewriter characters aids in shape matching, compared to the perfectly flat distribution across the characters of a computer font. Critically, even the darkest typewriter characters are less than fully black, which yields a greater tonal range when characters are allowed to overlap. + +The center and right columns in Figure 17 exhibit progressively better tonal range as the character set includes a better distribution of greys. Even if we limit ourselves to two intensities, results improve by having medium and light intensities, rather than dark and medium. There is a trade-off here: overlapping lighter characters can produce a smoother tonal range, at the expense of requiring more overlaps to produce full black.
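That trade-off can be made concrete under a translucent-ink assumption (ours, not the paper's): striking a character of ink density d at the same spot k times leaves paper brightness $(1-d)^k$.

```python
def overlap_tone(density, strikes):
    """Tone reached after `strikes` overlapping impressions of a
    character with ink density `density`, assuming translucent ink:
    each strike darkens only the paper brightness that remains."""
    return 1.0 - (1.0 - density) ** strikes
```

A dark character (density 0.8) jumps from 0.8 to 0.96 after two strikes, leaving coarse tonal steps, while a light character (density 0.3) climbs smoothly through 0.3, 0.51, 0.66, and so on, but needs roughly ten strikes to pass 0.97: smoother tone at the cost of more overlaps.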
+ +![01963eae-e418-7e07-ba44-3a16b02b765f_5_150_1129_720_718_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_5_150_1129_720_718_0.jpg) + +Figure 17: Top: Typed characters, Bottom: SF Mono font; Left to right: 1 tone, 2 tones, 2 lighter tones $\left\lbrack {{30}\mathrm{w},4\mathrm{p}}\right\rbrack$ + +### 4.6 Timing notes + +The algorithm is expensive due to the large number of image compositing and comparison operations required to determine the best character selection for a given position, multiplied by the number of iterations before convergence. Each of these operations is performed at a high resolution (around 1000 pixels total) at 8-bit depth. + +We leaned on Google Colaboratory, a cloud computing service which supplies 2 threads of an Intel Xeon processor at ${2.3}\mathrm{{GHz}}$ , and 13GB of RAM [5]. The software is written in Python 3, making use of the numpy, OpenCV and scikit-image libraries. + +The top left image in Figure 18 is composed of two overstrike passes of 2670 characters each. To converge to a stable state the algorithm visited each character position 25 times. The first pass took 29 minutes to complete; subsequent overstrike passes converge more quickly. Auto-alignment took an additional 10 minutes. In total, the image took approximately one hour to generate. These timings are representative and vary little over different input photographs. + +The time to converge grows faster than linearly with the number of characters: the cost rises from 0.64 seconds per character position at 420 characters to 0.77 seconds at 2670 characters and 0.94 seconds at 6392 characters. + +## 5 EVALUATION + +The metric used for comparison between the target image and the simulated typed result has, subjectively, the strongest effect on the results. Best results were obtained from an asymmetric variation on MSE which disproportionately penalizes wrongly placed ink. Blending this metric with SSIM can sharpen results, especially when the typed image is small.
+ +Greedy search will necessarily converge to a local optimum in which each dimension is individually optimal (no single character swap can improve the score). We can get closer to the overall global optimum by using simulated annealing. + +Alignment of the target image with the typewriter's "character grid" can have a strong effect on shape matching. Alignment can be optimized by measuring the effect of translation and scaling on the target image, selecting the best parameters for each. This is the primary factor when the typed image is small. + +Allowing multiple characters to be typed at a single position can substantially increase tonal range and shape matching, especially when the character set is monotone (with no variation in strike force). Allowing for variation in strike force is the largest factor in obtaining a good tonal range in the midtones and highlights. + +### 5.1 Settings for best results + +To best answer the question posed by the title of this paper - can 1000 words paint a picture - we limit ourselves to typing only 5790 characters. This budget is 1000 words at the average English word length (4.79 characters) plus one space per word [16]. For a 4:3 portrait as the target image, this allows a width of 25 characters, with 4 overlapping staggered layers and up to 3 typed characters at any single position, while remaining within our budget of 5790 characters. + +The settings described below result in the highest combined SSIM + PSNR. To the authors' eyes, this metric best reflects overall quality as explored in Section 4.1. + +- Run auto-alignment for the maximal row length within the character budget. Choose the crop parameters that maximize SSIM + PSNR. + +- Use a character set which includes all typewriter characters, typed both hard and soft.
+ +- Search with simulated annealing, reheating to 10% each time the temperature reaches 0. + +- Start from a random state and randomize the selection order. + +- Run the algorithm 2 times (2 overstrike passes). Use AMSE, with asymmetry settings a = 1.5 on the first pass and a = 0.1 on the second. + +![01963eae-e418-7e07-ba44-3a16b02b765f_6_148_147_725_1061_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_6_148_147_725_1061_0.jpg) + +Figure 18: Gallery of results with consistent settings [5790 characters, 2p]. Zoom to see details. + +Figure 18 shows 8 images produced with these settings: maximizing our character budget of 5790 characters and distributing the characters across two overstrike passes. In general, the simulated typed results communicate the content and mood of the input photographs across a range of subject matter. + +To represent noisy textures, the algorithm selects a variety of overlapping characters, demonstrated in the thin tree branches, or the windows of the Flatiron Building (top left). In the absence of texture, characters with more even ink distribution - such as @ or # - are selected. This applies to both light and dark regions, with the darkness of the character and number of overlaps determining overall tone. This is evident in the skin tones of the woman with headphones and in the sky of several images, as well as the gradient shown in Figure 12. + +Employing overstrike achieves good tonal range in the shadows, as seen in the handles of the wrenches and the windows of darker buildings, and a deep black level, demonstrated in the landscape and flowers results. Further emphasizing the value of overstrike, light shapes against a dark background are especially clear, like the round nail suspending the rightmost wrench. While the fixed character grid is set at 1/2-character resolution, some details in the output demonstrate a spatial resolution finer than 1/16 of a character.
+ +In general, individual characters are difficult to discern except in the lightest areas, where characters do not overlap - e.g., the sky in the cityscape. The algorithm selects characters which nest, hiding their textual origins, unless the character shapes are a particularly good match - as in the underscores used for the decks of the cruise ships. + +![01963eae-e418-7e07-ba44-3a16b02b765f_6_926_146_720_926_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_6_926_146_720_926_0.jpg) + +Figure 19: Migrant mother, optimized settings [68w, 3p] + +Details with low local contrast, such as the face of the woman with headphones, display coarser spatial resolution than areas of high local contrast, such as the edges of buildings. This is due to relying on overlapping characters to produce sharp edges. When additional overlap would cause a poorer tonal match, shape resolution suffers. This explains why metrics preferring shape matching result in higher local contrast. + +However, relying on MSE suits a high-fidelity approach. Accurate tone matching makes the shadows in the cityscape and waterfront images easy to discern; the three-dimensional nature of the buildings is clearly conveyed. Employing a shape-matching metric such as SSIM obscures this. + +Increasing the size of the typed image improves quality. In Figure 19, the output size of 5" x 8" (68 characters wide) is sufficient to reproduce an image of multiple people with excellent quality, conveying the emotional impact of the input photograph. + +### 5.2 Physical typed results + +Physical reproducibility on a mechanical typewriter is a key feature of this work. Figure 20 shows the result of following the generated instructions to type a 70 cm² image over 6 hours. + +Typing the results revealed difficulties in replicating the rendered results.
The physical typed result in Figure 21 shows false textures and loss of blackness in the body of the horses, especially in the bottom right quadrant. Even slight misalignment of the offset layers creates gaps between tightly packed characters - especially noticeable when thin characters like ½ overlap within areas of uniform tone. This is an example of over-optimization: while packing thin characters can improve outcomes in the simulation, in practice the physical result would be improved by favoring wider, overlapping characters in smooth-textured regions. + +![01963eae-e418-7e07-ba44-3a16b02b765f_7_152_147_716_905_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_7_152_147_716_905_0.jpg) + +Figure 20: Physical typed result, in typewriter [40w, 2p] + +Tonally, the gradient produced in the rendered result is reflected in the physical typed result, albeit with a compressed dynamic range. While this is partly caused by alignment errors, another factor is the incongruity between the subtractive color space of typed ink and the additive color space of the simulation. The simulation uses multiplicative compositing, as a best guess at how the ink would behave. In reality, the combination of ink and paper imposes a floor on the black level, while inconsistent strike force creates a crude step between the lightest reproducible tones and the white paper background. The imperfect tone curve of the scanner used to capture the character set also limits the accuracy of reproducing light tones - the characters appear lighter in the rendered result, shown in Figure 22. + +![01963eae-e418-7e07-ba44-3a16b02b765f_7_152_1667_718_395_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_7_152_1667_718_395_0.jpg) + +Figure 21: Horses: Rendered vs. physical typed results [40w, 2p] + +![01963eae-e418-7e07-ba44-3a16b02b765f_7_923_146_723_365_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_7_923_146_723_365_0.jpg) + +Figure 22: Details of horses.
Top: Rendered, Bottom: Physical + +Figure 22 shows two details from Figure 21. The bottom left image - a detail of the top-most rows - shows excellent alignment of the characters. Tone matching is accurate overall, but the variation in strike force is apparent on the B and % characters. In contrast, the right image - a detail of lower rows - demonstrates substantial misalignment: note how the ½ and y characters are shifted right and down relative to the b. While the layers were properly aligned at the top of the image, moving the page to create an offset layer can introduce unwanted rotation. In this case, the page was rotated counter-clockwise on the layer containing ½ and y, relative to the layer containing the leftmost b characters. This blurs the edge delineating the fence post and the body of the pony, and creates gaps in what should be a uniform texture, visible on the right. + +While the typed result of the horses has few misplaced characters (0.15%), achieving such a result is difficult. The left image in Figure 23 shows the most apparent typing error: adding an extra character within a line. This image also reveals the difficulty of typing at more than two dynamic levels; hard, medium and soft strikes were attempted, succeeding only in a smoother gradient in the beard. The right image used our current approach of only hard and soft strikes, limiting the tonal resolution but improving the consistency of the background tone, thus avoiding unintended texture. + +![01963eae-e418-7e07-ba44-3a16b02b765f_7_1030_1294_507_342_0.jpg](images/01963eae-e418-7e07-ba44-3a16b02b765f_7_1030_1294_507_342_0.jpg) + +Figure 23: Two attempts at typing the same image [20w, 2p] + +### 5.3 Limitations + +#### 5.3.1 Physical limitations + +The algorithm is sufficiently optimized that human error in typing is a primary limitation. It is especially difficult to apply a precise strike force.
While the addition of softly typed characters to the character set is crucial for producing smooth midtones and highlights without employing dither, physical limitations make reliable reproduction of the lightest tones difficult. + +The typewriter itself is also a source of error. The distribution of ink on the ribbon is not perfectly uniform, so the typed characters which were scanned to create the character set may not be exactly reproduced when typed later. The typewriter may drift out of alignment due to variation in the amount the platen is moved or rotated at each keystroke. + +These issues can be ameliorated somewhat by improved preparation of the character set. Multiple typed copies of each character could be averaged both in intensity and alignment. This would reduce tone error due to variation in strike force and ribbon inkiness, and increase placement consistency between the simulated and physical typed output. + +#### 5.3.2 Algorithmic limitations + +The most notable limitation of this method is that the optimization is quite time-consuming. We were primarily concerned with image quality; future work could focus on acceleration. + +Halftoning-type patterns can emerge which falsely delineate tonally consistent areas, creating spurious shapes. Repeated characters create false textures. Both of these can be addressed by penalizing repeated adjacent characters and/or through dithering. For large images, dither would also subjectively improve tone matching over the highlight areas [8, 12]. + +Our implementation of "overstrike" is pragmatic: each pass is computed in sequence with no backtracking, so selections made in the first pass are never revisited. For example, the two slash characters \ and / typed in the same position might provide a better shape match than the typewriter's serif X character, but this might only be realized if both "overstrike layers" are considered at once.
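One way to realize the suggested penalty on repeated adjacent characters is a small additive term in the loss. The sketch below is hypothetical: the function name, the 2D grid of current charIds, and the `weight` parameter are our own assumptions, not part of the paper's implementation.

```python
def repetition_penalty(grid, row, col, char_id, weight=0.05):
    """Penalty added to a candidate's error when it would repeat any of its
    4-connected neighbours in the same layer. `grid` is a 2D list of the
    currently selected charIds; `weight` is an assumed tuning parameter."""
    neighbours = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    repeats = sum(1 for r, c in neighbours
                  if 0 <= r < len(grid) and 0 <= c < len(grid[0])
                  and grid[r][c] == char_id)
    return weight * repeats
```

During selection, this term would simply be added to the candidate's AMSE before comparing candidates, discouraging runs of identical characters in flat regions.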
+ +## 6 CONCLUSIONS AND FUTURE WORK + +The typewriter, as an analog mechanical device, possesses degrees of freedom not readily available to digital ASCII art: multiple characters can be typed at a single grid location; typing is not restricted to a grid; and characters can be typed with different levels of force. Our method is able to produce high quality output, both physical (by manually typing the selected characters) and digital (displaying the simulated typed result). + +We used four overlapping layers of characters as a compromise balancing mechanical reproducibility (can the image be physically typed?) and image quality (does it faithfully represent the input?). We were able to obtain good results over a variety of subject matter under these conditions; dark tones and fine details are handled with overlapping and overstriking, while application of lighter strike force helps increase fidelity in light areas of the image. + +We suggest a few directions for future work, both in modifying details of the existing algorithm and in extending automatic typewriter art to handle additional degrees of freedom. + +As the choice of error metric has a strong effect on the result, other metrics should be explored. To improve shape matching, we could employ the Alignment Insensitive Shape Similarity metric (AISS) [18], created for ASCII art. To produce results with good tone matching, it would have to be used in combination with other metrics. Machine optimization of the metric blending function's parameters could also improve results. + +Freehand typewriter artists such as Paul Smith take advantage of rotating the page: characters can be typed at any angle [1]. We would like to investigate allowing some layers to be typed with the paper rotated. + +Finally, this work has focused on achieving fidelity by optimizing the algorithm more than by optimizing physical reproduction.
The physical results could be improved by exploring human factors or robotics in future work. + +## REFERENCES + +[1] The Paul Smith Foundation home page. + +[2] O. Akiyama. ASCII art synthesis with convolutional networks. In NIPS Workshop on Machine Learning for Creativity and Design, 2017. + +[3] A. H. Bermano, T. Funkhouser, and S. Rusinkiewicz. State of the Art in Methods and Representations for Fabrication-Aware Design. Computer Graphics Forum, 2017. doi: 10.1111/cgf.13146 + +[4] B. Neill. Bob Neill's Second Book of Typewriter Art. 1984. + +[5] T. Carneiro, R. V. Medeiros Da Nóbrega, T. Nepomuceno, G. Bian, V. H. C. De Albuquerque, and P. P. R. Filho. Performance analysis of Google Colaboratory as a tool for accelerating deep learning applications. IEEE Access, 6:61677-61685, 2018. + +[6] K. Dalal, A. W. Klein, Y. Liu, and K. Smith. A spectral approach to NPR packing. In Proceedings of the 4th International Symposium on Non-Photorealistic Animation and Rendering, NPAR '06, pp. 71-78. Association for Computing Machinery, New York, NY, USA, 2006. + +[7] O. Deussen, S. Hiller, C. Van Overveld, and T. Strothotte. Floating points: A method for computing stipple drawings. Computer Graphics Forum, 19(3):41-50, 2000. + +[8] R. W. Floyd and L. Steinberg. An adaptive algorithm for spatial greyscale. Proceedings of the Society for Information Display, 17(2):75-77, 1976. + +[9] D. Henderson, S. H. Jacobson, and A. W. Johnson. The theory and practice of simulated annealing. In F. Glover and G. A. Kochenberger, eds., Handbook of Metaheuristics, International Series in Operations Research & Management Science, pp. 287-319. Springer US, Boston, MA, 2003. doi: 10.1007/0-306-48056-5_10 + +[10] S. Hiller, H. Hellwig, and O. Deussen. Beyond stippling - methods for distributing objects on the plane. Computer Graphics Forum, 22(3):515-522, 2003. doi: 10.1111/1467-8659.00699 + +[11] Ø. Kolås, I. Farup, and A. Rizzi. STRESS: A framework for spatial color algorithms.
Journal of Imaging Science and Technology, 55:040503, 2011. + +[12] H. Li and D. Mould. Contrast-aware halftoning. In Computer Graphics Forum, vol. 29, pp. 273-280. Wiley Online Library, 2010. + +[13] Q. Liu, S. Li, J. Xiong, and B. Qin. WpmDecolor: weighted projection maximum solver for contrast-preserving decolorization. The Visual Computer, 35(2):205-221, Feb. 2019. doi: 10.1007/s00371-017-1464-8 + +[14] N. Markuš, M. Fratarcangeli, I. S. Pandžić, and J. Ahlberg. Fast rendering of image mosaics and ASCII art. Computer Graphics Forum, 34(6):251-261, 2015. + +[15] J. Nelson. Typewriter mystery games. Artistic Typing Headquarters, Baltimore, USA, 1979. + +[16] P. Norvig. English letter frequency counts: Mayzner revisited or ETAOIN SRHLDCU. + +[17] W.-M. Pang, Y. Qu, T.-T. Wong, D. Cohen-Or, and P.-A. Heng. Structure-aware halftoning. In ACM SIGGRAPH 2008 Papers, SIGGRAPH '08. Association for Computing Machinery, New York, NY, USA, 2008. + +[18] X. Xu, L. Zhang, and T.-T. Wong. Structure-based ASCII art. ACM Transactions on Graphics, 29(4):52:1-52:9, July 2010. + +[19] X. Xu, L. Zhong, M. Xie, X. Liu, J. Qin, and T. Wong. ASCII art synthesis from natural photographs. IEEE Transactions on Visualization and Computer Graphics, 23(8):1910-1923, Aug. 2017. doi: 10.1109/TVCG.2016.2569084 + +[20] X. Xu, L. Zhong, M. Xie, J. Qin, Y. Chen, Q. Jin, T.-T. Wong, and G. Han. Texture-aware ASCII art synthesis with proportional fonts. In Proceedings of the Workshop on Non-Photorealistic Animation and Rendering, NPAR '15, pp. 183-193. Eurographics Association, Istanbul, Turkey, June 2015. + +[21] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, Apr. 2004. doi: 10.1109/TIP.2003.819861 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/IC54qQMBFJj/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/IC54qQMBFJj/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..14f2300105f9b1ac0a192d88606b488aa5cb7986 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/IC54qQMBFJj/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,385 @@ +§ ALGORITHMIC TYPEWRITER ART: CAN 1000 WORDS PAINT A PICTURE? + +Roy G. Biv ${}^{ * }$ + +Starbucks Research + +Ed Grimley ${}^{ \dagger }$ + +Grimley Widgets, Inc. + +Martha Stewart ${}^{ \ddagger }$ + +Martha Stewart Enterprises + +Microsoft Research + + < g r a p h i c s > + +Figure 1: The left side shows 1 layer of typed characters. Moving right, overlapping layers are added, up to 12 layers in 4 positions. + +§ ABSTRACT + +We develop an optimization-based algorithm for converting input photographs into typewriter art. Taking advantage of the typist's ability to move the paper in the typewriter, the optimization algorithm selects characters for four overlapping, staggered layers of type. By typing the characters as instructed, the typist can reproduce the image on the typewriter. + +Compared to text-mode ASCII art, allowing characters to overlap greatly increases tonal range and spatial resolution, at the expense of exponentially increasing the search space. We use a simulated annealing search to find an approximate solution in this high-dimensional search space. Considering only one dimension at a time, we measure the effect of changing a single character in the simulated typed result, repeatedly iterating over all the characters composing the image.
+ +Both simulated and physical typed results have a high degree of detail, while still being clearly recognizable as type art. The accuracy of the physical typed result is primarily limited by human error and the mechanics of the typewriter. + +Index Terms: Computing methodologies: Non-photorealistic rendering; Image processing + +§ 1 INTRODUCTION + +Typewriter art involves producing images with typewritten text. A modern computer graphics practitioner is likely familiar with ASCII art, where an image is formed out of text characters on the screen. Typewriter art offers additional flexibility, insofar as characters are not restricted to a regular grid. Multiple characters can be typed at a single location, a practice called overstriking, and rows and columns of characters can partially overlap with previously typed rows and columns. Further, keys on the typewriter can be struck with varying levels of force, transferring greater or lesser quantities of ink from the typewriter ribbon. Varying the strike force produces a much smoother tonal range than monochrome ASCII art, which is especially important in lighter-tone regions of the image. Overstriking and overlapping improve outcomes in the darker regions of the image as well as improving detail. + +In this paper, we present an algorithm for converting an input image into typewriter art, exploiting overstriking, overlapping, and strike force to add detail and to improve tone matching. We can directly render an output image, or produce a set of instructions that can be typed to create a physical realization of the image, in keeping with recent trends in computer graphics towards assisting computational fabrication [3]. Of course, we can create digital versions of the images, and by employing a computer font as input, bypass the typewriter entirely should that be desired by the user.
+ +Manually crafted typewriter art can be extremely detailed, and historically was often created using primarily the period key or other small, geometric shapes. Without restriction to a regular grid, overlapping these small characters yields both fine spatial resolution and a perceptually wide tonal range, with a texture reminiscent of pointillism or stippling. + +Considerable artistic skill was needed to craft these works. However, the typewriter also allowed users with less skill to produce images. "Typewriter mystery games" [15] provided typists with instructions that, when carried out, produced an image; e.g., see Figure 2 [4]. These instructions exploited the backspace key to enable overstriking to create much darker shades. + +In our method, we produce instructions for four separate layers, each offset by half a space horizontally, vertically, or both; formally, the layers have their respective origins at (0,0), (0.5,0), (0,0.5), and (0.5,0.5). Together, over-typing and half-spacing provide nearly full ink coverage. Figure 1 shows a rendered result, where superimposed layers of text cooperate to form a detailed image. + +Our algorithm, outlined in Figure 3, takes as input a target image to be reproduced on the typewriter and a scan of the typewriter's character set. The program selects the characters to be typed for each of four layers, which then overlap to produce the image. Within each layer, the placement of characters is limited to a grid. The algorithm optimizes by measuring the effect of changing a single character in the simulated typed result, repeatedly iterating over all character positions until no change to any single character can improve the outcome. The selected characters for each layer form the typist's instructions. By following these instructions, the image can be reproduced on the typewriter without demanding any artistic skill.
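The four half-space layer origins can be mapped to pixel coordinates with a small helper. This is only an illustrative sketch: the helper name, the rounding, and the cell_w × cell_h cell size parameters are our own assumptions.

```python
# The four staggered layer origins from the text, in units of one character cell.
LAYER_ORIGINS = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]

def char_top_left(layer, row, col, cell_w, cell_h):
    """Pixel position of the top-left corner of the character at (row, col)
    in the given layer, assuming a character cell of cell_w x cell_h pixels."""
    ox, oy = LAYER_ORIGINS[layer]
    return (round((col + ox) * cell_w), round((row + oy) * cell_h))
```

For example, with a 10 × 20 pixel cell, layer 3 at row 0, column 0 lands half a cell right and down from layer 0, which is what lets up to nine characters overlap a single cell.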
+ +*e-mail: roy.g.biv@aol.com + +${}^{ \dagger }$ e-mail: ed.grimley@aol.com + +${}^{ \ddagger }$ e-mail: martha.stewart@marthastewart.com + + < g r a p h i c s > + +Figure 2: Instructions from Bob Neill's Book of Typewriter Art + +With respect to both tone reproduction and shape matching, this technique produces a better approximation of the input image than non-overlapping ASCII art with a similar number of characters. We posed the question "Can 1000 words paint a picture?"; we find that 5790 characters suffice for a fair facsimile of a portrait, though more resolution is required for complex scenes. + + < g r a p h i c s > + +Figure 3: Simplified process + +This paper makes the following contributions: + + * We exploit overlapping characters to increase the dynamic range and expressivity of text art. + + * We present an optimization algorithm for automatic creation of typewriter art, paying special attention to physical reproducibility on a mechanical typewriter. + + * We propose asymmetric mean squared error, in which error from too little ink is weighted less than error from too much ink. Subjectively, optimizing with asymmetric MSE produced the best results. + +§ 2 BACKGROUND + +Much recent work in ASCII art has focused on improving shape matching, which can be traced back to the introduction of the Structural Similarity metric (SSIM) [21]. Another metric, created specifically for ASCII art, is the Alignment Insensitive Shape Similarity metric (AISS) [18]; Xu et al. deform the target image to better match the available character shapes at a given position. Deforming the input image poses issues for the high-fidelity approach we pursue, but the idea of optimizing the alignment of the target image proves useful. + +Conventionally, ASCII art used monospaced fonts. Xu et al. [19, 20] achieved superior results using proportional-width fonts.
Although this approach provides flexibility in the columns of type, the rows of type are still fixed; typewriter art allows both rows and columns to vary. + +Some recent approaches to generating ASCII art involve machine learning with no explicit metric. Akiyama employed a convolutional neural network, trained on manually created structural ASCII art, to produce compelling results for this style [2]. Markuš et al. use a decision tree to approximate SSIM comparison for a particular character set, yielding a good approximation at high speed [14]. With our interest in overlapping characters, these trained model approaches pose the issue that only single characters are stored in the model, so overlapping composites of those characters would not be compared with the target image. + +In typewriter art, the characters can in principle be freely positioned, which brings to mind stippling [7]. Computer-generated stippling usually seeks non-overlapping stipples, unlike our case which encourages overlaps. Also, while stippling need not be restricted to points [6, 10], computer-generated stippling methods do not exploit choice of object shape to better approximate an input image, while this is a fundamental aspect of ASCII art and typewriter art. + +Typewriter art can be seen as a variant of halftoning [8], where a continuous-tone input image is represented with a small number of discrete tones. Our output images are quantized in space (discrete character placement positions) and tone (individual character selection) with the quantization partly hidden by overlapping characters. We employ optimization directly informed by an error metric, similar to the method of Pang et al. [17]. + +§ 3 ALGORITHM + +The aim of this project was to marry the mechanical reproducibility of the "typewriter mystery game" with the improved spatial resolution of freehand typewriter art by employing four overlapping layers, offset both vertically and horizontally.
We developed an algorithm which considers the effect of overlapping characters. + + < g r a p h i c s > + +Figure 4: Nine offset characters overlap within one character position + +The algorithm takes as input a target image to be reproduced, and an image of a set of characters (which can be produced by scanning typed characters or from a computer font). The "character set" is chopped into a set of images, one for each character plus one blank character. + +§ 3.1 SEARCH TECHNIQUE + +We take an iterative approach, choosing a character for a single position at a time. Each selection answers the question "What character, placed in this position, will best complement the already-chosen characters to most closely match the target image?" + +With greedy search, we find the best character for a particular position independent of the rest. We then do the same for all character positions in the typed image. This completes a single optimization pass. + +Algorithm 1: Simplified optimization algorithm + +Create list of character positions [charPos]: {layer, row, col, charId}; + +Initialize each charId in [charPos] randomly; + +for i = [1..maxIterations] do + +for pos in [charPos] do + +Create list of character scores [charScores]: {charId, score}; + +for char in [charScores] do + +char.score = compare(simulated typed output, target image at this position); + +end + +pos.charId = highest scoring char in [charScores] for this pos; + +end + +end + +Simulate composite typed output; + +Simulate individual typeable layers; + +Due to the use of overlapping layers, past selections may later become suboptimal: when any character overlapping a position changes, that position must be re-evaluated. For example, if all neighbours of a certain position changed from a dark character to a light one, the selection for that position should possibly be changed to a darker one to compensate.
Moreover, because all the characters are connected through overlap, changing a single selection can trigger a cascade of selection changes spanning the image. + +§ 3.2 STOPPING CONDITION + +A character position is considered stable if none of its neighbours have changed since it was last evaluated; thus, re-evaluating the position would lead to the same selection as before. The search terminates when no further improvement can occur by changing any one character selection: a stable state is reached when a complete optimization pass has occurred with no selection changes. + +It is possible that the algorithm will never reach a stable state, as the cascades of selection changes can create cycles. To ensure termination, we limit the search to 500 selections per character position. In practice, this limit was never reached, with 25 iterations being typical. + +§ 3.3 LOSS FUNCTION + +We relied on previous work in image fidelity measurement to provide the error function used to score character selection. We also employed a variation on mean squared error (MSE): asymmetric mean squared error (AMSE), which multiplies positive error by a factor of 1 + a before squaring. + +A combined loss function (1 - SSIM) × AMSE and maximizing SSIM alone were also explored. + +§ 3.4 OPTIMIZED CROPPING + +Before the iterative search begins, we first generate quick approximations - using a single optimization pass - for each of 64 slightly different crops of the target image. The crop parameters (translation and scale) that maximize SSIM + PSNR are applied to the input before iterative optimization. + +§ 3.5 SIMULATED ANNEALING + +A greedy best-first search evaluates every position along a single dimension, selecting the character which results in the highest similarity to the target image. However, there is no guarantee that this is optimal in other dimensions.
The greedy search is quick to converge to a local optimum, where no single character swap will increase the similarity to the target image. + +To combat the lock-in to local optima exhibited by greedy search, we use stochastic simulated annealing (SA) to intelligently expand the search space. SA is modelled after the physical process of heating and cooling metal to reduce defects [9]. At any selection, it is more probable that a high-scoring candidate will be chosen than a lower-scoring one. The width of this probability distribution is decreased over iterations, as the "temperature" is reduced by a fixed amount after each optimization pass. When the temperature reaches 0, it "reheats" to a percentage of the initial temperature. + +Each time the algorithm visits a character location, it evaluates the candidate characters in a random order. If selecting the character increases the similarity score, it is chosen. A lower-scoring character may also be selected, with probability inversely proportional to the delta between the current score and the score resulting from its selection. In other words, a very bad selection is possible, but a better selection is more probable, and increasingly so at each iteration as the temperature is reduced. + +§ 4 RESULTS + +§ 4.1 LOSS FUNCTION + +As a loss function, standard MSE works very well in this application. MSE matches tone quality well, and the penalty for larger error at a single pixel gives it a sharper, shape-matching quality than mean absolute error. + +AMSE penalizes positive error (too much ink) to a greater degree than negative error (too little ink). This has two advantages: first, erroneously placed ink is subjectively more noticeable; second, an overlapping character can fill in missing ink later, while erroneously placed ink cannot be removed. In practice, AMSE improves shape matching - both subjectively and as measured by SSIM on the resultant image - while only slightly compromising MSE.
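As a concrete illustration, AMSE as described (positive error weighted by 1 + a before squaring) might be sketched as follows. The grayscale encoding (0 = full ink, 1 = blank paper) and the resulting sign convention are our assumptions; the paper specifies only that excess ink is penalized more heavily.

```python
import numpy as np

def amse(target, typed, a=0.2):
    """Asymmetric MSE sketch. Images are grayscale arrays in [0, 1] with
    0 = full ink and 1 = blank paper (an assumed encoding). Error where the
    simulated result is darker than the target (too much ink) is weighted
    by (1 + a) before squaring, so excess ink is penalized more heavily."""
    err = target - typed                      # positive where typed has excess ink
    weight = np.where(err > 0, 1.0 + a, 1.0)  # weight applied before squaring
    return float(np.mean((weight * err) ** 2))
```

With a = 0, this reduces to ordinary MSE; the settings a = 1.5 and a = 0.1 quoted elsewhere in the paper simply scale how strongly excess ink is discouraged.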
+ + < g r a p h i c s > + +Figure 5: SSIM, MSE, AMSE, (1-SSIM)*AMSE (a = 0.2) [20w] + +As a loss function, SSIM differed markedly from MSE in this application, working much like an edge detector with little respect for tone matching. At small sizes, SSIM can yield a linear, sketch-like style. + +At small sizes, some of the best results come from blending SSIM with AMSE. The blended loss function performs well at sizes of around 20 characters (Figure 5); we use the notation [20w] to indicate a result generated at 20 characters in width with no overstrike, and [20w, 2p] to indicate 20 characters in width with 2 overstrike passes. SSIM slightly emphasizes lines while leaving some lighter areas bare of ink, effectively increasing contrast. + +§ 4.2 SEARCH TECHNIQUE + +§ 4.2.1 GREEDY SEARCH SENSITIVITY TO INITIAL STATE + +By default, we begin the search from an initial blank page where every character selection is the blank character (spacebar). To evaluate the effectiveness of greedy search, we performed multiple runs of the algorithm from different initial states, including random states as well as single-pass approximations. The exhibited sensitivity to initial state (shown in Figure 8) indicates that the local optimum reached by a greedy search may still be some distance from the global optimum. + +§ 4.2.2 SIMULATED ANNEALING (SA) + + < g r a p h i c s > + +Figure 6: Effect of search technique on score and convergence time + + < g r a p h i c s > + +Figure 7: Effect of different initial states on score + +Despite taking longer to converge (Figure 6), SA consistently finds a lower-error state than greedy search, and is less affected by initial state (Figure 7). The results using SA from different random states (Figure 8) are hard to distinguish. However, all SA results captured the whites of the eyes, while some greedy results blurred them out.
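The SA acceptance rule and reheating schedule can be sketched as below. The exponential (Metropolis-style) acceptance probability is a standard choice and an assumption on our part, since the text describes the probability only qualitatively; the cooling constant is likewise assumed.

```python
import math
import random

def accept(delta, temperature):
    """Accept a candidate selection? delta = candidate score - current score
    (higher is better). Improvements are always taken; worse selections are
    taken with a probability that shrinks as the deficit grows and as the
    temperature falls (a Metropolis-style rule, assumed here)."""
    if delta >= 0:
        return True
    if temperature <= 0:
        return False
    return random.random() < math.exp(delta / temperature)

def next_temperature(t, cooling=0.05, t0=1.0, reheat=0.10):
    """Cool by a fixed amount after each optimization pass; when the
    temperature reaches 0, reheat to 10% of the initial temperature,
    matching the reheating setting quoted in the paper."""
    t -= cooling
    return reheat * t0 if t <= 0 else t
```

At high temperature almost any candidate can win, which is what lets SA escape the overly dark local optima that trap the greedy search.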
+ +§ 4.2.3 SELECTION ORDER + +When using a greedy search, the order in which selections are made (which character position, or dimension, is evaluated next) has a predictable influence on the output. This is especially evident when the search is initiated from a blank state. The first selections will tend to be overly dark, as the single character under evaluation takes full responsibility for tone matching over an area that could ultimately include 9 overlapping characters; see Figure 4. The greedy search tends to become trapped in local optima with overly dark initial selections. Because the characters that are selected first contribute disproportionately to the result, our intuition was that this would lead to a suboptimal result. + + < g r a p h i c s > + +Figure 8: Greedy and simulated annealing results from random initial states [20w] + +We experimented with priority-ordered selection - where the priority of a position is determined by the maximal increase in score resulting from selecting the best character at that position - but found that it does not improve the results when simulated annealing is used, and comes at a high computational cost. While a linear selection order resulted in noticeable artifacts, a random selection order was sufficient to avoid these without incurring high computational cost, and hence was used for all our results. + +§ 4.3 PRE-PROCESSING THE TARGET IMAGE + +Since many source images are colour, the black-and-white conversion method has a substantial effect on the result (Figure 9), as does edge enhancement and adjustments to the contrast and brightness [11, 13]. We considered these processes out of scope, but have worked to optimize one critical aspect of image pre-processing unique to this application: the alignment of the target image to the typewriter's "character grid". + + < g r a p h i c s > + +Figure 9: RGB average vs.
STRESS algorithm [40w] + +§ 4.3.1 OPTIMIZED CROPPING + +Figure 10 shows two results for the same checkered test pattern, with and without cropping (scaling and shifting the image within the same fixed-size container). The results after cropping via the auto-alignment routine are much sharper. In realistic scenarios - rendering a portrait, say - the alignment still has a strong effect. + + < g r a p h i c s > + +Figure 10: Effect of optimized cropping on test pattern. Left: no crop; Right: optimized crop [12w] + +In Figure 11, the target image has been slightly scaled and shifted to better align with the "character grid" of the typed result. Note the improvement in shape resolution of the eyes, glasses, hair and nose. At small image sizes, alignment is one of the dominant factors in the quality of the result. + + < g r a p h i c s > + +Figure 11: Effect of optimized cropping on portrait [20w] + +§ 4.4 NUMBER OF "OVERSTRIKE" CHARACTERS AT A SINGLE POSITION AND NUMBER OF TOTAL CHARACTERS + +Although typed characters may partly overlap, we have so far discussed selecting only one character at each location. Making use of the backspace key, the typist can "overstrike" multiple characters at the same position within the same layer. This increases the tonal range, especially allowing darker tones, and can to some extent increase shape matching. + +The algorithm incorporates the backspace key by using the simulated result produced by one run of the program as a background layer during compositing operations in a second pass. The most obvious effect is an expanded tonal range in darker regions of the image, along with increased gradient smoothness and shape matching (Figure 12, Figure 13). + + < g r a p h i c s > + +Figure 12: Top: Simulated typed gradient [30w, 3p]. Bottom: Input gradient. + +In exploring selection order, we learned that it was preferable to distribute ink between several layers, rather than concentrating ink in a few.
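The pass-as-background-layer scheme just described can be sketched as follows; `render_pass` stands in for one full optimization pass, the ink-as-reflectance convention (1 = white paper) and the asymmetry schedule are assumptions of the sketch:

```python
import numpy as np

def composite(background, layer):
    """Multiplicative compositing of ink layers, with images stored as
    reflectance in [0, 1] (1 = white paper, 0 = full ink); the paper
    describes multiplicative compositing as a best guess at how
    overstruck ink combines."""
    return background * layer

def overstrike(render_pass, target, n_passes=2, asymmetries=(1.5, 0.1)):
    """Run the single-pass optimizer repeatedly, feeding each result
    back in as the background for the next pass.  The decreasing
    asymmetry schedule follows the settings quoted in the paper
    (a = 1.5, then a = 0.1); `render_pass` is hypothetical."""
    page = np.ones_like(target)                # blank white page
    for i in range(n_passes):
        a = asymmetries[min(i, len(asymmetries) - 1)]
        layer = render_pass(target, background=page, asymmetry=a)
        page = composite(page, layer)          # overstrike onto the page
    return page
```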
By default, adding a second overstrike pass will contribute little ink, as the majority of the ink has already been placed in the first pass. With the AMSE metric, we can increase the penalty for wrongly placed ink (asymmetry) on the first overstrike pass and reduce it on the second pass. A higher asymmetry factor encourages the selection of characters which are an especially good match in some areas, but have too little ink in others, over darker characters which do not match the shape as closely. The second pass, with a lower asymmetry, can fill in the gaps. + + < g r a p h i c s > + +Figure 13: Distributing ink between multiple overstrike passes [19w] + +Unsurprisingly, increasing character resolution produces better results. Figure 14 shows two simulations, each with three overstrike passes, at 20 characters wide and 30 characters wide. The higher resolution yielded a marked improvement in detail reproduction. + + < g r a p h i c s > + +Figure 14: Left: short row length [20w,3p], Right: longer row length [30w,3p] + +For a fixed maximum number of characters, the best results generally come from dividing the characters between two and three overstrike passes. Darker images especially benefit from overstrike, as seen in Figure 15. The increased tonal range in the shadows more than compensates for the reduction in size, up to around four overstrike passes. With a lighter image, the trade-off favors a larger size with two layers, as overstrike is less needed in light areas. + + < g r a p h i c s > + +Figure 15: Left to right: [45w,1p], [32w,2p], [23w,4p], [16w,8p] + +§ 4.5 CHARACTER SET + +The choice of typewriter greatly influences the outcome, as it forms the character set that is available to the algorithm.
With most mechanical typewriters, the typist can vary their strike force to produce lighter and darker variations of the same character. This has dramatic results: including lighter variations greatly increases tonal range in the midtones. + + < g r a p h i c s > + +Figure 16: Characters from Smith-Corona typewriter with dry ribbon, typed hard and soft + +Figure 17 shows the results with different typed character sets and the SF Mono computer font. The leftmost images, created with monotone character sets, exhibit limited tonal range and draw more attention to the textual characters. In comparing these two images alone, the computer font does not reproduce the image as effectively. The uneven distribution of ink on the typewriter characters aids in shape matching, compared to the perfectly flat distribution across the characters of a computer font. Critically, even the darkest typewriter characters are less than fully black, which yields a greater tonal range when characters are allowed to overlap. + +The center and right columns in Figure 17 exhibit progressively better tonal range as the character set includes a better distribution of greys. Even if we limit ourselves to two intensities, results improve by having medium and light intensities rather than dark and medium. There is a trade-off here: overlapping lighter characters can produce a smoother tonal range, at the expense of requiring more overlaps to produce full black.
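The tonal coverage of a scanned character set can be examined with a short sketch; the glyph images and the ink-density convention (1 = full ink) are assumptions here:

```python
import numpy as np

def tonal_range(glyphs):
    """Mean ink density per glyph, sorted light to dark.

    `glyphs` maps a character to its scanned image as a float array in
    [0, 1] (1 = full ink).  A set whose tones cover the light-to-mid
    range evenly supports smoother midtones than one clustered at the
    dark end."""
    tones = sorted(float(g.mean()) for g in glyphs.values())
    return tones[0], tones[-1], tones
```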
+ + < g r a p h i c s > + +Figure 17: Top: Typed characters, Bottom: SF Mono font; Left to right: 1 tone, 2 tones, 2 lighter tones [30w,4p] + +§ 4.6 TIMING NOTES + +The algorithm is expensive due to the large number of image compositing and comparison operations required to determine the best character selection for a given position, multiplied by the number of iterations before convergence. Each of these operations is performed at a high resolution (around 1000 pixels total) at 8-bit depth. + +We ran the software on Google Colaboratory, a cloud computing service which supplies 2 threads of an Intel Xeon processor at 2.3 GHz and 13 GB of RAM [5]. The software is written in Python 3, making use of the numpy, OpenCV and scikit-image libraries. + +The top left image in Figure 18 is composed of two overstrike passes of 2670 characters each. To converge to a stable state, the algorithm visited each character position 25 times. The first pass took 29 minutes to complete; subsequent overstrike passes converge more quickly. Auto-alignment took an additional 10 minutes. In total, the image took approximately one hour to generate. These timings are representative and vary little over different input photographs. + +The time to converge grows faster than linearly with the number of characters: at 420 characters, the algorithm took 0.64 seconds per character position; at 2670 characters, 0.77 seconds; and at 6392 characters, 0.94 seconds. + +§ 5 EVALUATION + +The metric used for comparison between the target image and the simulated typed result has, subjectively, the strongest effect on the results. Best results were obtained from an asymmetric variation on MSE which disproportionately penalizes wrongly placed ink. Blending this metric with SSIM can sharpen results, especially when the typed image is small.
+ +Greedy search will necessarily converge to a local optimum in which each dimension is individually optimal (no single character swap can improve the score). We can get closer to the overall global optimum state by using simulated annealing. + +Alignment of the target image with the typewriter's "character grid" can have a strong effect on shape matching. Alignment can be optimized by measuring the effect of translation and scaling on the target image, selecting the best parameters for each. This is the primary factor when the typed image is small. + +Allowing multiple characters to be typed at a single position can substantially increase tonal range and shape matching, especially when the character set is monotone (with no variation in strike force). Allowing for variation in strike force is the largest factor in obtaining a good tonal range in the midtones and highlights. + +§ 5.1 SETTINGS FOR BEST RESULTS + +To best answer the question posed by the title of this paper - can 1000 words paint a picture - we limit ourselves to typing only 5790 characters. This number reflects 1000 words at the average English word length (4.79 characters) plus one space per word [16]. For a 4:3 portrait as the target image, this allows for a width of 25 characters, with 4 overlapping staggered layers and up to 3 typed characters at any single position, while remaining within our budget of 5790 characters. + +The settings described below result in the highest combined SSIM + PSNR. To the author's eye, this metric best reflects overall quality as explored in Section 4.1. + + * Run auto-alignment for the maximal row length within the character budget. Choose the crop parameters that maximize SSIM + PSNR. + + * Use a character set which includes all typewriter characters, typed both hard and soft.
+ + * Search with simulated annealing, reheating to 10% of the initial temperature each time the temperature reaches 0. + + * Start from a random state and randomize the selection order. + + * Run the algorithm 2 times (2 overstrike passes). Use AMSE, with asymmetry settings $a = 1.5$ on the first pass and $a = 0.1$ on the second. + + < g r a p h i c s > + +Figure 18: Gallery of results with consistent settings [5790 characters, 2p]. Zoom to see details. + +Figure 18 shows 8 images produced with these settings: maximizing our character budget of 5790 characters and distributing the characters across two overstrike passes. In general, the simulated typed results communicate the content and mood of the input photographs regardless of subject matter. + +To represent noisy textures, the algorithm selects a variety of overlapping characters, demonstrated in the thin tree branches, or the windows of the Flatiron Building (top left). In the absence of texture, characters with more even ink distribution - such as @ or # - are selected. This applies to both light and dark regions, with the darkness of the character and number of overlaps determining overall tone. This is evident in the skin tones of the woman with headphones and in the sky of several images, as well as the gradient shown in Figure 12. + +Employing overstrike achieves good tonal range in the shadows, as seen in the handles of the wrenches and the windows of darker buildings; and a deep black level, demonstrated in the landscape and flowers results. Further emphasizing the value of overstrike, light shapes against a dark background are especially clear, like the round nail suspending the rightmost wrench. While the fixed character grid is set at $1/2$ character resolution, some details in the output demonstrate a spatial resolution of less than $1/16$ of a character.
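The auto-alignment step in the settings above (choose crop parameters maximizing SSIM + PSNR) can be sketched as a small grid search; `simulate` stands in for the full typing simulation, the search here covers only shifts (the real routine also searches over scale), and the single-window SSIM is a simplification of the windowed SSIM used by the paper's scikit-image pipeline:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    mse = float(np.mean((x - y) ** 2))
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Single-window simplification of SSIM (the paper's implementation
    # uses the standard windowed SSIM from scikit-image).
    mx, my = x.mean(), y.mean()
    cov = float(((x - mx) * (y - my)).mean())
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)))

def quality(target, result):
    """Combined SSIM + PSNR selection score; the unweighted sum is our
    reading of the text."""
    return ssim_global(target, result) + psnr(target, result)

def best_alignment(image, simulate, shifts=range(-2, 3)):
    """Shift the target within its container and keep the offset whose
    simulated typed result scores highest."""
    best, best_q = (0, 0), -np.inf
    for dy in shifts:
        for dx in shifts:
            shifted = np.roll(image, (dy, dx), axis=(0, 1))
            q = quality(shifted, simulate(shifted))
            if q > best_q:
                best, best_q = (dy, dx), q
    return best, best_q
```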
+ +In general, individual characters are difficult to discern except in the lightest areas, where characters do not overlap - e.g., the sky in the cityscape. The algorithm selects characters which nest, hiding their textual origins, unless the character shapes are a particularly good match - as in the underscores used for the decks of the cruise ships. + + < g r a p h i c s > + +Figure 19: Migrant mother, optimized settings [68w,3p] + +Details with low local contrast, such as the face of the woman with headphones, display coarser spatial resolution than areas of high local contrast such as the edges of buildings. This is due to relying on overlapping characters to produce sharp edges. When additional overlap would cause a poorer tonal match, shape resolution suffers. This explains why metrics preferring shape matching result in higher local contrast. + +However, relying on MSE suits a high-fidelity approach. Accurate tone matching makes the shadows in the cityscape and waterfront images easy to discern; the three-dimensional nature of the buildings is clearly conveyed. Employing a shape-matching metric such as SSIM obscures this. + +Increasing the size of the typed image improves quality. In Figure 19, the output size of 5" × 8" (68 characters wide) is sufficient to reproduce an image with multiple people with excellent quality, conveying the emotional impact of the input photograph. + +§ 5.2 PHYSICAL TYPED RESULTS + +Physical reproducibility on a mechanical typewriter is a key feature of this work. Figure 20 shows the result of following the generated instructions to type a 70 cm² image over 6 hours. + +Typing the results revealed difficulties in replicating the rendered results. The physical typed result in Figure 21 shows false textures and loss of blackness in the body of the horses, especially in the bottom right quadrant.
Even slight misalignment of the offset layers creates gaps between tightly packed characters - especially noticeable when thin characters like ½ overlap within areas of uniform tone. This is an example of over-optimization: while packing thin characters can improve outcomes in the simulation, in practice the physical result would be improved by favoring wider, overlapping characters in smooth-textured regions. + + < g r a p h i c s > + +Figure 20: Physical typed result, in typewriter [40w,2p] + +Tonally, the gradient produced in the rendered result is reflected in the physical typed result, albeit with a compressed dynamic range. While this is partly caused by alignment errors, another factor is the incongruity between the subtractive color space of typed ink and the additive color space of the simulation. The simulation uses multiplicative compositing, as a best guess at how the ink would behave. In reality, the combination of ink and paper imposes a lower bound on the black level, while inconsistent strike force creates a crude step between the lightest reproducible tones and the white paper background. The imperfect tone curve of the scanner used to capture the character set also limits the accuracy of reproducing light tones - the characters appear lighter in the rendered result, shown in Figure 22. + + < g r a p h i c s > + +Figure 21: Horses: Rendered vs. physical typed results [40w,2p] + + < g r a p h i c s > + +Figure 22: Details of horses. Top: Rendered, Bottom: Physical + +Figure 22 shows two details from Figure 21. The bottom left image - a detail of the top-most rows - shows excellent alignment of the characters. Tone matching is accurate overall, but the variation in strike force is apparent on the B and % characters. In contrast, the right image - a detail of lower rows - demonstrates substantial misalignment: note how the ½ and y characters are shifted right and down relative to the b.
While the layers were properly aligned at the top of the image, moving the page to create an offset layer can introduce unwanted rotation. In this case, the page was rotated counter-clockwise on the layer containing ½ and y, relative to the layer containing the leftmost b characters. This blurs the edge delineating the fence post and the body of the pony, and creates gaps in what should be a uniform texture, visible on the right. + +While the typed result of the horses has few misplaced characters (0.15%), achieving such a result is difficult. The left image in Figure 23 shows the most apparent typing error: adding an extra character within a line. This image also reveals the difficulty in typing at more than two dynamic levels: hard, medium, and soft strikes were attempted, succeeding only in producing a smoother gradient in the beard. The right image used our current approach of only hard and soft strikes, limiting the tonal resolution but improving consistency of the background tone, thus avoiding unintended texture. + + < g r a p h i c s > + +Figure 23: Two attempts at typing the same image [20w,2p] + +§ 5.3 LIMITATIONS + +§ 5.3.1 PHYSICAL LIMITATIONS + +The algorithm is sufficiently optimized that human error in typing is a primary limitation. It is especially difficult to apply a precise strike force. While the addition of softly typed characters into the character set is crucial for producing smooth midtones and highlights without employing dither, physical limitations make reliable reproduction of the lightest tones difficult. + +The typewriter itself is also a source of error. The distribution of ink on the ribbon is not perfectly uniform, so the typed characters which were scanned to create the character set may not be exactly reproduced when typed later. The typewriter may drift out of alignment due to variation in the amount the platen is moved or rotated at each keystroke.
+ +These issues can be ameliorated somewhat by improved preparation of the character set. Multiple typed copies of each character could be averaged both in intensity and alignment. This would reduce tone error due to variation in strike force and ribbon inkiness, and increase placement consistency between the simulated and physical typed output. + +§ 5.3.2 ALGORITHMIC LIMITATIONS + +The most notable limitation of this method is that the optimization is quite time-consuming. We were primarily concerned with image quality; future work could focus on acceleration. + +Halftoning-type patterns can emerge which falsely delineate tonally consistent areas, creating spurious shapes. Repeated characters create false textures. Both of these can be addressed by penalizing repeated adjacent characters and/or through dithering. For large images, dither would also subjectively improve tone matching over the highlight areas [8, 12]. + +Our implementation of "overstrike" was made for pragmatic reasons; in it, each pass is computed in sequence with no backtracking, so selections made in the first pass are not revisited. For example, the two slash characters \ and / typed in the same position might provide a better shape match than the typewriter's serif X character, but this might only be realized if both "overstrike layers" are considered at once. + +§ 6 CONCLUSIONS AND FUTURE WORK + +The typewriter, as an analog mechanical device, possesses degrees of freedom not readily available to digital ASCII art: multiple characters can be typed at a single grid location; typing is not restricted to a grid; and characters can be typed with different levels of force. Our method is able to produce high-quality output, both physical (by manually typing the selected characters) and digital (displaying the simulated typed result). + +We used four overlapping layers of characters as a compromise balancing mechanical reproducibility (can the image be physically typed?)
and image quality (does it faithfully represent the input?). We were able to obtain good results over a variety of subject matter under these conditions; dark tones and fine details are handled with overlapping and overstriking, while application of lighter strike force helps increase fidelity in light areas of the image. + +We suggest several directions for future work, both in modifying details of the existing algorithm and in extending automatic typewriter art to handle additional degrees of freedom. + +As the choice of error metric has a strong effect on the result, other metrics should be explored. To improve shape matching, we could employ Alignment Insensitive Shape Similarity, created for ASCII art. To produce results with good tone matching, it would have to be used in combination with other metrics. Machine optimization of the metric blending function's parameters could improve results. + +Freehand typewriter artists such as Paul Smith take advantage of rotating the page: characters can be typed at any angle [1]. We would like to investigate allowing some layers to be typed with the paper rotated. + +Finally, this work has focused on achieving fidelity by optimizing the algorithm more than by optimizing physical reproduction. The physical results could be improved by exploring human factors or robotics in future work.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/MavuzTzi4Sy/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/MavuzTzi4Sy/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..1b3d5ebb8a76c818d1ba232df2468895e3ccea61 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/MavuzTzi4Sy/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,301 @@ +# HMaViz: Human-machine analytics for visual recommendation + +Submission ID: 38 + +![01963eb1-0c10-7238-a46d-54beeb417c29_0_220_340_1359_959_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_0_220_340_1359_959_0.jpg) + +Figure 1: The visual interface of the HMaViz framework: (1) Overview, (2) Exemplar plots, (3) Focus view, (4) Guided navigation, and (5) Expanded view. + +## Abstract + +Visualizations are context-specific. Understanding the context of visualizations before deciding to use them is a daunting task since users have various backgrounds, and there are thousands of available visual representations (and their variants). To this end, this paper proposes a visual analytics framework to achieve the following research goals: (1) to automatically generate a number of suitable representations for visualizing the input data and present them to users as a catalog of visualizations with different levels of abstraction and data characteristics on one/two/multi-dimensional spaces; (2) to infer aspects of the user's interest based on their interactions; and (3) to narrow down a smaller set of visualizations that suits the user's analysis intention. The results of this process give our analytics system the means to better understand the user's analysis process and enable it to better provide timely recommendations.
+ +Index Terms: Human-centered computing-Visualization-Visualization application domains-Visual analytics; + +## 1 INTRODUCTION + +Over the years, visualization has become an effective and efficient way to convey information. Its advantages have given birth to visual software, plug-ins, tools, and supporting libraries [5, 28, 44]. Each tool has its own audience and playing field, but all share one common characteristic: no tool fits all purposes. Selecting the proper visualization tool to meet one's needs is a challenging task, even for data-domain experts, because an ill-suited layout design can render a visualization ineffective. This problem becomes more challenging for inexperienced users who are not trained in graphical design principles and must still choose the visualization best suited to their given tasks. + +Researchers tackle this problem by providing visualization recommendation systems (VRSs) [9, 31, 47] that assist analysts in choosing an appropriate presentation of data. When designing a VRS, designers often focus on factors [49] that are suitable in specific settings. One common factor is based on data characteristics, in which data attributes are taken into consideration; one example of this approach was presented by Mackinlay et al. in Show Me [33]. This feature, embedded in Tableau's commercial visual analysis system, automatically suggests visual representations based on selected data attributes. The task-oriented approach was studied in [9, 43], where users' goals and tasks are the primary focus. Roth and Mattis [43] pioneered integrating users' information-seeking goals into the visualization design process. Another factor is based on users' preferences, in which the recommendation system automatically generates visual encoding charts according to perceptual guidelines [38].
This paper seeks to address this problem by proposing a visualization recommendation prototype called HMaViz. The main contributions of this paper are: + +- We propose a new recommendation framework based on visual characterizations from the data distribution. + +- We develop an interactive prototype, named HMaViz, that supports and captures a wide range of user interactions. + +- We carry out a user study and demonstrate the usefulness of HMaViz on real-world datasets. + +The rest of this paper is organized as follows. Section 2 summarizes existing studies. In Section 3 and Section 4, we describe the methodology and design architecture of the HMaViz prototype in detail. Section 5 demonstrates the usefulness and feasibility of HMaViz via a case study. Challenges and future work are discussed in Section 6. + +## 2 RELATED WORK + +### 2.1 Exploratory visual analysis + +In 2016, Mutlu et al. proposed and developed VizRec [38] to automatically create and suggest personalized visualizations based on perceptual guidelines. The goal of VizRec is to allow users to select suggested visualizations without interrupting their analysis workflow. With this goal in mind, VizRec tries to predict the choice of visual encoding by investigating available information that may help reduce the number of visual combinations. The collaborative filtering technique [21, 48] was utilized to estimate various aspects of the suggested charts' quality. The idea of collaborative filtering is to gather users' preferences, either explicitly, through a 1-7 Likert rating scale given by a user, or implicitly, collected from users' behavior. A limitation of this study is that it is unclear whether users are willing to provide tags/ratings for ranking visualizations, because these responses were collected via a crowd-sourced study, which in turn lacks control over many conditions. Another approach, based on a rule-based system, was presented by Voigt et al. [50].
Based on the characteristics of given devices, data properties, and tasks, the system provides ranked visualizations for users. The key idea of this approach is to leverage annotation in semantic web data to construct the visualization component. However, this annotation requires users to annotate the data input manually, which limits the approach. In addition, this work lacks a supporting empirical study. A similar approach to this study was found in [38]. + +As the number of dimensions grows, the browsable gallery [55, 56] and sequential navigation [15] do not scale. The problem gets worse when users want to inspect the correlation of variables in high-dimensional space: the number of possible pairwise correlations grows quadratically with the number of dimensions (d(d-1)/2 pairs for d dimensions). A good strategy is to focus on a subset of visual presentations prominent in certain visual characterizations [56] that users might be interested in, together with a focus-and-context interface of charts (glyphs or thumbnails) for users to select from. Most recently, Draco [36] uses a formal model that represents visualizations as a set of logical facts. The visual recommendation is then formulated as a constraint-based problem solved using Answer Set Programming [6]. In particular, Draco searches for the visualizations that satisfy the hard constraints and optimize the soft constraints. In this paper, our framework offers personalized recommendations via an intelligent component that learns from users via their interactions and preferences. The recommendations help users find suitable representations that fit their analysis, background knowledge, and cognitive style. + +### 2.2 Personalized visual recommendations + +Personalization in recommendation systems is becoming popular in many application domains [27].
At the same time, it is a challenging problem due to the dynamically changing contents of the available items for recommendation and the requirement to dynamically adapt actions to individual user feedback [32]. Also, a personalized recommendation system requires data about user attributes, content assets, and users' current and past behaviors. Based on these data, the agent delivers the best content to each user individually and gathers feedback (reward) for the recommended item(s) and chosen action(s). In many cases, characterizing the specified data is a complicated process. + +Traditional approaches to personalized recommendation can be divided into collaborative filtering, content-based filtering, and hybrid methods. Collaborative filtering [45] leverages the similarities across users based on their consumption history. This approach is appropriate when there is an overlap in the users' historical data and the contents of the recommended items are relatively static. On the other hand, content-based filtering [35] recommends items similar to those that the user consumed in the past. Finally, the hybrid approaches [29] combine the previous two approaches, e.g., when the collaborative filtering score is low, they leverage the content-based filtering information. The traditional approaches are limited in many real-world problems that impose constant changes in the available items for recommendation and a large number of new users (for whom there is no historical data). In these cases, recent works suggest that reinforcement learning (specifically, contextual bandits) is gaining favor. A visual recommendation system belongs to this type of real-world problem. Therefore, this work explores different contextual bandit algorithms and applies them in characterizing and realizing a visual recommendation system. The contextual bandit problems are popular in the literature; the algorithms aim to maximize the payoff, or in other words, minimize the regret.
The regret is defined as the difference in reward incurred by recommending items (actions and arms are used interchangeably in the contextual bandit setting) that differ from the optimal ones. + +## 3 Methods + +### 3.1 Data abstraction + +Due to the constant increase of data and the limited cognitive capacity of humans, data abstraction [30] is commonly adopted to reduce rendering costs and visual feature computation expenses [3]. Data abstraction is the process of gathering information and presenting it in a summary form for purposes such as statistical analysis. Figure 2 shows an example of data aggregation on two-dimensional presentations. Notice that the visual features in our framework will be computed on the aggregated data, which allows us to handle large data [13]. + +![01963eb1-0c10-7238-a46d-54beeb417c29_1_926_1664_719_337_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_1_926_1664_719_337_0.jpg) + +Figure 2: Abstraction of the Old Faithful Geyser data [20]: Scatterplot vs. aggregated representation in hexagon bins. + +![01963eb1-0c10-7238-a46d-54beeb417c29_2_176_145_1455_1227_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_2_176_145_1455_1227_0.jpg) + +Figure 3: Our proposed HMaViz visual catalog: (top-down) More abstracted representation of the same data, (left-right) More complicated multivariate analysis, (top) The associated visual features for each type of analysis (the feature cells are colored by the abstraction level if they are plotted in the HMaViz default exemplar view). + +### 3.2 Visualization catalog + +In our framework, the users and visualizations are characterized along the following criteria: number of data dimensions (univariate [26], bivariate [11], and multivariate data [14, 57]), visual abstractions described in Section 3.1 (individual data instances, groups [40, 41], or just summary [58]), and visual patterns (trends [23], correlations [51], and outliers [25, 52]).
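These three characterization axes span a small product space; a sketch of its enumeration follows (the actual catalog in Figure 3 is richer, with finer abstraction levels and per-analysis features):

```python
from itertools import product

# The three characterization axes listed in Section 3.2.
DIMENSIONS = ["univariate", "bivariate", "multivariate"]
ABSTRACTIONS = ["individual instances", "groups", "summary"]
PATTERNS = ["trends", "correlations", "outliers"]

# Every (dimensionality, abstraction, pattern) cell of the catalog.
catalog = list(product(DIMENSIONS, ABSTRACTIONS, PATTERNS))
```

Enumerating the cells this way makes it easy to attach candidate chart types and computed visual features to each cell.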
While each of these three dimensions has been studied extensively in the visual analytics field [3, 10], to the best of our knowledge, there is no existing framework that incorporates all of them in human-machine analytics for a visual recommendation system. Figure 3 summarizes the projected dimensions in our visual analytics framework: type of multivariate analysis, statistical-driven features, levels of data abstraction, and visual encoding strategies.
+
+### 3.3 Learning algorithm
+
+Building upon the first task, the second task focuses on the visual interface that can capture the users' interest [8]. We first explored the four mentioned algorithms for contextual bandit problems, namely $\varepsilon$-greedy, UCB1 [2], LinUCB [32], and Contextual Thompson Sampling. We defined our problem following the $k$-armed contextual bandit definition discussed in the previous section. In our case, the reward combines (1) whether the user clicks on the recommended graph and (2) how long the user spends analyzing the graph after clicking. Clicking on the chart alone is not enough, since right after clicking, the user may use the provided menu to modify the recommended item. For instance, after clicking on a graph, the user may change its abstraction level (e.g., from an individual-point display to a clustering display). This change means the agent did not recommend the appropriate abstraction level (though the other features might be correct).
+
+After defining the problem, we generated a set of simulated data according to the number of variables, variable types, abstraction levels, and visual features for each graph to test the regret convergence of the four algorithms. These simulated tests help in selecting an appropriate algorithm for our solution, or in changing the required data collection to better reflect the user's behavior [22].
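The composite reward described above can be sketched as a scoring function. The base amount, dwell-time cap, and penalty factor below are our assumptions for illustration; the paper does not specify exact weights:

```javascript
// Hypothetical reward shaping for one recommended graph. No click yields
// zero reward; a click earns a base amount plus a capped dwell-time bonus,
// discounted when the user immediately overrides a recommended dimension
// (e.g. changes the abstraction level through the menu).
function reward(feedback) {
  if (!feedback.clicked) return 0;
  let r = 0.5 + Math.min(feedback.minutesExplored, 10) / 10;
  if (feedback.changedAbstraction) r *= 0.5; // recommendation was partly wrong
  return r;
}
```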
In our experiments, LinUCB and Thompson Sampling gave better results than the other algorithms. Notably, LinUCB outperformed the others on the simulated data, so we selected it to build the learning agent in the current implementation. Note that these experimental results do not necessarily imply that the Thompson Sampling method is inferior to LinUCB on actual or different datasets, or under different parameter settings that we have not been able to explore exhaustively. Therefore, the learning agent itself is developed as a separate library with a defined set of interfaces, detailed in Section 4. This separate implementation allows us to replace the learning agent with one using a new algorithm, or to apply the algorithm to different recommendation tasks in the system, without having to change the system architecture much.
+
+## 4 HMAVIZ ARCHITECTURE
+
+Before applying machine learning techniques or fitting any models, it is important to understand what the data look like. The system generates a diverse set of visualizations for broad initial exploration of one dimension, two dimensions, and higher dimensions. Lower-dimensional visualizations, such as the bar charts, box plots, and scatter plots shown in Figure 3, are widely accessible. As the number of dimensions grows, the browsable gallery [55, 56] and sequential navigation [15] do not scale well. Therefore, our framework provides two unique features to deal with large, complex, and high-dimensional data. First, we use statistical-driven components that characterize the data distributions, such as density, variance, and skewness (for 1D), shape and texture (for 2D), and convergence and line crossings (for nD). Second, we propose the use of four abstraction levels in our Human-Machine Analytics: individual instances, regular binning, data-dependent binning, and most abstracted (such as min, max, and median).
On the human side, this helps capture users' level of interest in the data (individual, groups, or overall trend). On the machine side, the framework automatically adjusts the level of abstraction in the recommended view to render the larger number of plots (which users may request), as the number of views can grow exponentially with the number of variables in the input data [34].
+
+![01963eb1-0c10-7238-a46d-54beeb417c29_3_151_1254_717_371_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_3_151_1254_717_371_0.jpg)
+
+Figure 4: Flow chart of HMaViz: (a) Overview panel, (b) Recommended views projected on the four dimensions: statistical-driven features, abstraction level, type of multivariate analysis, and visual encodings, (c) Guided navigation view and expanded view.
+
+### 4.1 Components of the HMaViz
+
+Figure 4 shows a schematic overview of HMaViz. After data is fed into the system, the statistical-driven features are calculated and plotted on the overview panel in Figure 4(a). From the overview panel, heuristically defined initial views are shown (i.e., ticks plot, bar chart, area chart, and box plot for 1D). Recommended views are projected on the four dimensions (statistical-driven features, abstraction level, type of multivariate analysis, and visual encoding), as shown in Figure 4(b), to convey users' interest. Users may choose to change one or more dimensions in the interface, which may lead to partial or full updates of the recommendation interface. For example, if users are interested in a more abstracted representation, the guided navigation, focus view, and expanded view in Figure 4(c) need to be updated; increasing the number of variables in the analysis, by contrast, leads to updates of the overview and exemplar plots. The visual features for the next level of analysis are calculated as well (via another web worker).
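The partial-update behavior described here amounts to a mapping from the changed dimension to the panels that must refresh. A minimal sketch, assuming hypothetical view names taken from Figure 4 (this is our reading of the text, not the actual HMaViz code):

```javascript
// Hypothetical mapping from a changed recommendation dimension to the
// panels that need refreshing (names follow Figure 4).
const viewsToRefresh = {
  abstractionLevel: ['guidedNavigation', 'focusView', 'expandedView'],
  variableCount: ['overview', 'exemplarPlots'],
};

// Refresh only the panels affected by the changed dimension; `refresh`
// is a callback supplied by the UI layer. Returns the affected panel names.
function onDimensionChange(dimension, refresh) {
  const affected = viewsToRefresh[dimension] || [];
  affected.forEach(refresh);
  return affected;
}
```

In the real system, the variable-count case would additionally trigger visual-feature computation in a separate web worker, as noted above.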
+
+#### 4.1.1 The overview panel
+
+Figure 5(a) summarizes the input data in the form of a biplot [19], which allows users to explore both data observations and data features on the same 2D projection. From the center point of the panel (the intersection among all connected lines), the horizontal axis represents the first principal component, while the vertical axis represents the second principal component. Each observation in the data set is represented by a small blue circle positioned relative to the principal components. Each vector is color-encoded [39].
+
+![01963eb1-0c10-7238-a46d-54beeb417c29_3_933_688_703_373_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_3_933_688_703_373_0.jpg)
+
+Figure 5: The overview panel of HMaViz: (a) 1D Biplot (b) 2D Biplot.
+
+Figure 5(b) shows nine feature vectors of 2D projections, including convex, sparse, clumpy, striated, skewed, stringy, monotonic, skinny, and outlying [53]. Example plots are chosen based on their values on each of the statistical measures to convey possible data patterns in the data. The position of each thumbnail is relative to the principal components. Users can start their analysis by picking a variable from a list, from the overview panel, or from the exemplary plots explained next.
+
+#### 4.1.2 The exemplary plots
+
+To avoid overwhelming viewers with a large number of generated plots, we automatically select exemplary plots that are prominent on certain visual features, such as skewness, variance, and outliers [12] (for univariate) and correlations [46], clusters [4], stringy, and striated [53] (for bivariate), among other high-dimensional features [14, 17]. We also heuristically associate the visual features and abstraction levels in these four exemplary plots. The predefined associations are color-coded in our catalog in Figure 3.
+
+For univariate data, HMaViz heuristically defines four levels of visual abstraction vs.
four data distribution features in the initial view, namely low-outlier, medium-multimodality, fair-variance, and high-skewness. The first abstract visual type (depicted in Figure 6(a)) is the ticks plot of the variable SlugRate, which has the highest outlier score (on the top right corner of the plot). The ticks plot is at the lowest abstraction level because every single data instance (including outliers) is plotted and can be selected (to see its details). This capability is desirable in many application domains, as outlier detection is one of the critical tasks for visual analysis [7].
+
+We use the bar chart as the recommended visual abstraction for the second level (illustrated in Figure 6(b)) because we want to highlight the skewness of the data distribution; the variable with the highest skewness value, calculated from the values in each dimension, is shown. In contrast to regular binning, data-dependent binning starts where the actual data are located and creates a smooth representation of the distribution density [24]. An area chart is used for this purpose (in 1D) as the fair visual abstraction type (Figure 6(c)). The box plot is recommended as the highest-abstraction visual encoding in Figure 6(d), as it is a standardized way of displaying the data distribution of each variable based on the five-number summary: minimum, first quartile, median, third quartile, and maximum [42]. We try to keep this magic number of five consistent across the highest-level abstractions (for multivariate analysis) in our framework. For example, our 2D contours (the most abstracted bivariate representation in HMaViz) are separated into five different layers.
+
+![01963eb1-0c10-7238-a46d-54beeb417c29_4_150_146_703_614_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_4_150_146_703_614_0.jpg)
+
+Figure 6: Univariate exemplar plots for the Baseball data: (left) Declarative language and (right) Visual representations.
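The five-number summary behind the box-plot encoding is simple to compute. A sketch follows; we use linear interpolation between order statistics, one of several quartile conventions, so exact quartile values may differ from other tools:

```javascript
// Quantile by linear interpolation between order statistics
// (expects `sorted` in ascending order, 0 <= q <= 1).
function quantile(sorted, q) {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos), hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

// Five-number summary driving the box-plot abstraction.
function fiveNumberSummary(values) {
  const s = [...values].sort((a, b) => a - b);
  return {
    min: s[0],
    q1: quantile(s, 0.25),
    median: quantile(s, 0.5),
    q3: quantile(s, 0.75),
    max: s[s.length - 1],
  };
}
```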
+
+#### 4.1.3 The guided navigation
+
+To support ordering, filtering, and navigation in high-dimensional space, we provide focus and context explorations. In particular, thumbnails and glyphs [16] are used to provide high-level overviews, such as Skeleton-Based Scagnostics [34] for multivariate analysis, and to support focus-and-context navigation (highlighting the subspace that the user is looking at). The guided navigation view provides a high-level overview of all variables and allows users to explore all possible combinations of variables. The view is color-coded by the selected statistical-driven features and orders the plots so that users can quickly focus on the more important ones [1]. Within the guided navigation panel, users can change abstraction levels as well as the visual pattern of interest.
+
+![01963eb1-0c10-7238-a46d-54beeb417c29_4_152_1681_721_353_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_4_152_1681_721_353_0.jpg)
+
+Figure 7: The navigation panel for 33 variables ordered and colored by (left) pairwise correlations and (right) Striated patterns.
+
+Voyager [55] and Draco [36] provide interactive navigation of a gallery of generated visualizations. These systems support faceting into trellis plots, layering, and arbitrary concatenation. Our HMaViz incorporates faceted views into the expanded panel and also supports more flexible and complicated layouts, such as biplots, scatterplot matrices (depicted in Figure 7), and parallel coordinates, to provide visual guidance via data characterization methods [18, 54].
+
+#### 4.1.4 The expanded view
+
+From one-dimensional to two-dimensional visualization. Figure 8(a) shows the recommended scatterplot when the current visualization is a ticks plot (since every instance can be brushed in both plots). If the focused plot is a bar chart, then the suggested chart is the 2D hexagon bins, as depicted in Figure 8(b) (since they are both at the medium abstraction level).
When the area plot is used, the recommended representation is the 2D leader plot, as depicted in Figure 8(c). The leaders (balls) are representative data points that group other data points within a predefined radius neighborhood [24]. The intensity of the balls represents the density of their cluster, while the variance of their members defines the ball size. Finally, we use the contour plot as the next-level recommendation for the box plot, as shown in Figure 8(d), where the second dimension is selected based on the currently selected visual score.
+
+![01963eb1-0c10-7238-a46d-54beeb417c29_4_931_885_711_964_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_4_931_885_711_964_0.jpg)
+
+Figure 8: Visualization recommendation from 1D to 2D and from 2D to nD: Plots in the last row are the highest abstraction. Notice that variables in each plot are different.
+
+From two-dimensional to higher-dimensional graphs. The rightmost column in Figure 8 shows examples of equivalent higher-dimensional representations for the ones on the left. Notice that in the right panel of Figure 8(c), the closed bands (groups) have different widths, as the variance in these groups varies on each dimension. Figure 8(d) presents our new radar bands, which summarize the multivariate data across many dimensions. In particular, the inner and outer borders of the bands are the first and third quartiles of each dimension; the middle black curve travels through the medians of the dimensions.
+
+#### 4.1.5 The learning agent
+
+We apply reinforcement learning in our framework to learn and provide personalized recommendations to individual users via their interactions and preferences. As discussed in Section 3, we implemented our learning agent as a separate library before deploying it to our target application.
This separation makes the agent applicable to different recommendation tasks in our application and lets it be easily replaced by a different algorithm without impacting the overall system architecture.
+
+Figure 9 shows the main components of our learning agent implementation. The first task is to create a new agent. Once created, the agent can learn either online or offline. In online learning mode, the agent first observes the user context and combines it with the data of the visualizations available for recommendation. It then uses its learned knowledge to estimate a score for each available graph and recommends to the user the items with the highest estimated scores. After recommending, the agent observes the rewards from the user. In our case, the reward combines (1) whether the user clicks on the graph and (2) the number of minutes the user spends exploring that graph. After observing the actual rewards for the recommended visualizations, the agent updates its current knowledge from this trial.
+
+![01963eb1-0c10-7238-a46d-54beeb417c29_5_152_1013_714_521_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_5_152_1013_714_521_0.jpg)
+
+Figure 9: Components of the Contextual Bandit learning library.
+
+In the offline training mode, on the other hand, there is a recorded set of $T$ trials; each trial $t$ contains a set of $K$ graphs with their $d$ context features and the corresponding rewards for that trial. The agent runs through each trial of this offline dataset to improve its reward estimation for any given user context. Finally, it is crucial to be able to save, transfer, and reload the agent's learned knowledge; therefore, we also provide options to do so.
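The online update step and the save/reload of learned knowledge can be sketched with a generic disjoint LinUCB agent. This is our illustration, not the actual LinUCBJS library: each arm keeps the inverse of its design matrix (maintained with the Sherman-Morrison identity) and a reward vector, and because this learned state is plain data it serializes directly to JSON:

```javascript
// Minimal disjoint LinUCB sketch. For each arm: Ainv = A^-1 and b, so that
// theta = Ainv * b and the UCB score is theta.x + alpha * sqrt(x' Ainv x).
function matVec(M, v) {
  return M.map(row => row.reduce((s, m, j) => s + m * v[j], 0));
}
function dot(a, b) { return a.reduce((s, ai, i) => s + ai * b[i], 0); }

class LinUCBAgent {
  constructor(nArms, d, alpha = 1.0) {
    this.alpha = alpha;
    this.arms = Array.from({ length: nArms }, () => ({
      // Ainv starts as the identity matrix (A = I before any observation).
      Ainv: Array.from({ length: d }, (_, i) =>
        Array.from({ length: d }, (_, j) => (i === j ? 1 : 0))),
      b: new Array(d).fill(0),
    }));
  }
  // Upper-confidence score for one arm given context x.
  score(arm, x) {
    const a = this.arms[arm];
    const theta = matVec(a.Ainv, a.b);
    const Ax = matVec(a.Ainv, x);
    return dot(theta, x) + this.alpha * Math.sqrt(dot(x, Ax));
  }
  // Recommend the arm with the highest UCB score.
  recommend(x) {
    let best = 0, bestScore = -Infinity;
    this.arms.forEach((_, i) => {
      const s = this.score(i, x);
      if (s > bestScore) { bestScore = s; best = i; }
    });
    return best;
  }
  // Observe reward r for the pulled arm with context x.
  // Sherman-Morrison keeps Ainv exact without recomputing an inverse.
  update(arm, x, r) {
    const a = this.arms[arm];
    const Ax = matVec(a.Ainv, x);
    const denom = 1 + dot(x, Ax);
    a.Ainv = a.Ainv.map((row, i) =>
      row.map((m, j) => m - (Ax[i] * Ax[j]) / denom));
    a.b = a.b.map((bi, i) => bi + r * x[i]);
  }
  // The learned knowledge is plain data: save and reload it as JSON.
  save() { return JSON.stringify({ alpha: this.alpha, arms: this.arms }); }
  static load(json) {
    const s = JSON.parse(json);
    const agent = new LinUCBAgent(s.arms.length, s.arms[0].b.length, s.alpha);
    agent.arms = s.arms;
    return agent;
  }
}
```

Offline training then amounts to replaying recorded (context, arm, reward) trials through `update`, and a saved agent can be reloaded in another process via `LinUCBAgent.load`.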
This transferable knowledge, together with the offline learning capability, allows us to change the agent's algorithm and lets the agent learn from data coming from different sources.
+
+### 4.2 Implementation
+
+HMaViz is implemented in JavaScript using Plotly, D3.js [5], and AngularJS. The online demo, video, source code, and more use cases can be found on our GitHub project: https://git.io/Jv3@Y. The current learning agent (called LinUCBJS) is also implemented in JavaScript, and Firebase [37] is used to store data for the agent. Figure 10 depicts the steps involved in one trial of the agent's online learning in the current version. First, the agent reads the user profile and the features of the graphs available for recommendation. It then recommends a set of four graphs with corresponding IDs, which are then presented to the user. The system monitors whether the user clicks on the recommended graphs and how long the user spends exploring them, to generate the corresponding rewards (in minutes). Finally, the agent uses these rewards to update its knowledge.
+
+![01963eb1-0c10-7238-a46d-54beeb417c29_5_931_454_717_264_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_5_931_454_717_264_0.jpg)
+
+Figure 10: LinUCBJS deployment in the HMaViz implementation.
+
+## 5 CASE STUDY
+
+To evaluate the effectiveness of HMaViz, we conducted a study with ten users from various domains: three users with statistical expertise, two from the psychology department, two from civil engineering, and three from the agriculture department. No member of the team that developed the framework was included in this study, to avoid bias. We varied the users' backgrounds so that our system would be viewed from multiple perspectives.
+
+The goal of this study is to capture users' opinions of the current system. The advantages and drawbacks of using the learning interface were addressed and iteratively refined during the experiment.
Qualitative analysis is the main method used in this study. We separate the experiment into three phases:
+
+- Phase I: Understanding the interface and how the visual framework works. Before users start exploring the dataset, they are introduced to the basic components and elements of the system and how they are connected. Navigation to traverse from one panel to another is illustrated. This phase is anticipated to take approximately 20 minutes.
+
+- Phase II: Exploring the data sets. This phase involves users in the active experiment with 14 datasets, such as Iris, Population, Cars, Jobs, Baseball, NRCMath, and Soil profiles. Most of the datasets are publicly available on the internet. We try to vary the datasets as much as possible while maintaining user familiarity with some of them.
+
+- Phase III: Gathering information. Information is gathered in both previous phases (Phase I and Phase II) for analysis and post-study. After users finish their experiment, we provide them with open-ended and focused questions. All data is used for analysis.
+
+### 5.1 Findings and Discussions
+
+Interesting trends or patterns. Each user has a different view of the data, so patterns that are interesting to one user may not be interesting to another. We filter out the least-mentioned cases and keep only the top favorites in our report. Figure 11 illustrates some typical findings reported by most users. In Figure 11(a), users find an interesting pattern in the Baseball dataset: an outlying point, represented by a small red circle. The outlier can be recognized easily in the one-dimensional chart (i.e., SlugRate); although a 1D outlier may no longer be an outlier when higher-dimensional space is taken into account, in our case it still stands out after adding one or two dimensions.
+
+![01963eb1-0c10-7238-a46d-54beeb417c29_6_151_312_717_428_0.jpg](images/01963eb1-0c10-7238-a46d-54beeb417c29_6_151_312_717_428_0.jpg)
+
+Figure 11: Exploratory analysis of detecting outliers in three navigation steps from overview, focus view, and recommended views using statistical-driven measures: a) The Baseball data b) The soil profiles.
+
+Regarding the visual abstractions, the three agriculture users were interested in the medium level of abstraction, especially the 2D hexagon plots, since the rendering is much faster. The visual features received unequal attention from users; for example, many users started by finding outliers in the data and looking for the best projections to highlight and compare them.
+
+### 5.2 Expert Feedback
+
+We observed that experts from various domains have various interests; therefore, the types of analysis (and the visual encodings, statistical measures, and levels of abstraction) vary significantly. For example, the three soil scientists were mostly interested in 2D projections and the correlations of variables. They specifically mentioned that they rarely go beyond 2D in their type of analysis and are unfamiliar with 3D charts and radar charts. They found the guided scatterplot matrix, colored and ordered by our monotonicity measure, useful and visually appealing. They could easily make sense of the correlations of different chemical elements (such as Ca, Si, and Zn) and thereby differentiate and predict the soil classification. In terms of visual abstraction, they all agreed that the 2D contour map could quickly provide an overview of chemical variations.
In brief, for the soil scientists, we can project their interests onto our framework dimensions as:
+
+- Type of analysis = bivariate (2D scatterplot)
+
+- Level of abstraction = high
+
+- Visual encoding = area (contour)
+
+- Statistical feature = monotonicity (Pearson correlation)
+
+Besides the positive feedback, the experts also pointed out limitations of HMaViz. For instance, the 2D visual features (Scagnostics) are not a user-friendly option. Adding an animated guideline or a graphical tutorial would be helpful here, so that users can go back and forth to find out which visual features are available in the current analysis (to guide their exploration process) and the meaning and computation of each measure.
+
+## 6 CONCLUSION AND FUTURE WORK
+
+This paper presents HMaViz, a visualization recommendation framework that helps analysts explore, analyze, and discover data patterns, trends, and outliers, and that facilitates guided exploration of high-dimensional data. The user indicates which of the presented plots, abstraction levels, and visual features are most interesting for the given task; the system learns the user's interest and presents additional views. The extracted knowledge is used to suggest more effective visualizations in the next steps of the user interaction. In summary, we provide view recommendation (of different types and complexity) and guidance in the data exploration process. Our technique is designed to scale with large and high-dimensional data. Also, the learning agent is designed as a separate library with a clear set of interfaces; thus, it is open to incorporating new learning algorithms in the future. Especially as more user data become available, different learning algorithms should be explored and validated for personalized recommendations.
+
+## REFERENCES
+
+[1] R. Amar, J. Eagan, and J. Stasko. Low-level components of analytic activity in information visualization. In Proc.
of the IEEE Symposium on Information Visualization, pp. 15-24, 2005.
+
+[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Mach. Learn., 47(2-3):235-256, May 2002.
+
+[3] M. Behrisch, M. Blumenschein, N. W. Kim, L. Shao, M. El-Assady, J. Fuchs, D. Seebacher, A. Diehl, U. Brandes, H. Pfister, T. Schreck, D. Weiskopf, and D. A. Keim. Quality metrics for information visualization. Computer Graphics Forum, 37(3):625-662, 2018. doi: 10.1111/cgf.13446
+
+[4] E. Bertini, A. Tatu, and D. Keim. Quality metrics in high-dimensional data visualization: An overview and systematization. IEEE Transactions on Visualization and Computer Graphics, 17(12):2203-2212, 2011.
+
+[5] M. Bostock, V. Ogievetsky, and J. Heer. D³ data-driven documents. IEEE Transactions on Visualization & Computer Graphics, (12):2301-2309, 2011.
+
+[6] G. Brewka, T. Eiter, and M. Truszczynski. Answer set programming at a glance. Communications of the ACM, 54(12):92-103, 2011.
+
+[7] D. R. Brillinger, L. T. Fernholz, and S. Morgenthaler. The practice of data analysis: Essays in honor of John W. Tukey, vol. 401. Princeton University Press, 2014.
+
+[8] E. T. Brown, A. Ottley, H. Zhao, Q. Lin, R. Souvenir, A. Endert, and R. Chang. Finding waldo: Learning about users from their interactions. IEEE Transactions on Visualization and Computer Graphics, 20(12):1663-1672, Dec 2014. doi: 10.1109/TVCG.2014.2346575
+
+[9] S. M. Casner. Task-analytic approach to the automated design of graphic presentations. ACM Transactions on Graphics (ToG), 10(2):111-151, 1991.
+
+[10] C. Collins, N. Andrienko, T. Schreck, J. Yang, J. Choo, U. Engelke, A. Jena, and T. Dwyer. Guidance in the human-machine analytics process. Visual Informatics, 2(3):166-180, 2018. doi: 10.1016/j.visinf.2018.09.003
+
+[11] T. N. Dang, A. Anand, and L. Wilkinson. Timeseer: Scagnostics for high-dimensional time series.
IEEE Transactions on Visualization and Computer Graphics, 19(3):470-483, 2013. doi: 10.1109/TVCG.2012.128
+
+[12] T. N. Dang and L. Wilkinson. Timeexplorer: Similarity search time series by their signatures. In International Symposium on Visual Computing, pp. 280-289. Springer, 2013.
+
+[13] T. N. Dang and L. Wilkinson. Transforming scagnostics to reveal hidden features. IEEE Transactions on Visualization and Computer Graphics, 20(12):1624-1632, Dec 2014. doi: 10.1109/TVCG.2014.2346572
+
+[14] A. Dasgupta and R. Kosara. Pargnostics: Screen-space metrics for parallel coordinates. IEEE Transactions on Visualization & Computer Graphics, (6):1017-1026, 2010.
+
+[15] V. Dibia and C. Demiralp. Data2vis: Automatic generation of data visualizations using sequence-to-sequence recurrent neural networks. arXiv preprint arXiv:1804.03126, 2018.
+
+[16] F. Fischer, J. Fuchs, and F. Mansmann. ClockMap: Enhancing Circular Treemaps with Temporal Glyphs for Time-Series Data. In M. Meyer and T. Weinkaufs, eds., EuroVis - Short Papers, 2012. doi: 10.2312/PE/EuroVisShort/EuroVisShort2012/097-101
+
+[17] L. Fu. Implementation of three-dimensional scagnostics. Master's thesis, Department of Mathematics, University of Waterloo, 2009.
+
+[18] L. Fu. Implementation of three-dimensional scagnostics. Master's thesis, Department of Mathematics, University of Waterloo, 2009.
+
+[19] K. R. Gabriel. The biplot graphic display of matrices with application to principal component analysis. Biometrika, 58(3):453-467, 1971.
+
+[20] GeyserTimes. Eruptions of Old Faithful Geyser, May 2014 [online database]. https://geysertimes.org, 2017.
+
+[21] N. Good, J. B. Schafer, J. A. Konstan, A. Borchers, B. Sarwar, J. Herlocker, J. Riedl, et al. Combining collaborative filtering with personal agents for better recommendations. In AAAI/IAAI, pp. 439-446, 1999.
+
+[22] D. Gotz and Z. Wen. Behavior-driven visualization recommendation. In Proceedings of the 14th international conference on Intelligent user interfaces, pp.
315-324, 2009.
+
+[23] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. J. Mach. Learn. Res., 3:1157-1182, Mar. 2003.
+
+[24] J. Hartigan. Clustering Algorithms. John Wiley & Sons, New York, 1975.
+
+[25] D. Hawkins. Identification of outliers. Monographs on applied probability and statistics. Chapman and Hall, London [u.a.], 1980.
+
+[26] H. Hochheiser and B. Shneiderman. Dynamic query tools for time series data sets: Timebox widgets for interactive exploration. Information Visualization, 3(1):1-18, Mar. 2004. doi: 10.1145/993176.993177
+
+[27] K. Hu, M. A. Bakker, S. Li, T. Kraska, and C. Hidalgo. Vizml: A machine learning approach to visualization recommendation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019.
+
+[28] Plotly Technologies Inc. Collaborative data science, 2015.
+
+[29] J. Karim. Hybrid system for personalized recommendations. In 2014 IEEE Eighth International Conference on Research Challenges in Information Science (RCIS), pp. 1-6, May 2014. doi: 10.1109/RCIS.2014.6861080
+
+[30] D. A. Keim. Information visualization and visual data mining. IEEE Transactions on Visualization & Computer Graphics, (1):1-8, 2002.
+
+[31] D. Koop, C. E. Scheidegger, S. P. Callahan, J. Freire, and C. T. Silva. Viscomplete: Automating suggestions for visualization pipelines. IEEE Transactions on Visualization and Computer Graphics, 14(6):1691-1698, 2008.
+
+[32] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, WWW '10, p. 661-670. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1772690.1772758
+
+[33] J. Mackinlay, P. Hanrahan, and C. Stolte. Show me: Automatic presentation for visual analysis. IEEE Transactions on Visualization and Computer Graphics, 13(6):1137-1144, Nov 2007. doi: 10.1109/TVCG.2007.70594
+
+[34] J. Matute, A. C.
Telea, and L. Linsen. Skeleton-based scagnostics. IEEE Transactions on Visualization & Computer Graphics, (1):1-1, 2018.
+
+[35] D. Mladenic. Text-learning and related intelligent agents: a survey. IEEE Intelligent Systems and their Applications, 14(4):44-54, July 1999. doi: 10.1109/5254.784084
+
+[36] D. Moritz, C. Wang, G. Nelson, H. Lin, A. M. Smith, B. Howe, and J. Heer. Formalizing visualization design knowledge as constraints: Actionable and extensible models in draco. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis), 2019.
+
+[37] L. Moroney. The firebase realtime database. In The Definitive Guide to Firebase, pp. 51-71. Springer, 2017.
+
+[38] B. Mutlu, E. Veas, and C. Trattner. Vizrec: Recommending personalized visualizations. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(4):31, 2016.
+
+[39] P. K. M. Owonibi. A review on visualization recommendation strategies, 2017.
+
+[40] G. Palmas, M. Bachynskyi, A. Oulasvirta, H. P. Seidel, and T. Weinkauf. An edge-bundling layout for interactive parallel coordinates. In 2014 IEEE Pacific Visualization Symposium, pp. 57-64, March 2014. doi: 10.1109/PacificVis.2014.40
+
+[41] R. Pamula, J. K. Deka, and S. Nandi. An outlier detection method based on clustering. In 2011 Second International Conference on Emerging Applications of Information Technology, pp. 253-256, Feb 2011. doi: 10.1109/EAIT.2011.25
+
+[42] P. Patil. What is exploratory data analysis?, 2018. Accessed: 2020-05-05.
+
+[43] S. F. Roth and J. Mattis. Data characterization for intelligent graphics presentation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 193-200. ACM, 1990.
+
+[44] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer. Vega-lite: A grammar of interactive graphics. IEEE Transactions on Visualization and Computer Graphics, 23(1):341-350, 2017.
+
+[45] J. B. Schafer, J. Konstan, and J. Riedl. Recommender systems in e-commerce.
In Proceedings of the 1st ACM Conference on Electronic Commerce, EC '99, p. 158-166. Association for Computing Machinery, New York, NY, USA, 1999. doi: 10.1145/336992.337035 + +[46] J. Seo and B. Shneiderman. A rank-by-feature framework for unsupervised multidimensional data exploration using low dimensional projections. In Information Visualization, 2004. INFOVIS 2004. IEEE Symposium on, pp. 65-72. IEEE, 2004. + +[47] C. Stolte, D. Tang, and P. Hanrahan. Polaris: A system for query, analysis, and visualization of multidimensional relational databases. IEEE Transactions on Visualization and Computer Graphics, 8(1):52- 65, 2002. + +[48] X. Su and T. M. Khoshgoftaar. A survey of collaborative filtering techniques. Advances in artificial intelligence, 2009, 2009. + +[49] M. Vartak, S. Huang, T. Siddiqui, S. Madden, and A. Parameswaran. Towards visualization recommendation systems. ACM SIGMOD Record, 45(4):34-39, 2017. + +[50] M. Voigt, M. Franke, and K. Meissner. Using expert and empirical knowledge for context-aware recommendation of visualization components. Int. J. Adv. Life Sci, 5:27-41, 2013. + +[51] K. Watanabe, H.-Y. Wu, Y. Niibe, S. Takahashi, and I. Fujishiro. Biclus-tering multivariate data for correlated subspace mining. In Visualization Symposium (PacificVis), 2015 IEEE Pacific, pp. 287-294. IEEE, 2015. + +[52] L. Wilkinson. Visualizing big data outliers through distributed aggregation. IEEE transactions on visualization and computer graphics, 2017. + +[53] L. Wilkinson, A. Anand, and R. Grossman. Graph-theoretic scagnostics. 2005. + +[54] L. Wilkinson, A. Anand, and R. Grossman. High-dimensional visual analytics: Interactive exploration guided by pairwise views of point distributions. IEEE Transactions on Visualization and Computer Graphics, 12(6):1363-1372, 2006. + +[55] K. Wongsuphasawat, D. Moritz, A. Anand, J. Mackinlay, B. Howe, and J. Heer. Voyager: Exploratory analysis via faceted browsing of visualization recommendations. 
IEEE Transactions on Visualization & Computer Graphics, (1):1-1, 2016. + +[56] K. Wongsuphasawat, Z. Qu, D. Moritz, R. Chang, F. Ouk, A. Anand, J. Mackinlay, B. Howe, and J. Heer. Voyager 2: Augmenting visual analysis with partial view specifications. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2648-2659, 2017. + +[57] J. Yang, A. Patro, S. Huang, N. Mehta, M. O. Ward, and E. A. Rundensteiner. Value and relation display for interactive exploration of high dimensional datasets. In Proceedings of the IEEE Symposium on Information Visualization, INFOVIS '04, pp. 73-80. IEEE Computer Society, Washington, DC, USA, 2004. doi: 10.1109/INFOVIS.2004.71 + +[58] A. Yates, A. Webb, M. Sharpnack, H. Chamberlin, K. Huang, and R. Machiraju. Visualizing multidimensional data with glyph sploms. In Computer Graphics Forum, vol. 33, pp. 301-310. Wiley Online Library, 2014. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/MavuzTzi4Sy/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/MavuzTzi4Sy/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..09c7ae9c7f25e0d0ca77ac20402250be3a5b9706 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/MavuzTzi4Sy/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,179 @@ +§ HMAVIZ: HUMAN-MACHINE ANALYTICS FOR VISUAL RECOMMENDATION + +Submission ID: 38 + + < g r a p h i c s > + +Figure 1: The visual interface of the HMaViz framework: (1) Overview, (2) Exemplar plots, (3) Focus view, (4) Guided navigation, and (5) Expanded view. + +§ ABSTRACT + +Visualizations are context-specific. 
Understanding the context of visualizations before deciding to use them is a daunting task, since users have various backgrounds and there are thousands of available visual representations (and their variants). To this end, this paper proposes a visual analytics framework to achieve the following research goals: (1) to automatically generate a set of suitable representations for visualizing the input data and present them to users as a catalog of visualizations with different levels of abstraction and data characteristics in one-, two-, and multi-dimensional spaces; (2) to infer aspects of the user's interest from their interactions; and (3) to narrow the catalog down to a smaller set of visualizations that suit the user's analysis intention. The results of this process give our analytics system the means to better understand the user's analysis process and enable it to provide timely recommendations. + +Index Terms: Human-centered computing-Visualization-Visualization application domains-Visual analytics; + +§ 1 INTRODUCTION + +Over the years, visualization has become an effective and efficient way to convey information. Its advantages have given birth to visual software, plug-ins, tools, and supporting libraries [5, 28, 44]. Each tool has its own audience and playing field, yet all share one characteristic: no tool fits all purposes. Selecting the proper visualization tool is challenging even for domain experts, and more so for inexperienced users who are not trained in graphical design principles and cannot easily judge which visualization best suits their given task. + +Researchers tackle this problem by providing visualization recommendation systems (VRS) [9, 31, 47] that assist analysts in choosing an appropriate presentation of the data. 
When designing a VRS, designers often focus on a few factors [49] that suit specific settings. One common factor is data characteristics, in which data attributes are taken into consideration; one example of this approach was presented by Mackinlay et al. in Show Me [33], which, embedded in Tableau's commercial visual analysis system, automatically suggests visual representations based on selected data attributes. The task-oriented approach was studied in [9, 43], where users' goals and tasks are the primary focus; Roth and Mattis [43] pioneered integrating users' information-seeking goals into the visualization design process. Another factor is users' preferences, in which the recommendation system automatically generates visual encodings according to perceptual guidelines [38]. This paper addresses the problem by proposing a visualization recommendation prototype called HMaViz. The main contributions of this paper are: + + * We propose a new recommendation framework based on visual characterizations of the data distribution. + + * We develop an interactive prototype, named HMaViz, that supports and captures a wide range of user interactions. + + * We carry out a user study and demonstrate the usefulness of HMaViz on real-world datasets. + +The rest of this paper is organized as follows. Section 2 summarizes existing studies. Section 3 and Section 4 describe the methodology and design architecture of the HMaViz prototype in detail. Section 5 demonstrates the usefulness and feasibility of HMaViz via a case study. Challenges and future work are discussed in Section 6. + +§ 2 RELATED WORK + +§ 2.1 EXPLORATORY VISUAL ANALYSIS + +In 2016, Mutlu et al. proposed and developed VizRec [38] to automatically create and suggest personalized visualizations based on perceptual guidelines. 
The goal of VizRec is to let users select suggested visualizations without interrupting their analysis workflow. With this goal in mind, VizRec predicts the choice of visual encoding by investigating available information that may indicate how to reduce the number of visual combinations. The collaborative filtering technique [21, 48] was utilized to estimate the quality of the suggested charts. The idea of collaborative filtering is to gather users' preferences either explicitly, through a 1-7 Likert rating given by the user, or implicitly, from the user's behavior. A limitation of this study is its reliance on users' willingness to tag and rate visualizations for ranking; these responses were collected via a crowd-sourced study, which lacks control over many conditions. A rule-based approach was presented by Voigt et al. [50]: based on the characteristics of the given devices, data properties, and tasks, the system provides ranked visualizations for users. The key idea is to leverage annotations in semantic web data to construct the visualization components. However, users must annotate the input data manually, which limits the approach, and the work also lacks an empirical study. A similar approach was found in [38]. + +As the number of dimensions grows, the browsable gallery [55, 56] and sequential navigation [15] do not scale. The problem gets worse when users want to inspect the correlation of variables in high-dimensional space: the number of possible pairwise correlations grows quadratically with the number of dimensions. A good strategy is to focus on a subset of visual presentations that are prominent on certain visual characterizations [56] that users might be interested in, together with a focus-and-context interface of charts (glyphs or thumbnails) for users to select from. 
Most recently, Draco [36] uses a formal model that represents visualizations as a set of logical facts. Visual recommendation is then formulated as a constraint-based problem solved using Answer Set Programming [6]: Draco searches for the visualizations that satisfy the hard constraints and optimize the soft constraints. In this paper, our framework offers personalized recommendations via an intelligent component that learns from users through their interactions and preferences. The recommendations help users find suitable representations that fit their analysis, background knowledge, and cognitive style. + +§ 2.2 PERSONALIZED VISUAL RECOMMENDATIONS + +Personalization in recommendation systems is gaining popularity in many application domains [27]. At the same time, it is a challenging problem due to the dynamically changing contents of the items available for recommendation and the requirement to adapt actions to individual user feedback [32]. A personalized recommendation system also requires data about user attributes, content assets, and users' current and past behavior. Based on these data, the agent delivers the best content to each user individually and gathers feedback (reward) for the recommended item(s) and chosen action(s). In many cases, characterizing such data is a complicated process. + +Traditional approaches to personalized recommendation can be divided into collaborative filtering, content-based filtering, and hybrid methods. Collaborative filtering [45] leverages similarities across users based on their consumption history. This approach is appropriate when the users' historical data overlap and the contents of the recommended items are relatively static. In contrast, content-based filtering [35] recommends items similar to those the user consumed in the past. 
Finally, hybrid approaches [29] combine the previous two; e.g., when the collaborative filtering score is low, they fall back on content-based filtering information. The traditional approaches are limited in many real-world problems where the items available for recommendation change constantly and many users are new (and thus have no historical data). In these cases, recent work suggests that reinforcement learning (specifically, contextual bandits) is gaining favor. A visual recommendation system belongs to this class of real-world problems. Therefore, this work explores different contextual bandit algorithms and applies them in characterizing and realizing a visual recommendation system. Contextual bandit problems are popular in the literature; the algorithms aim to maximize the payoff or, in other words, minimize the regret. The regret is defined as the difference in reward between the recommended items (actions and arms are used interchangeably in the contextual bandit setting) and the optimal ones. + +§ 3 METHODS + +§ 3.1 DATA ABSTRACTION + +Due to the constant increase of data and humans' limited cognitive capacity, data abstraction [30] is commonly adopted to reduce rendering cost and visual feature computation expense [3]. Data abstraction is the process of gathering information and presenting it in summary form for purposes such as statistical analysis. Figure 2 shows an example of data aggregation in two-dimensional presentations. Notice that the visual features in our framework are computed on the aggregated data, which allows us to handle large data [13]. + + < g r a p h i c s > + +Figure 2: Abstraction of the Old Faithful Geyser data [20]: Scatterplot vs. aggregated representation in hexagon bins. 
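As a rough, hypothetical sketch of the aggregation idea behind Figure 2 (our own simplified construction, not the HMaViz implementation), the following bins scattered 2D points on a regular grid; hexagon binning follows the same point-to-cell assignment idea, just with offset rows. The per-bin counts then drive the color intensity of each aggregated mark, and downstream visual features can be computed on the bins instead of the raw points.

```javascript
// Aggregate 2D points into regular bins of size (dx, dy).
// A square-grid stand-in for hexagon binning; all names are illustrative.
function binPoints(points, dx, dy) {
  const bins = new Map();
  for (const [x, y] of points) {
    const i = Math.floor(x / dx);
    const j = Math.floor(y / dy);
    const key = `${i},${j}`;
    if (!bins.has(key)) bins.set(key, { i, j, count: 0 });
    bins.get(key).count += 1; // count drives the bin's color intensity
  }
  return [...bins.values()];
}

// Old Faithful-style pairs: (eruption duration in minutes, waiting time).
const points = [[1.8, 54], [1.9, 55], [2.0, 51], [4.3, 80], [4.5, 82], [4.4, 79]];
const bins = binPoints(points, 1, 10); // 1-minute by 10-minute bins
```

Rendering then touches one mark per occupied bin rather than one per raw point, which is what makes the aggregated view in Figure 2 cheap for large data.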
+ + < g r a p h i c s > + +Figure 3: Our proposed HMaViz visual catalog: (top-down) more abstracted representations of the same data, (left-right) more complicated multivariate analysis, (top) the associated visual features for each type of analysis (the feature cells are colored by abstraction level if they are plotted in the HMaViz default exemplar view). + +§ 3.2 VISUALIZATION CATALOG + +In our framework, users and visualizations are characterized by the following criteria: number of data dimensions (univariate [26], bivariate [11], and multivariate data [14, 57]), visual abstractions described in Section 3.1 (individual data instances, groups [40, 41], or just a summary [58]), and visual patterns (trends [23], correlations [51], and outliers [25, 52]). While each of these three dimensions has been studied extensively in the visual analytics field [3, 10], to the best of our knowledge, no existing framework incorporates all of them in human-machine analytics for a visual recommendation system. Figure 3 summarizes the projected dimensions in our visual analytics framework: type of multivariate analysis, statistical-driven features, levels of data abstraction, and visual encoding strategies. + +§ 3.3 LEARNING ALGORITHM + +Building upon the first task, the second task focuses on the visual interface that can capture the user's interest [8]. We first explored four algorithms for contextual bandit problems, namely ε-greedy, UCB1 [2], LinUCB [32], and Contextual Thompson Sampling. We defined our problem following the k-armed contextual bandit definition discussed in the previous section. In our case, the reward is the combination of (1) whether the user clicks on the recommended graph and (2) how long the user spends analyzing the graph after clicking. 
Clicking on the chart alone is not enough, since right after clicking, the user may use the provided menu to modify the recommended item. For instance, after clicking on a graph, the user may change its abstraction level (e.g., from an individual-point display to a clustering display). This change means the agent did not recommend the appropriate abstraction level (though the other features might be correct). + +After defining the problem, we generate a set of simulated data according to the number of variables, variable types, abstraction levels, and visual features for each graph to test the regret convergence of the four algorithms. These simulated tests help in selecting an appropriate algorithm for our solution or in revising the data we collect to reflect the user's behavior [22]. In our experiments, LinUCB and Thompson Sampling gave better results than the other algorithms, and LinUCB performed best on the simulated data. Thus, we selected it to build the learning agent in the current implementation. Note that these results do not necessarily imply that Thompson Sampling is inferior to LinUCB on the actual or other datasets, or under parameter settings that we have not been able to explore exhaustively. The learning agent itself is developed as a separate library with a defined set of interfaces, detailed in Section 4. This separate implementation allows us to replace the learning agent with a new algorithm, or to apply the algorithm to different recommendation tasks in the system, without having to change the system architecture much. + +§ 4 HMAVIZ ARCHITECTURE + +Before applying machine learning techniques or fitting any models, it is important to understand what the data look like. 
The system generates a diverse set of visualizations for broad initial exploration in one, two, and higher dimensions. Lower-dimensional visualizations, such as the bar charts, box plots, and scatter plots shown in Figure 3, are widely accessible. As the number of dimensions grows, the browsable gallery [55, 56] and sequential navigation [15] do not scale well. Therefore, our framework provides two unique features to deal with large, complex, and high-dimensional data. First, we use statistical-driven components that characterize the data distributions, such as density, variance, and skewness (for 1D), shape and texture (for 2D), and convergence and line crossings (for nD). Second, we propose the use of four abstraction levels in our human-machine analytics: individual instances, regular binning, data-dependent binning, and most abstracted (such as min, max, and median). On the human side, this helps capture the user's level of interest in the data (individuals, groups, or overall trend). On the machine side, the framework automatically adjusts the level of abstraction in the recommended view to render a larger number of plots (which users can request), as the number of views can increase exponentially with the number of variables in the input data [34]. + + < g r a p h i c s > + +Figure 4: Flow chart of HMaViz: (a) Overview panel, (b) Recommended views projected on the four dimensions: statistical-driven features, abstraction level, type of multivariate analysis, and visual encodings, (c) Guided navigation view and expanded view. + +§ 4.1 COMPONENTS OF THE HMAVIZ + +Figure 4 shows a schematic overview of HMaViz. After data is fed into the system, the statistical-driven features are calculated and plotted on the overview panel in Figure 4(a). From the overview panel, heuristically defined initial views are shown (i.e., ticks plot, bar chart, area chart, and box plot for 1D). 
Recommended views are projected on the four dimensions shown in Figure 4(b) (statistical-driven features, abstraction level, type of multivariate analysis, and visual encoding) to convey users' interest. Users may change one or more dimensions in the interface, which may lead to partial or full updates of the recommendation interface. For example, if users are interested in a more abstracted representation, the guided navigation, focus view, and expanded view in Figure 4(c) need to be updated, while increasing the number of variables in the analysis might trigger updates of the overview and exemplary plots. The visual features for the next level of analysis are calculated as well (via another web worker). + +§ 4.1.1 THE OVERVIEW PANEL + +Figure 5(a) summarizes the input data in the form of a biplot [19], which allows users to explore both data observations and data features in the same 2D projection. From the center point of the panel (the intersection of all connected lines), the horizontal axis represents the first principal component, while the vertical axis represents the second principal component. Each observation in the data set is represented by a small blue circle positioned relative to the principal components. Each vector is color encoded [39]. + + < g r a p h i c s > + +Figure 5: The overview panel of HMaViz: (a) 1D Biplot (b) 2D Biplot. + +Figure 5(b) shows nine feature vectors of 2D projections, including convex, sparse, clumpy, striated, skewed, stringy, monotonic, skinny, and outlying [53]. Example plots are chosen based on their values on each of the statistical measures to convey possible patterns in the data. The position of each thumbnail is relative to the principal components. Users can start their analysis process by picking a variable from a list, from the overview panel, or from the exemplary plots explained next. 
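As an illustrative sketch of the projection underlying such a biplot (a simplified construction of ours, not the HMaViz source), the following centers the data, forms the sample covariance matrix, and extracts the top two principal axes by power iteration with deflation; observations are plotted at their scores on these axes, and each variable's loadings give its vector in the panel.

```javascript
// Top-k principal axes of a data matrix (rows = observations) via
// power iteration with deflation. Illustrative only; assumes the data
// are well-conditioned enough for the iteration to converge.
function principalAxes(data, k = 2, iters = 200) {
  const n = data.length, d = data[0].length;
  // Center each column.
  const mean = Array.from({ length: d }, (_, j) =>
    data.reduce((s, row) => s + row[j], 0) / n);
  const X = data.map(row => row.map((v, j) => v - mean[j]));
  // Sample covariance matrix C = X^T X / (n - 1).
  const C = Array.from({ length: d }, (_, a) =>
    Array.from({ length: d }, (_, b) =>
      X.reduce((s, row) => s + row[a] * row[b], 0) / (n - 1)));
  const axes = [];
  for (let c = 0; c < k; c++) {
    let v = Array.from({ length: d }, () => 1 / Math.sqrt(d));
    for (let t = 0; t < iters; t++) {
      const w = C.map(row => row.reduce((s, cij, j) => s + cij * v[j], 0));
      const norm = Math.hypot(...w);
      v = w.map(x => x / norm);
    }
    // Rayleigh quotient gives the eigenvalue used for deflation.
    const Cv = C.map(row => row.reduce((s, cij, j) => s + cij * v[j], 0));
    const lambda = v.reduce((s, vi, i) => s + vi * Cv[i], 0);
    axes.push(v);
    for (let a = 0; a < d; a++)
      for (let b = 0; b < d; b++) C[a][b] -= lambda * v[a] * v[b];
  }
  // Scores: each observation projected onto the principal axes.
  const scores = X.map(row =>
    axes.map(axis => row.reduce((s, x, j) => s + x * axis[j], 0)));
  return { axes, scores };
}
```

The `scores` give each observation's 2D position in the biplot, while the per-variable entries of `axes` give the direction of each variable vector.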
+ +§ 4.1.2 THE EXEMPLARY PLOTS + +To avoid overwhelming viewers with a large number of generated plots, we automatically select exemplary plots that are prominent on certain visual features, such as skewness, variance, and outliers [12] (for univariate data) and correlations [46], clusters [4], stringy, and striated [53] (for bivariate data), among other high-dimensional features [14, 17]. We also heuristically associate the visual features and abstraction levels in these four exemplary plots. The predefined associations are color-coded in our catalog in Figure 3. + +For univariate data, HMaViz heuristically pairs four levels of visual abstraction with four data-distribution features in the initial view: low-outlier, medium-multimodality, fair-variance, and high-skewness. The first abstract visual type (as depicted in Figure 6(a)) is the ticks plot of the variable SlugRate, which has the highest outlier score (in the top right corner of the plot). The ticks plot is at the lowest abstraction level because every single data instance (including outliers) is plotted and can be selected (to see its details). This capability is desirable in many application domains, as outlier detection is one of the critical tasks in visual analysis [7]. + +We use the bar chart as the recommended visual abstraction for the second level (as illustrated in Figure 6(b)) because we want to highlight the skewness of the data distribution, which is calculated from the values of a given dimension. In contrast to regular binning, data-dependent binning starts where the actual data are located and creates a smooth representation of the distribution density [24]. An area chart is used for this purpose (in 1D) as the fair abstraction type (in Figure 6(c)). 
The box plot is recommended as the visual encoding for the highest abstraction level in Figure 6(d), as it is a standardized way of displaying the data distribution of each variable based on the five-number summary: minimum, first quartile, median, third quartile, and maximum [42]. We try to keep this magic number consistent across the highest-level abstractions (for multivariate analysis) in our framework. For example, our 2D contours (the most abstracted bivariate representation in HMaViz) are separated into five different layers. + + < g r a p h i c s > + +Figure 6: Univariate exemplar plots for the Baseball data: (left) Declarative language and (right) Visual representations. + +§ 4.1.3 THE GUIDED NAVIGATION + +To support ordering, filtering, and navigation in high-dimensional space, we provide focus-and-context explorations. In particular, thumbnails and glyphs [16] are used to provide high-level overviews, such as Skeleton-Based Scagnostics [34] for multivariate analysis, and to support focus-and-context navigation (highlighting the subspace the user is looking at). The guided navigation view provides a high-level overview of all variables and allows users to explore all possible combinations of variables. The view is color-coded by the selected statistical-driven features, and the plots are ordered so that users can quickly focus on the more important ones [1]. Within the guided navigation panel, users can change abstraction levels as well as the visual pattern of interest. + + < g r a p h i c s > + +Figure 7: The navigation panel for 33 variables ordered and colored by (left) pairwise correlations and (right) Striated patterns. + +Voyager [55] and Draco [36] provide interactive navigation of a gallery of generated visualizations. These systems support faceting into trellis plots, layering, and arbitrary concatenation. 
Our HMaViz incorporates faceted views into the expanded panel and also supports more flexible and complicated layouts such as biplots, scatterplot matrices (as depicted in Figure 7), and parallel coordinates to provide visual guidance via data characterization methods [18, 54]. + +§ 4.1.4 THE EXPANDED VIEW + +From one-dimensional to two-dimensional visualizations. Figure 8(a) shows the recommended scatterplot when the current visualization is a ticks plot (since every instance can be brushed in both plots). If the focused plot is a bar chart, the suggested chart is the 2D hexagon bins, as depicted in Figure 8(b) (since both are at the medium abstraction level). When the area plot is used, the recommended representation is the 2D leader plot, as depicted in Figure 8(c). The leaders (balls) are representative data points that group other data points within a predefined radius neighborhood [24]. The intensity of a ball represents the density of its cluster, while the variance of its members defines the ball size. Finally, we use the contour plot as the next-level recommendation for the box plot, as shown in Figure 8(d), where the second dimension is selected based on the currently selected visual score. + + < g r a p h i c s > + +Figure 8: Visualization recommendation from 1D to 2D and from 2D to nD: Plots in the last row are the highest abstraction. Notice that variables in each plot are different. + +From two-dimensional to higher-dimensional graphs. The rightmost column in Figure 8 shows examples of equivalent higher-dimensional representations of the ones on the left. Notice that in the right panel of Figure 8(c), the closed bands (groups) have different widths, as the variance in these groups varies on each dimension. Figure 8(d) presents our new radar bands, which summarize the multivariate data across many dimensions. 
In particular, the inner and outer borders of the bands are the first and third quartiles of each dimension; the middle black curve travels through the medians of the dimensions. + +§ 4.1.5 THE LEARNING AGENT + +We apply reinforcement learning in our framework to learn and provide personalized recommendations to individual users via their interactions and preferences. As discussed in Section 3, we implemented our learning agent as a separate library before deploying it to our target application. This separation makes the agent applicable to different recommendation tasks in our application, and it can be easily replaced by a different algorithm without impacting the overall system architecture. + +Figure 9 shows the main components of our learning agent implementation. The first step is to create a new agent, which can then learn online or offline. In online learning mode, the agent first observes the user context and combines it with the data of the visualizations available for recommendation. It then uses its learned knowledge to estimate a score for each of the available graphs and recommends to the user the items with the highest estimated scores. After recommending, the agent observes the rewards from the user. In our case, the reward combines (1) whether the user clicks on the graph and (2) the number of minutes the user spends exploring that graph. After obtaining the actual rewards for the recommended visualizations, the agent updates its current knowledge from this trial. + + < g r a p h i c s > + +Figure 9: Components of the Contextual Bandit learning library. + +In offline training mode, on the other hand, there is a recorded set of $T$ trials; each trial $t$ contains a set of $K$ graphs with $d$ corresponding context features and also the corresponding rewards for that trial. 
The agent makes use of this offline dataset and runs through each trial to refine its reward estimates for any given user context. Finally, it is crucial to be able to save, transfer, and reload the agent's learned knowledge, so we provide options to do so. Transferable knowledge and offline learning together allow us to change the agent's algorithm and let the agent learn from data coming from different sources. + +§ 4.2 IMPLEMENTATION + +HMaViz is implemented in JavaScript with Plotly, D3.js [5], and AngularJS. The online demo, video, source code, and more use cases can be found on our GitHub project: https://git.io/Jv3@Y. The current learning agent (called LinUCBJS) is implemented in JavaScript, and Firebase [37] is used to store data for the agent. Figure 10 depicts the steps involved in one trial of the agent's online learning in the current version. First, the agent reads the user profile and the features of the candidate graphs. It then recommends a set of four graphs with corresponding IDs, which are presented to the user. The system then monitors whether the user clicks on the recommended graphs and how long the user spends exploring them to generate the corresponding rewards (in minutes). Finally, the agent uses these rewards to update its knowledge. + + < g r a p h i c s > + +Figure 10: LinUCBJS deployment in the HMaViz implementation. + +§ 5 CASE STUDY + +To evaluate the effectiveness of HMaViz, we conducted a study with ten users from various domains: three with statistical expertise, two from the psychology department, two from civil engineering, and three from the agriculture department. No member of the project team was included in this study, to avoid bias. We varied the users' backgrounds so that our system would be viewed from multiple perspectives. 
+ +The goal of this study is to capture users' opinions of the current system. The advantages and drawbacks of using the learning interface are addressed and iteratively refined during the experiment. Qualitative analysis is mainly used in this study. We separate the experiment into three phases: + + * Phase I: Understanding the interface and how the visual framework works. Before users start exploring the dataset, they are introduced to the basic components and elements of the system and how they are connected. Navigation from one panel to another is illustrated. This phase is anticipated to take approximately 20 minutes. + + * Phase II: Exploring datasets. This phase involves users in the active experiment with 14 datasets, such as Iris, Population, Cars, Jobs, Baseball, NRCMath, and Soil profiles. Most of the datasets are publicly available on the internet. We try to vary the datasets as much as possible while maintaining user familiarity with some of them. + + * Phase III: Gathering information. Information is gathered in both previous phases (Phase I and Phase II) for analysis and post-study. After users finished their experiment, we provided them with some open-ended and focused questions. All data is used for analysis. + +§ 5.1 FINDINGS AND DISCUSSIONS + +Interesting trends or patterns. Each user has a different view of the data, so patterns that are interesting to some users may not be to others. We filter out the least-mentioned interesting cases and keep only the top favorites in our report. Fig. 11 illustrates some typical findings reported by most users. In Fig. 11(a), users found an interesting pattern in the Baseball dataset: an outlying point, represented by a small red circle. The outlier can be recognized easily in the one-dimensional chart (i.e., SlugRate), but in some situations such a point is no longer an outlier when considered in high-dimensional space. 
In our case, however, the outlier still stands out after adding one or two dimensions. + + < g r a p h i c s > + +Figure 11: Exploratory analysis of detecting outliers in three navigation steps from overview, focus view, and recommended views using statistical-driven measures: a) The Baseball data b) The soil profiles. + +Regarding the visual abstractions, the three agriculture users were interested in the medium level of abstraction, especially the 2D hexagon plots, since the rendering is much faster. The visual features received unequal attention from users. For example, many users started by finding outliers in the data and looking for the best projections to highlight and compare them. + +§ 5.2 EXPERT FEEDBACK + +The experts come from various domains, and we observed that their interests vary accordingly; therefore, the types of analysis (and visual encodings, statistical measures, and levels of abstraction) vary significantly. For example, the three soil scientists were mostly interested in 2D projections and the correlations of variables. They specifically mentioned that they rarely go beyond 2D in their type of analysis and are unfamiliar with 3D charts and radar charts. They found the guided scatterplot matrix, colored and ordered by our Monotonicity measure, useful and visually appealing. They could easily make sense of the correlations of different chemical elements (such as Ca, Si, and Zn) and thereby differentiate and predict the soil classification. In terms of visual abstraction, they all agreed that the 2D contour map could quickly provide an overview of chemical variations. 
In brief, for the soil scientists, we can project their interests onto our framework dimensions as: + + * Type of analysis = bivariate (2D scatterplot) + + * Level of abstraction = high + + * Visual encoding = area (contour) + + * Statistical feature = monotonicity (Pearson correlation) + +Besides the positive feedback, the experts also pointed out limitations of HMaViz. For instance, the 2D visual features (or Scagnostics) are not a user-friendly option. Adding an animated guideline or graphical tutorial would be helpful here, so that users can go back and forth to find out which visual features are available in the current analysis (to guide their exploration process) and learn the meaning and computation of each measure. + +§ 6 CONCLUSION AND FUTURE WORK + +This paper presents HMaViz, a visualization recommendation framework that helps analysts explore, analyze, and discover data patterns, trends, and outliers, and that facilitates guided exploration of high-dimensional data. The user indicates which of the presented plots, abstraction levels, and visual features are most interesting for the given task; the system learns the user's interest and presents additional views. The extracted knowledge is used to suggest more effective visualizations in the next step of the user's interaction. In summary, we provide view recommendation (with different types and complexities) and guidance in the data exploration process. Our technique is designed to scale with large and high-dimensional data. Also, the learning agent is designed as a separate library with a clear set of interfaces, so it is open to incorporating new learning algorithms in the future. Especially as more user data become available, different learning algorithms should be explored and validated for personalized recommendations. 
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/UjWOHNd93Qd/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/UjWOHNd93Qd/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..1e6c460b78a7f9cf985d7710afc3291e1692393b --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/UjWOHNd93Qd/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,201 @@ +## A graph-based visualization for monitoring of high-performance computing system + +![01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_0_220_392_1358_563_0.jpg](images/01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_0_220_392_1358_563_0.jpg) + +Figure 1: The main components of JobViewer: (a) timeline, (b) main visualization, (c) control panel, (d) table of jobs’ information. + +## Abstract + +Visualization aims to strengthen data exploration and analysis, especially for complex and high-dimensional data. High-performance computing (HPC) systems are typically large and complicated instruments that generate massive performance and operation data. Monitoring the performance and operation of HPC systems is a daunting task for HPC admins and researchers due to their dynamic nature. This work proposes a visual design using the idea of the bipartite graph to visualize the structural and metric data collected when monitoring HPC clusters. We build a web-based prototype, called JobViewer, that integrates advanced methods in visualization and human-computer interaction (HCI) to demonstrate the benefits of visualization in real-time monitoring of an HPC system at a university. We also present real use cases and a user study to validate the efficiency and highlight the drawbacks of the current approach.
+ +Index Terms: Human-centered computing-Visualization-Visualization application domains-Visual analytics; + +## 1 INTRODUCTION + +High-performance computing (HPC) systems can provide powerful computing resources for many scientific fields, such as quantum chemistry, bioinformatics, high-energy physics, and many others. These typically complex research instruments, ranging from hundreds to thousands of computing nodes, require substantial monitoring effort to ensure a good trade-off between cost and performance. One approach that can strengthen monitoring efficiency is to apply visualization and human-computer interaction (HCI) to the operational data. HCI theory relies on cognitive principles to design the user interface and interactive activities [11], allowing HPC administrators to gain necessary information quickly and intuitively. This work aims to apply the advantages of HCI to monitoring data at an HPC center [22] to demonstrate the benefits of visualization in monitoring activities. + +Monitoring tasks vary significantly across administrators and their purposes, but the usual activities are to look at the current states of the system or to analyze historical data for an overview of long-term trends [1]. Administrators often consider compute node health because they can learn what is happening in the system by analyzing memory usage, CPU usage, temperature, etc. They may also pay attention to job state information to assess the operation of the system. It is valuable to analyze both users' job information and compute node health together, because the combined view can help administrators understand how jobs utilize the system's resources and reveal the relations between users' activities and the system's state. Moreover, knowledge about regular user and job behaviors can help administrators improve their HPC performance.
Therefore, we target users' jobs and compute node health in developing an interactive web-based prototype, called JobViewer, that provides a novel monitoring perspective on an HPC system. + +The main contributions of this work are threefold. + +- We apply HCI's advantages to visualize users' jobs and node health monitoring of an HPC system by building a web-based prototype, namely JobViewer, for this purpose. + +- We illustrate the benefits of monitoring both of the above aspects and their relations with some real use cases. + +- We carry out a user study to verify whether the approach and designs are suitable for practical use. + +The remainder of this paper is structured as follows. The next section covers related work; Section 3 describes the design of the proposed web-based prototype; Section 4 discusses real use cases of the visualizations, while Section 5 presents the user study of the approach. Finally, Section 6 discusses the results of this work, and Section 7 summarizes it. + +## 2 RELATED WORKS + +### 2.1 HPC performance monitoring + +HPC monitoring is not a new problem, so there are several well-known performance analysis tools, both commercial and open-source. Ganglia is an open-source distributed monitoring system for clusters and grids. Ganglia's strength is its scalability, with measurements showing that it can scale to clusters of up to 2000 nodes and federations of up to 42 sites [14]. This tool uses RRDtool [19] to store and visualize time series data. Nagios [8] is another tool that many organizations utilize. It is suitable for monitoring a variety of servers and operating systems with industrial standards. The tool has two versions: a commercial one (Nagios XI) and an open-source one (Nagios Core). The commercial version has a web interface and performance graphing [3].
However, there are some issues with traditional Nagios, including: + +- Nagios requires human intervention for the definition and maintenance of remote host configurations in Nagios Core. + +- Nagios requires the Nagios Remote Plugin Executor on the Nagios server and each monitored remote host. + +- Nagios mandates the Nagios Service Check Acceptor (NSCA) on each monitored remote host. + +- Nagios also requires specific check agents (e.g., SNMP) on each monitored remote host. + +Besides, CHReME [16] provides a web-based interface for monitoring HPC resources that moves non-experts away from conventional command lines. This tool, however, focuses on basic tasks that can also be found in the Nagios engine. Splunk [5] is another software platform for mining and investigating log data for system analysts. Its most significant advantage is the capability to work with multiple data types (e.g., CSV, JSON, or other formats) in real time. It has been used, and has shown consistent performance, in the studies [21,26]. However, Greenberg and Debardeleben [12] pointed out that Splunk was not feasible for searching the vast amount of data generated every day (e.g., hundreds of gigabytes) due to slow performance. Grafana [9] provides a vibrant interactive visualization dashboard that enables users to view metrics via a set of widgets (e.g., text, tables, temporal data). Grafana defines placeholders (i.e., arrays) that automatically generate widgets based on their values. This is also a limitation of Grafana: customized visualizations (such as parallel coordinates [20] and scatterplot matrices [24] for analyzing high-dimensional data) are not supported. This visualization package has been used in [4,12] due to its multiple data store features. Windows Azure Diagnostics and Amazon CloudWatch [13] are also common tools for performance monitoring purposes. A survey of these tools [3] gives more details of interest.
+ +### 2.2 Time Series Visualizations + +One crucial factor to consider for visualization is the data structure. This work investigates a high-dimensional temporal dataset with four dimensions: 1) user and job, 2) compute node, 3) health metrics, and 4) time. In other words, we consider the data of 467 compute nodes at an HPC system [22]. Each compute node has nine health metrics, including two CPU temperatures, inlet temperature, four fan speeds, memory usage, and power consumption. Each metric of a compute node is recorded every 5 minutes to form a time series. Moreover, users utilize the compute nodes to run their jobs. If we ignore the user and job dimension, this data becomes panel data, and there are various ways to visualize it. One example is TimeSeer [6], which transforms panel data into time series of Scagnostics. Scagnostics are measures for scoring point distributions in scatterplots [25]. The main idea of TimeSeer is to use these measures as a signal to quickly identify time steps with rare events. Another method is the use of connected scatterplots for displaying the dynamic evolution of pairwise relations between variables in the data [18]. Besides, parallel coordinates can also be extended for panel data [2,7,23]. However, these extensions of common projections cannot visualize the relationships between users' jobs and the compute nodes. This is why we propose a novel visualization design for the dataset with four dimensions; the detailed discussion follows in the next section. + +## 3 DESIGN DESCRIPTIONS + +Based on our weekly discussions with the domain experts, the HPC visualization requirements span the following dimensions: HPC spatial layout, temporal domain, resource allocations and usage, and system health metrics such as CPU temperature, memory usage, and power consumption.
We therefore focus on the following design goals: (D1) provide a spatial and temporal overview across hosts and racks; (D2) provide a holistic overview of the system on a health metric at a selected timestamp; (D3) highlight the correlation of system health and resource allocation information within a single view; and (D4) allow system administrators to drill down to a particular user/job/compute node to investigate the raw time series data for system debugging purposes. Figure 1 depicts the four main components of JobViewer: the timeline, main visualization, control panel, and job table. Let us first consider the timeline in Figure 1(a). We use animation to illustrate the temporal flow of the dataset. Animation has positive impacts on cognitive aspects such as interpreting, comparing, or focusing [17]. Although this method cannot convey the whole temporal information at once, it is convenient for both uses: analyzing historical data and visualizing the system live. The timeline has another benefit in quickly investigating time steps of interest. + +Figure 1(b) shows the main visualization at a particular time step. The design is based on the idea of the bipartite graph with two disjoint sets of vertices: one set contains the users, and the other consists of the compute nodes. A link between a user and a compute node implies that the node is running at least one job of that user (design goal D3). We design this graph with all users in a central list and all compute nodes surrounding it. The compute nodes are divided into ten racks following their actual spatial locations. A benefit of graph-based visualization is that it is easy to highlight the links between a user and a compute node. For instance, hovering over a user or a compute node highlights the corresponding vertex's links. The graph-based design also allows us to apply a simple visual method for illustrating the compute nodes' health metrics.
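The user-to-node links of the bipartite design, and the per-node job counts that later drive the outline thickness, can be sketched in a few lines. The record format below is a hypothetical stand-in for the scheduler snapshot that JobViewer actually consumes:

```python
# Hedged sketch of the bipartite user-node structure behind the main view.
# The (user, job_id, node) record format is an assumption for illustration;
# JobViewer is a web prototype, so this mirrors the idea, not its code.
from collections import defaultdict

allocations = [  # hypothetical snapshot of the scheduler state
    ("user0", "job101", "rack1-node03"),
    ("user0", "job101", "rack2-node17"),
    ("user1", "job205", "rack1-node03"),  # node shared by two users
    ("user2", "job310", "rack4-node33"),
]

links = defaultdict(set)          # user -> nodes: edges of the bipartite graph
jobs_per_node = defaultdict(set)  # node -> job ids: drives outline thickness
for user, job, node in allocations:
    links[user].add(node)
    jobs_per_node[node].add(job)

print(sorted(links["user0"]))              # ['rack1-node03', 'rack2-node17']
print(len(jobs_per_node["rack1-node03"]))  # 2 jobs -> thicker outline
```

Hovering a user vertex then amounts to highlighting exactly the edges in `links[user]`, which is why this interaction stays cheap even as the node count grows.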
JobViewer uses color to display the value of a chosen metric. The mapping from color to value is depicted by the color scale on the control panel tab, as seen in Figure 1(c). We use these simple visual presentations to display all four dimensions of the dataset mentioned in the previous section. + +Besides the above designs, we also implement others to give related information and improve cognition. It is easy to recognize a new user appearing on the central list; however, if a current user, who already has jobs running somewhere in the system, starts a new job, it is difficult to identify. When this happens, we highlight the user at the corresponding time step by its outline and by the color of the links to the compute nodes allocated to the new job. Regarding the compute nodes, one may wonder how many jobs a compute node is running. We visualize the number of jobs running on a compute node by the thickness of its outline. Additionally, if the chosen metric's value on a compute node varies significantly over two consecutive time steps, we use a blur effect to highlight the sudden change. + +Figure 1(c) shows the control panel and all options of the drop-down menus. There are two options for the central list: one displays the user, and the other the job name. The next function is ranking, which sorts all users on the list by a chosen option. The three ranking options are the number of jobs, the number of compute nodes utilized by the user or job, and the selected health metric. We can also select one of the nine metrics to visualize via the compute nodes' color in the visualizing tab. Two more options beyond health metrics in the visualizing tab are user/job name and radar view [15]. If we select the user/job option, all compute nodes are colored according to their users/jobs. If we select the radar view, JobViewer visualizes every compute node as a radar chart representing all its health metrics.
+ +However, if we observe the compute nodes via their radar charts, it is difficult to recognize the shapes because their size is relatively small. We found that the use of color is more effective for cognitive activities. This is why we apply clustering algorithms to group the compute nodes based on their health metrics and then color each cluster differently. Every radar chart representing a compute node carries the color of its group. This method also improves the analysis process because it reduces 467 compute nodes to a much smaller number of patterns of health states. We can quickly grasp the characteristics of the system states or detect strange behaviors of some compute nodes. The two clustering algorithms integrated into JobViewer are k-means and the leader algorithm [10]. + +Another interaction with the main visualization is a mouse click to show the corresponding table of jobs' information, as seen in Figure 1(d). In the table, we can find all information related to the jobs, such as the job's identity, job name, user, number of cores the job is using, etc. (design goal D4). Moreover, we also display the time series of the selected health metric on the clicked compute node, and we highlight the period when the job runs on that compute node with a grey area. This visualization helps us understand what happens on compute nodes while particular users are using them. As we will show in the next section, this feature may give information about the relations between jobs or users and compute nodes' health states. + +To sum up, JobViewer designs the visualization around the idea of a bipartite graph. It also integrates some simple visual methods and clustering algorithms to support cognition in the analysis process. The next sections demonstrate how we can use JobViewer to monitor an HPC system. + +## 4 USE CASES + +### 4.1 Job allocation + +The first use case focuses on how JobViewer provides information about job allocations.
Figure 2(a) shows a snapshot of the main visualization on 08/14/2020 at 5:50 PM. Color distinguishes different users, along with their related compute nodes. If a compute node runs jobs of multiple users, it shows all corresponding colors. If a compute node is white, no user's job is running on it. At 5:50 PM, there are nine white compute nodes located in six different racks. Ten minutes later, user0's job starts, as highlighted by the black outline and links in Figure 2(b). It takes 1080 cores, or 30 compute nodes (each compute node has 36 cores). The system allocates seven out of the nine white compute nodes to this job, and there are still two white compute nodes at 6:00 PM: one on rack 2 and another on rack 9. Figure 2(c) highlights all 30 compute nodes running user0's job. Of these 30 compute nodes, 18 run two jobs and 12 run only one job. We checked and found that most of the earlier jobs on those 18 compute nodes consume all 36 cores at 5:55 PM. This means some of the compute nodes utilize up to 72 cores, including virtual cores, at 6:00 PM. These figures reveal information about job scheduling: although there are two unused compute nodes and the job requires many cores, the system reuses compute nodes already running another job rather than allocating the two unused ones to the job. This use case illustrates how JobViewer can help HPC administrators monitor job scheduling. + +![01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_2_952_152_668_1706_0.jpg](images/01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_2_952_152_668_1706_0.jpg) + +Figure 2: Snapshots of the main visualization at (a) 5:50 PM and (b) 6:00 PM on 08/14/2020. (c) If we click on user0, all its links and related compute nodes are highlighted. + +### 4.2 Clustering of health states + +This use case investigates the health monitoring aspect of JobViewer.
As mentioned in Section 3, we use color to depict values of a selected health metric from the list of nine. Another option to observe all nine health metrics in a single view is to display each compute node as a radar chart. The radar charts can illustrate all health metrics; however, their size is too small for users to recognize quickly. The clustering algorithm can overcome this issue because it groups all compute nodes into a small number of clusters. + +![01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_3_151_470_720_526_0.jpg](images/01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_3_151_470_720_526_0.jpg) + +Figure 3: (a) Result of the leader algorithm for all 467 compute nodes on 08/18/2020 at 11:30 AM. (b) Visualization of the compute nodes in rack 4, with radar charts representing the compute nodes. The color of each radar chart depicts its group in the result of the leader algorithm. + +Figure 3(a) gives the result of the leader algorithm for the system on 08/18/2020 at 11:30 AM. The algorithm clusters the 467 compute nodes into 5 groups with different patterns of health states. The blue group has 450 compute nodes, with medium values of the two CPU temperatures, the four fan speeds, and power consumption. All of its compute nodes have low inlet temperature, and five of them have high memory usage. We can also see that 13 compute nodes lack fan speed information, while only four lack CPU temperature information. The red group has four compute nodes with a common state of high fan speed and CPU2 temperature. Figure 3(b) shows these four red compute nodes in rack 4. We investigated their CPU2 temperatures and found that only compute node 4.33 is hot in its CPU2. Figure 4(b) verifies this statement, as the color of compute node 4.33 is red while that of the other three is light green. We can also read the CPU1 temperatures of these four compute nodes from Figure 4(a): judging by their colors, their CPU1 temperatures are all low.
One possible explanation for this event is their location: the three compute nodes (4.34, 4.35, and 4.36) may sit near compute node 4.33 and feel its heat, so their fans must work harder to cool the CPUs. + +### 4.3 Relation between job and health state + +This use case clarifies the relations between jobs and the health states of compute nodes. We first look at the time series of the CPU2 temperature of compute node 4.33 in Figure 5. The unit of temperature is degrees Celsius, and the time is in August 2020. The vertical dashed line indicates the time step at which we stop the timeline to get the time series: 08/18/2020 at 11:30 AM, when we investigated the previous use case. The colored areas highlight periods when a job is running on the compute node. We use text annotations, in colors matching the corresponding areas, to denote users and their jobs. There are five long jobs on compute node 4.33 over the whole temporal period, none of which overlap. The CPU2 temperature is high while user1 runs his/her job, but the value suddenly drops when user10's job starts. The same jump or drop happens whenever there is a switch of users. Therefore, it is reasonable to state that the CPU2 temperature of compute node 4.33 depends on the job running on it. If we look at the CPU1 temperature of compute node 2.60 in Figure 6, we can observe a similar relation: some jobs are responsible for high CPU temperatures, while others do not cause hot CPUs. + +![01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_3_928_148_720_506_0.jpg](images/01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_3_928_148_720_506_0.jpg) + +Figure 4: The visualization of all compute nodes in rack 4. The color indicates (a) CPU1 temperature and (b) CPU2 temperature. Red means a high value, yellow depicts a medium temperature, and green represents a low value.
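The leader algorithm [10] behind the clustering in Section 4.2 admits a compact single-pass sketch: each compute node joins the first existing cluster whose leader lies within a distance threshold, or else founds a new cluster. The threshold and the toy two-metric vectors below are assumptions for illustration, not JobViewer's configuration:

```python
# Hedged sketch of the leader clustering algorithm [10]: a single pass in
# which each point joins the first leader within `threshold`, or becomes a
# new leader itself. Toy health vectors stand in for the nine real metrics.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def leader_cluster(points, threshold):
    """Return a list of clusters, each a list of indices into `points`."""
    leaders, clusters = [], []
    for i, p in enumerate(points):
        for k, lead in enumerate(leaders):
            if euclidean(p, lead) <= threshold:
                clusters[k].append(i)
                break
        else:  # no leader close enough: this point founds a new cluster
            leaders.append(p)
            clusters.append([i])
    return clusters

# Toy per-node vectors (e.g., scaled CPU temperature and fan speed):
nodes = [(0.2, 0.3), (0.25, 0.32), (0.9, 0.8), (0.21, 0.29)]
print(leader_cluster(nodes, threshold=0.1))  # [[0, 1, 3], [2]]
```

Its single pass over the data is what makes it cheap enough to re-run at every 5-minute time step, at the cost of being sensitive to the input order and the chosen threshold.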
+ +![01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_3_926_836_719_373_0.jpg](images/01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_3_926_836_719_373_0.jpg) + +Figure 5: The time series of the CPU2 temperature of compute node 4.33. The colored areas highlight the periods when the user of the same color runs his/her job. The vertical dashed line indicates the time step at which we take the figure: 08/18/2020 at 11:30 AM. + +Can we use these relations to investigate the reason for the irregularly hot CPU2 of compute node 4.33, as mentioned in the previous use case? If we compare the users and jobs running on compute nodes 4.33 and 2.60 on 08/18/2020 at 11:30 AM, each runs exactly one job of exactly one user. The CPU2 temperature of compute node 4.33 under this job is also higher than under other users, such as user1 and user13. This job is suspicious. However, it is impossible to conclude definitively whether this job is the cause of the heat in CPU2 of compute node 4.33 or whether the compute node has a problem itself. What JobViewer can show administrators is the monitoring information; if they want to find the true reasons for any irregular event, they should carry out further investigations to determine the real causes. + +![01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_4_153_151_715_371_0.jpg](images/01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_4_153_151_715_371_0.jpg) + +Figure 6: The time series of the CPU1 temperature of compute node 2.60. The colored areas highlight the periods when the user of the same color runs his/her job. The vertical dashed line indicates the time step at which we take the figure: 08/18/2020 at 11:30 AM. + +## 5 USER STUDY + +### 5.1 Overview + +We contact three volunteers, who have experience working with HPC systems (in both academia and industry), and carry out the user study through video calls. The user study begins with an introduction to JobViewer.
The introduction covers all features and functions of the four main components. After that, we ask whether the volunteers have questions or any confusion about the application. If they are still unclear about our web-based prototype, we explain again carefully to ensure they fully understand what they can achieve with JobViewer. The next step is to ask them to answer some questions and record their actions while they find the answers. Finally, we ask whether they have feedback on the application. + +We divide the questions into five tasks as follows: + +- Health metrics: This task checks whether volunteers can gain information about the compute nodes' health states. We require volunteers to select a health metric and name one compute node with a high value of the chosen metric. The volunteers also need to point out users linked to that compute node. + +- Job information: This task checks whether the volunteers know how to get information about a job. We ask them which user's job starts at a particular time step and for some compute nodes allocated to the job. + +- Clustering: This task requires the volunteers to understand how to use the clustering algorithms for detecting compute nodes with irregular health states. The volunteers need to identify and name all compute nodes with a given pattern of health metrics. + +- Metric vs. Job relation: This task asks the volunteers to use the time series of a selected metric of a specific compute node to comment on the dependency between the job and the selected metric. + +- General comments: This task collects the volunteers' impressions of using JobViewer to answer the questions in the above tasks. We want to know whether the application makes it easy for them to find the answers. Also, we ask whether they think this application is helpful for monitoring activities. + +For the tasks Clustering and Metric vs.
Job relation, we aim to ask the volunteers questions related to the use case of compute node 4.33, as described in Section 4.2. We hope they can see the benefits of our approach through these questions. One user notices some issues with compute node 4.33 and spends time investigating it. + +![01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_4_924_153_724_115_0.jpg](images/01963e85-1fbb-75c4-9d7c-b50cb9ff2f59_4_924_153_724_115_0.jpg) + +Figure 7: It is difficult to read the time information and reach a particular time step in the timeline's old design, so we implement a new component for the timeline. HPC administrators can click on the right/left buttons to move toward the time step of interest or type it in directly. + +### 5.2 Results + +Overall, two volunteers can quickly go through the questions and use the application fairly correctly, while one volunteer fails almost all the tasks. The first volunteer moves smoothly through all the questions, except for reaching a particular time step. It is difficult for him to observe the time on the timeline because the text is too small, and he has some trouble when trying to reach a specific time step as required by the questions. This volunteer is the only one who spends a lot of time on task 4, because he finds it interesting to seek the reason behind the irregular heat in CPU2 of compute node 4.33, as mentioned in Section 4.2. He moves the timeline to look at jobs in different periods and switches between various health metrics to understand the situation. He finally ends up with an assumption about the positions of the four compute nodes. The second volunteer also does well on the tasks, except for the first one: he says that red compute nodes correspond to high values of the selected metric, but he picks a yellow one to answer the question. For the task Metric vs. Job relation, he replies that a job consuming high CPU usage will cause high CPU temperature.
Regarding their opinions about whether JobViewer is helpful for monitoring activities, these two volunteers make a common comment: although JobViewer has a good design and is useful for a human building up an investigation, monitoring administrators may not want to spend much time on any irregular event step by step. They want to catch problems quickly, so they prefer a large monitor with all information and data. As for the last volunteer, his only correct answer is finding the node with a high value of memory usage. He comments that the application is hard for him to use because it is challenging to navigate the activities, and he does not understand the use of the time series and other features. + +The first two volunteers also give feedback on how to improve JobViewer. One point is the design of the timeline. Because the whole time interval is long, reaching a particular time step can be challenging. Besides, the text on the timeline may be too small for some users to read. This is why we improved our design with a new component above the timeline, as depicted in Figure 7. We can directly type the time of interest in this component; another way to reach a certain time step is to move near it and use the right/left buttons to step toward the correct position. Moreover, the second volunteer mentions the scalability of JobViewer, because some HPC clusters may have thousands of compute nodes. Regarding this point, we believe the graph-based design is suitable for scaling up to numbers much larger than the current 467 compute nodes, for the following two reasons. + +1. We use color as the primary visual signal to convey the health states of compute nodes. We can select individual HPC users or compute nodes to observe further details and time series. Color supports cognition even when there are many compute nodes in the system. + +2.
If we have more racks and compute nodes, we can expand the main visualization because it uses a graph-based design. For example, we can use multiple layers of racks. In this case, the links may look cluttered and crowded; however, this can be overcome with simple highlighting. + +## 6 Discussion + +The strength of JobViewer is its ability to display both system health states and resource allocation information in a single view. It is easy to obtain job allocation information in the main visualization, as depicted in Section 4.1. The clustering algorithm integrated into the application can quickly show the characteristics of the system health states. From these characteristics, we can point out any compute node with an irregular health state pattern and investigate the problems behind it. Section 4.2 describes a use case for this benefit. Moreover, JobViewer allows us to observe the relations between jobs and compute node health, as illustrated in Section 4.3. This feature highlights jobs' and users' behaviors so that we can understand them better, whether to improve the system or to find suspicious causes of a problem. One volunteer in the user study also found it interesting to use this feature to investigate the irregular heat in CPU2 of compute node 4.33. + +To use JobViewer efficiently, training is needed to learn the interactions and activities that yield the desired information. One volunteer out of three comments on the difficulty of using the application, while the other two can easily go through the tasks. Besides, the timeline's original design was not optimal, so we improved it, as shown in Figure 7 and discussed in Section 5.2. Another issue is that JobViewer is not a complete tool for HPC monitoring: we focus on the four design goals rather than on an efficient and comprehensive tool for commercial purposes. JobViewer is an application that demonstrates the advantages of visualization and human-centered computing in the complex task of HPC monitoring.
+ +## 7 CONCLUSION + +We have presented an application of human-centered computing to HPC monitoring data. The visualization design is based on the idea of a bipartite graph, which offers good scalability. The visualization can intuitively show an HPC system with its resource allocation information and system health states. We have demonstrated three use cases on historical data of an HPC cluster with 467 compute nodes to illustrate the proposed approach's usability. Besides, we have carried out a user study with three experts experienced in HPC monitoring. The results point out the strengths of JobViewer as well as weaknesses to address in the future. + +## REFERENCES + +[1] W. Allcock, E. Felix, M. Lowe, R. Rheinheimer, and J. Fullop. Challenges of HPC monitoring. In SC'11: Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-6. IEEE, 2011. + +[2] N. Barlow and L. J. Stuart. Animator: A tool for the animation of parallel coordinates. In Proceedings. Eighth International Conference on Information Visualisation, 2004. IV 2004., pp. 725-730. IEEE, 2004. + +[3] S. Benedict. Performance issues and performance analysis tools for HPC cloud applications: a survey. Computing, 95(2):89-108, 2013. + +[4] E. Betke and J. Kunkel. Real-time I/O-monitoring of HPC applications with SIOX, Elasticsearch, Grafana and FUSE. In International Conference on High Performance Computing, pp. 174-186. Springer, 2017. + +[5] D. Carasso. Exploring Splunk. CITO Research, New York, USA, 2012. + +[6] T. N. Dang, A. Anand, and L. Wilkinson. TimeSeer: Scagnostics for high-dimensional time series. IEEE Transactions on Visualization and Computer Graphics, 19(3):470-483, 2012. + +[7] A. Dasgupta, R. Kosara, and L. Gosink. Meta parallel coordinates for visualizing features in large, high-dimensional, time-varying data. In IEEE Symposium on Large Data Analysis and Visualization (LDAV), pp. 85-89. IEEE, 2012. + +[8] N.
Enterprises. Nagios. Website. + +[9] Grafana. The open platform for beautiful analytics and monitoring, 2019. https://grafana.com/. + +[10] J. A. Hartigan. Clustering Algorithms. John Wiley & Sons, Inc., New York, NY, USA, 1975. + +[11] H. R. Hartson. Human-computer interaction: Interdisciplinary roots and trends. Journal of Systems and Software, 43(2):103-118, 1998. + +[12] H. Greenberg and N. DeBardeleben. Tivan: A scalable data collection and analytics cluster. 2018. The 2nd Industry/University Joint International Workshop on Data Center Automation, Analytics, and Control (DAAC). + +[13] Amazon Inc. Amazon CloudWatch, 2012. http://aws.amazon.com/cloudwatch/. + +[14] M. L. Massie, B. N. Chun, and D. E. Culler. The Ganglia distributed monitoring system: design, implementation, and experience. Parallel Computing, 30(7):817-840, 2004. + +[15] M. Meyer, T. Munzner, and H. Pfister. MizBee: A multiscale synteny browser. IEEE Transactions on Visualization and Computer Graphics, 15(6):897-904, Nov. 2009. doi: 10.1109/TVCG.2009.167 + +[16] G. Misra, S. Agrawal, N. Kurkure, S. Pawar, and K. Mathur. CHReME: A web based application execution tool for using HPC resources. In International Conference on High Performance Computing, pp. 12-14, 2011. + +[17] K. Nakakoji, A. Takashima, and Y. Yamamoto. Cognitive effects of animated visualization in exploratory visual data analysis. In Proceedings Fifth International Conference on Information Visualisation, pp. 77-84. IEEE, 2001. + +[18] B. Nguyen, R. Hewett, and T. Dang. Congnostics: Visual features for doubly time series plots. 2020. + +[19] T. Oetiker. RRDtool. Website, February 2017. Retrieved December 14, 2020 from https://oss.oetiker.ch/rrdtool/index.en.html. + +[20] G. Palmas, M. Bachynskyi, A. Oulasvirta, H. P. Seidel, and T. Weinkauf. An edge-bundling layout for interactive parallel coordinates. In 2014 IEEE Pacific Visualization Symposium, pp. 57-64, March 2014. doi: 10.1109/PacificVis.2014.40 + +[21] J. Stearley, S.
Corwell, and K. Lord. Bridging the gaps: Joining information sources with Splunk. In SLAML, 2010. + +[22] TTU. High Performance Computing Center (HPCC) at Texas Tech University. Website, January 2020. Retrieved July 6, 2020 from http://www.depts.ttu.edu/hpcc/. + +[23] R. Wegenkittl, H. Löffelmann, and E. Gröller. Visualizing the behaviour of higher dimensional dynamical systems. In Proceedings. Visualization '97 (Cat. No. 97CB36155), pp. 119-125. IEEE, 1997. + +[24] L. Wilkinson, A. Anand, and R. Grossman. Graph-theoretic scagnostics. In Proceedings of the IEEE Information Visualization 2005, pp. 157-164. IEEE Computer Society Press, 2005. + +[25] L. Wilkinson, A. Anand, and R. Grossman. Graph-theoretic scagnostics. In IEEE Symposium on Information Visualization, 2005. INFOVIS 2005., pp. 157-164. IEEE, 2005. + +[26] P. Zadrozny and R. Kodali. Big Data Analytics Using Splunk: Deriving Operational Intelligence from Social Media, Machine Data, Existing Data Warehouses, and Other Real-Time Streaming Sources. Apress, 2013. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/UjWOHNd93Qd/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/UjWOHNd93Qd/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..7a7c3d6bfb5e07426aa092a9087e51d1bebedcbb --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/UjWOHNd93Qd/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,145 @@ +§ A GRAPH-BASED VISUALIZATION FOR MONITORING OF HIGH-PERFORMANCE COMPUTING SYSTEM + +Figure 1: The main components of JobViewer: (a) timeline, (b) main visualization, (c) control panel, (d) table of jobs’ information.
+ +§ ABSTRACT + +Visualization aims to strengthen data exploration and analysis, especially for complex and high-dimensional data. High-performance computing (HPC) systems are typically large and complicated instruments that generate massive performance and operation data. Monitoring the performance and operation of HPC systems is a daunting task for HPC admins and researchers due to their dynamic nature. This work proposes a visual design that uses the idea of the bipartite graph to visualize the structural and metric data gathered when monitoring HPC clusters. We build a web-based prototype, called JobViewer, that integrates advanced methods in visualization and human-computer interaction (HCI) to demonstrate the benefits of visualization in real-time monitoring of an HPC system at a university. We also present real use cases and a user study to validate the efficiency and highlight the drawbacks of the current approach. + +Index Terms: Human-centered computing-Visualization-Visualization application domains-Visual analytics; + +§ 1 INTRODUCTION + +High-performance computing (HPC) systems provide powerful computing resources for many scientific fields, such as quantum chemistry, bioinformatics, high-energy physics, and many others. These typically complex research instruments, ranging from hundreds to thousands of computing nodes, require substantial monitoring effort to manage the trade-off between cost and performance. One approach that can strengthen monitoring efficiency is to apply visualization and human-computer interaction (HCI) to the operational data. HCI theory relies on cognitive principles to design the user interface and interactive activities [11], allowing HPC administrators to gain necessary information quickly and intuitively. This work applies the advantages of HCI to monitoring data at an HPC center [22] to demonstrate the benefits of visualization in monitoring activities.
+ +Monitoring tasks vary significantly across administrators and their purposes, but the usual activities are to look at the current state of the system or to analyze historical data for an overview of long-term trends [1]. Administrators often consider compute node health because they can learn what is happening in the system by analyzing memory usage, CPU usage, temperature, etc. They may also pay attention to job state information to assess the operation of the system. It is valuable to analyze both users' job information and compute node health because the combined view can help administrators understand how jobs utilize the resources in the system and reveal the relations between users' activities and the system's state. Moreover, knowledge about regular users' or jobs' behaviors can help administrators improve their HPC performance. Therefore, we target users' jobs and compute node health in developing an interactive web-based prototype, called JobViewer, that provides a novel monitoring perspective on an HPC system. + +The main contribution of this work is threefold. + + * We apply HCI's advantages to visualize users' jobs and node health monitoring of an HPC system by building a web-based prototype, namely JobViewer, for this purpose. + + * We illustrate the benefits of monitoring both of the above aspects and their relations with some real use cases. + + * We carry out a user study to verify whether the approach and designs are suitable for practical use. + +This paper is structured as follows. The next section covers related work, and section 3 describes the design of the proposed web-based prototype. Section 4 discusses real use cases of the visualizations, while section 5 presents a user study of the approach. Finally, section 6 discusses the results of this work and section 7 summarizes it.
+ +§ 2 RELATED WORKS + +§ 2.1 HPC PERFORMANCE MONITORING + +HPC monitoring is not a new problem, so there are several well-known performance analysis tools, both commercial and open-source. Ganglia is an open-source distributed monitoring system for clusters and grids. Ganglia's strength is its scalability, with measurements showing that it can scale to clusters of up to 2000 nodes and federations of up to 42 sites [14]. This tool uses RRDtool [19] to store and visualize time series data. Nagios [8] is another tool that many organizations utilize. It is suitable for monitoring a variety of servers and operating systems with industrial standards. The tool has two versions: a commercial one (Nagios XI) and an open-source one (Nagios Core). The commercial version has a web interface and performance graphing [3]. However, there are some issues with traditional Nagios, including: + + * Nagios requires human intervention for the definition and maintenance of remote host configurations in Nagios Core. + + * Nagios requires the Nagios Remote Plugin Executor on the Nagios server and each monitored remote host. + + * Nagios mandates the Nagios Service Check Acceptor (NSCA) on each monitored remote host. + + * Nagios also requires specific agents (e.g., SNMP) on each monitored remote host. + +Besides, CHReME [16] provides a web-based interface for monitoring HPC resources that frees non-experts from conventional command lines. This tool, however, focuses on basic tasks that can also be found in the Nagios engine. Splunk [5] is another software platform for mining and investigating log data for system analysts. Its most significant advantage is the capability to work with multiple data types (e.g., CSV, JSON, or other formats) in real time. It has been used and shown consistent performance in prior studies [21, 26].
However, Greenberg and DeBardeleben [12] pointed out that Splunk was not feasible for searching the vast amount of data generated every day (e.g., hundreds of gigabytes of data) due to slow performance. Grafana [9] provides a vibrant interactive visualization dashboard that enables users to view metrics via a set of widgets (e.g., text, table, temporal data). Grafana defines placeholders (i.e., arrays) that automatically generate widgets based on their values. This is also a limitation of Grafana: customized visualizations (such as parallel coordinates [20] and scatterplot matrices [24] for analyzing high-dimensional data) are not supported. This visualization package has been used in [4, 12] due to its multiple data store features. Windows Azure Diagnostics and Amazon CloudWatch [13] are also common tools for performance monitoring purposes. A survey of these tools [3] gives more details of interest. + +§ 2.2 TIME SERIES VISUALIZATIONS + +One crucial factor to consider for visualization is the data structure. This work investigates a high-dimensional temporal dataset with four dimensions: 1) user and job, 2) compute node, 3) health metrics, and 4) time. In other words, we consider the data of 467 compute nodes at an HPC system [22]. Each compute node has nine health metrics, including two CPU temperatures, inlet temperature, four fans' speeds, memory usage, and power consumption. Each metric of a compute node is recorded every 5 minutes to form a time series. Moreover, users utilize the compute nodes to run their jobs. If we ignore the user and job dimension, this data becomes panel data, and there are various ways to visualize it. One example is TimeSeer [6], which transforms the panel data into time series of Scagnostics. Scagnostics are measures for scoring point distributions in scatterplots [25]. The main idea of TimeSeer is to use these measures as a signal to quickly identify time steps with rare events.
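The four-dimensional layout described above (node × metric × time, plus a job table linking users to nodes) can be sketched as a simple nested mapping. The names used here (`record`, `series`, `jobs`) are illustrative assumptions, not part of JobViewer's actual implementation:

```python
from collections import defaultdict

# Hypothetical sketch of the monitoring data layout described above:
# series[node][metric] -> list of (timestamp, value) samples taken
# every 5 minutes, plus a separate job table relating users to nodes.
METRICS = [
    "cpu1_temp", "cpu2_temp", "inlet_temp",
    "fan1_speed", "fan2_speed", "fan3_speed", "fan4_speed",
    "memory_usage", "power",
]  # the nine health metrics per compute node

series = defaultdict(lambda: defaultdict(list))

def record(node, metric, t, value):
    """Append one 5-minute sample for a node's metric."""
    series[node][metric].append((t, value))

# Jobs relate users and job names to the compute nodes they occupy
# (the fourth, user/job dimension of the dataset).
jobs = []

record("4.33", "cpu2_temp", 0, 67.0)
record("4.33", "cpu2_temp", 300, 68.5)
jobs.append({"user": "user1", "job": "j1001", "nodes": ["4.33"]})
```

Dropping the `jobs` table reduces this structure to ordinary panel data, which is the simplification TimeSeer and the parallel-coordinates extensions above operate on.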
Another method is the use of connected scatterplots for displaying the dynamic evolution of pairwise relations between variables in the data [18]. Besides, parallel coordinates can also be extended for panel data [2, 7, 23]. However, these common projection extensions cannot visualize the relationships between users' jobs and the compute nodes. This is why we propose a novel visualization design for the dataset with four dimensions; the detailed discussion is in the next section. + +§ 3 DESIGN DESCRIPTIONS + +Based on our weekly discussions with the domain experts, the HPC visualization requirements span the following dimensions: HPC spatial layout, temporal domain, resource allocations and usages, and system health metrics such as CPU temperature, memory usage, and power consumption. We therefore focus on the following design goals: (D1) provide a spatial and temporal overview across hosts and racks, (D2) provide a holistic overview of the system on a health metric at a selected timestamp, (D3) highlight the correlation of system health services and resource allocation information within a single view, and (D4) allow system administrators to drill down into a particular user/job/compute node to investigate the raw time series data for system debugging purposes. Figure 1 depicts the four main components of JobViewer: the timeline, main visualization, control panel, and job table. Let us first consider the timeline in Figure 1(a). We use animation to illustrate the temporal flow of the dataset. Animation has positive impacts on cognitive aspects such as interpreting, comparing, or focusing [17]. Although this method cannot convey the whole temporal context at once, it is convenient for both uses: analyzing historical data and visualizing the system live. The timeline has another benefit in quickly investigating time steps of interest.
+ +Figure 1(b) shows the main visualization at a particular time step. The design is based on the idea of the bipartite graph with two disjoint sets of vertices. One set contains users, and the other consists of the compute nodes. A link between a user and a compute node implies that the node is running at least one job of the user (design goal D3). We design this graph with all users in a central list and all compute nodes surrounding it. The compute nodes are divided into ten racks according to their actual spatial locations. A benefit of graph-based visualization is that it is easy to highlight the links between a user and a compute node. For instance, mousing over a user or a compute node highlights the corresponding vertex's links. The graph-based design also allows us to apply a simple visual method for illustrating the compute nodes' health metrics. JobViewer uses color to display the value of a chosen metric. The mapping from value to color is depicted by the color scale on the control panel tab, as can be seen in Figure 1(c). We use these simple visual presentations to display all four dimensions of the dataset mentioned in the previous section. + +Besides the above designs, we also implement others to give related information and improve comprehension. It is easy to recognize a new user appearing on the central list; however, if a current user, who already has jobs running somewhere in the system, starts a new job, it is difficult to identify. When this happens, we highlight the user at the corresponding time step by its outline and by the color of the links to the compute nodes allocated to the new job. Regarding the compute nodes, one may wonder how many jobs a compute node is running. We visualize the number of jobs running on a compute node by the thickness of its outline. Additionally, if the chosen metric's value on a compute node varies significantly over two consecutive time steps, we use a blur effect to highlight the sudden change.
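The bipartite structure behind the main view can be sketched as two vertex sets joined by an edge list, with small helpers for the hover-highlight and outline-thickness encodings described above. The data and function names here are illustrative assumptions, not JobViewer's code:

```python
# Minimal sketch of the bipartite graph in the main visualization:
# one vertex set of users, one of compute nodes, and an edge
# (user, node) whenever the node runs at least one job of the user.
edges = {
    ("user0", "2.14"), ("user0", "9.07"),
    ("user1", "4.33"), ("user13", "4.33"),
}

def links_of(vertex):
    """Edges touching a vertex; used for mouse-over highlighting."""
    return {e for e in edges if vertex in e}

def jobs_per_node(node, job_table):
    """Count jobs on a node; mapped to the node's outline thickness."""
    return sum(1 for j in job_table if node in j["nodes"])

# Hovering over node 4.33 would highlight its two user links:
highlighted = links_of("4.33")

job_table = [
    {"job": "j1", "nodes": ["4.33", "4.34"]},
    {"job": "j2", "nodes": ["4.33"]},
]
thickness_weight = jobs_per_node("4.33", job_table)
```

Because highlighting is a simple per-vertex edge lookup, it stays cheap even as the number of nodes grows, which is one reason the graph-based design scales.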
+ +Figure 1(c) shows the control panel and all options of the drop-down menus. There are two options for the central list: one shows the user name, and the other shows the job name. The next function is ranking, which sorts all users on the list by a chosen criterion. The three ranking options are the number of jobs, the number of compute nodes utilized by the user or job, and the selected health metric. We can also select one of the nine metrics to visualize via the compute nodes' color in the visualizing tab. Two more options beyond health metrics in the visualizing tab are user/job name and radar view [15]. If we select the user/job option, all compute nodes are colored according to their users/jobs. If we select the radar view, JobViewer visualizes every compute node as a radar chart representing all its health metrics. + +However, if we observe the compute nodes by their radar charts, it is difficult to recognize the shapes because their size is relatively small. We found that the use of color is more effective for cognitive activities. This is why we apply clustering algorithms to group the compute nodes based on their health metrics. We then assign each cluster a distinct color, and every radar chart representing a compute node takes the color of its group. This method also improves the analysis because it reduces 467 compute nodes to a much smaller number of health-state patterns. We can quickly grasp characteristics of the system states or detect strange behaviors of some compute nodes. The two clustering algorithms integrated into JobViewer are k-means and the leader algorithm [10]. + +Another interaction with the main visualization is a mouse click to show the corresponding table of jobs' information, as seen in Figure 1(d). In the table, we can find all information related to the jobs, such as the job's identity, job name, user, number of cores the job is using, etc. (design goal D4).
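The leader algorithm [10] is a single-pass alternative to k-means: each point joins the first existing "leader" within a distance threshold, or becomes a new leader itself. A minimal sketch over toy health-metric vectors follows; the threshold, distance metric, and example values are illustrative choices, not JobViewer's actual parameters:

```python
import math

def leader_cluster(points, threshold):
    """Single-pass leader clustering: assign each point to the first
    leader within `threshold` (Euclidean distance), otherwise make
    the point a new leader. Returns (leaders, per-point labels)."""
    leaders, labels = [], []
    for p in points:
        for i, q in enumerate(leaders):
            if math.dist(p, q) <= threshold:
                labels.append(i)
                break
        else:  # no leader close enough: p founds a new cluster
            leaders.append(p)
            labels.append(len(leaders) - 1)
    return leaders, labels

# Toy normalized metric vectors, e.g. (cpu1_temp, cpu2_temp, fan_speed):
pts = [(0.2, 0.2, 0.3), (0.25, 0.2, 0.3), (0.9, 0.9, 0.8)]
leaders, labels = leader_cluster(pts, threshold=0.2)
# The two similar nodes share a cluster; the hot node forms its own.
```

Unlike k-means, this needs no preset cluster count and only one pass over the 467 nodes, which suits per-time-step recoloring; the trade-off is sensitivity to point order and threshold choice.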
Moreover, we also display the time series of the selected health metric on the clicked compute node, and we highlight the period when the job runs on that compute node with a grey area. This visualization helps us understand what happens with compute nodes when particular users are using them. As we will show in the next section, this feature may give information about the relations between jobs or users and compute nodes' health states. + +To sum up, JobViewer designs the visualization using the idea of a bipartite graph. It also integrates some simple visual methods and clustering algorithms to improve comprehension in the analysis process. The next sections demonstrate how we can use JobViewer in monitoring an HPC system. + +§ 4 USE CASES + +§ 4.1 JOB ALLOCATION + +The first use case focuses on how JobViewer provides information about job allocations. Figure 2(a) shows a snapshot of the main visualization on 08/14/2020 at 5:50 PM. The color distinguishes between different users, along with their related compute nodes. If a compute node runs several jobs of multiple users, it has all corresponding colors. If a compute node is white, no user's job is running on it. At 5:50 PM, there are nine white compute nodes located in six different racks. Ten minutes later, user0's job starts, as highlighted by the black outline and links in Figure 2(b). It takes 1080 cores, or 30 compute nodes (each compute node has 36 cores). The system allocates seven out of the nine white compute nodes to this job, and there are still two white compute nodes at 6:00 PM: one on rack 2 and another on rack 9. Figure 2(c) highlights all 30 compute nodes running user0's job. Of these 30 compute nodes, 18 run two jobs and 12 run only one job. We have checked and found that most of the 18 compute nodes' former jobs consume all 36 cores at 5:55 PM.
This means some of the compute nodes utilize up to 72 cores, including virtual cores, at 6:00 PM. These figures reveal information about job scheduling. Although there are two unused compute nodes, and the job requires many cores to run, the system reuses compute nodes already running another job and does not allocate the two unused ones to the job. This use case is an example of how JobViewer can help HPC administrators monitor job scheduling. + +Figure 2: Snapshots of the main visualization at (a) 5:50 PM and (b) 6:00 PM on 08/14/2020. (c) Clicking on user0 highlights all of its links and related compute nodes. + +§ 4.2 CLUSTERING OF HEALTH STATES + +This use case investigates the health monitoring aspect of JobViewer. As mentioned in section 3, we use color to depict values of a selected health metric from the list of nine. Another option to observe all nine health metrics in a single view is to display each compute node as a radar chart. The radar charts can illustrate all health metrics; however, their size is relatively small for users to recognize quickly. The clustering algorithm can overcome this issue because it clusters all compute nodes into a small number of groups. + +Figure 3: (a) Result of the leader algorithm for all 467 compute nodes on 08/18/2020 at 11:30 AM. (b) Visualization of the compute nodes in rack 4 with radar charts representing the compute nodes. The color of each radar chart depicts its group in the result of the leader algorithm. + +Figure 3(a) gives the result of the leader algorithm for the system on 08/18/2020 at 11:30 AM. This algorithm clusters the 467 compute nodes into 5 groups with different patterns of health states. The blue group has 450 compute nodes, with medium values of the two CPU temperatures, the four fans' speeds, and power consumption. All of these compute nodes have low inlet temperature, and five of them have high memory usage.
We can also see that 13 compute nodes do not have fan speed information, while only four lack information about CPU temperature. The red group has four compute nodes with a common state of high fan speed and high CPU2 temperature. Figure 3(b) shows these four red compute nodes in rack 4. We have investigated their CPU2 temperatures and found that only compute node 4.33 has an overheated CPU2. Figure 4(b) verifies this statement, as the color of compute node 4.33 is red while that of the other three is light green. We can also read the CPU1 temperatures of these four compute nodes from Figure 4(a): they are all low, as indicated by their colors. One possible explanation for this event is location: the four nodes may sit near each other, so the three compute nodes (4.34, 4.35, and 4.36) are affected by the heat from compute node 4.33, and their fans must work harder to cool the CPUs. + +§ 4.3 RELATION BETWEEN JOB AND HEALTH STATE + +This use case clarifies the relations between jobs and the health states of compute nodes. We first look at the time series of CPU2 temperature of compute node 4.33 in Figure 5. The unit of temperature is degrees Celsius, and the period falls in August 2020. The vertical dashed line indicates the time step at which we stop the timeline to get the time series: 08/18/2020 at 11:30 AM, the moment investigated in the previous use case. The colored areas highlight periods when a job is running on the compute node. We use text annotations, in colors matching the corresponding areas, to denote users and their jobs. There are five long jobs on compute node 4.33 over the whole temporal period, none of which overlap. The CPU2 temperature is high while user1 runs his/her job, but the value suddenly drops when user10's job starts. The same jump or drop happens whenever there is a switch of users.
Therefore, it is reasonable to state that the CPU2 temperature of compute node 4.33 depends on the job running on it. If we look at the CPU1 temperature of compute node 2.60 in Figure 6, we can observe a similar relation: some jobs are responsible for high CPU temperature, while other jobs do not cause hot CPUs. + +Figure 4: The visualization of all compute nodes in rack 4. The color indicates (a) CPU1 temperature and (b) CPU2 temperature. Red means a high value, yellow depicts a medium temperature, and green represents a low value. + +Figure 5: The time series of CPU2 temperature of compute node 4.33. The colored areas highlight the periods when the user with the matching color runs his/her job. The vertical dashed line indicates the time step at which we take the figure: 08/18/2020 at 11:30 AM. + +Can we use these relations to investigate the reason for the irregularly hot CPU2 temperature of compute node 4.33, as mentioned in the previous use case? If we compare the users and jobs running on compute nodes 4.33 and 2.60 on 08/18/2020 at 11:30 AM, each runs exactly one job of exactly one user. The CPU2 temperature of compute node 4.33 is also higher than when other users, such as user1 and user13, run their jobs. This job is suspicious. However, it is impossible to make a strong conclusion about whether this job is the cause of the heat in CPU2 of compute node 4.33 or whether this compute node has a problem itself. What JobViewer can show administrators is the monitoring information. If they want to find the correct reasons for any irregular event, they should conduct further investigations to determine the real causes. + +Figure 6: The time series of CPU1 temperature of compute node 2.60. The colored areas highlight the periods when the user with the matching color runs his/her job. The vertical dashed line indicates the time step at which we take the figure.
It is 08/18/2020 at 11:30 AM. + +§ 5 USER STUDY + +§ 5.1 OVERVIEW + +We contact three volunteers who have experience working with HPC systems (in both academia and industry) and carry out the user study through video calls. The user study begins with an introduction to JobViewer. The introduction covers all features and functions of the four main components. After that, we ask whether the volunteers have questions or any confusion about the application. If they are still unclear about our web-based prototype, we explain it carefully again to ensure they fully understand what they can achieve with JobViewer. The next step is to ask them to answer some questions and record their actions while finding the answers. Finally, we ask whether they have feedback on the application. + +We divide the questions into five tasks as follows: + + * Health metrics: This task aims to check whether volunteers can gain information about the compute nodes' health states. We require volunteers to select a health metric and name one compute node with a high value of the chosen metric. The volunteers also need to point out the users linked to that compute node. + + * Job information: This task checks whether the volunteers know how to get information about a job. We ask them which user's job starts at a particular time step and for some compute nodes allocated to the job. + + * Clustering: This task requires the volunteers to understand how to use the clustering algorithms for detecting compute nodes with irregular health states. The volunteers need to identify and name all compute nodes with a given pattern of health metrics. + + * Metric vs. Job relation: This task asks the volunteers to use the time series of a selected metric of a specific compute node to comment on the dependency between the job and the selected metric.
+ + * General comments: This task gathers the volunteers' impressions when using JobViewer to answer the questions in the above tasks. We want to know whether the application makes it easy for them to find the answers. We also ask whether they think this application is helpful for monitoring activities. + +For the Clustering and Metric vs. Job relation tasks, we ask the volunteers questions related to the use case of compute node 4.33, as mentioned in section 4.2. We hope they can see the benefits of our approach through these questions. One user recognizes some issues with compute node 4.33 and spends time investigating it. + +Figure 7: It is difficult to read time information and reach a particular time step in the timeline's old design, so we implement a new component for the timeline. The HPC administrators can click on the right/left button to move toward the time step of interest or type it directly. + +§ 5.2 RESULTS + +Overall, two volunteers can quickly go through the questions and use the application correctly, while one volunteer fails almost all the tasks. The first volunteer moves smoothly through all the questions, except for reaching a particular time step. It is difficult for him to read the time on the timeline because the text is too small, and he has some issues trying to reach a specific time step as required by the questions. This volunteer is the only one who spends a lot of time on task 4 because he finds it interesting to look for the reason behind the irregular heat in CPU2 of compute node 4.33, as mentioned in section 4.2. He moves the timeline to look at jobs at different periods and switches among various health metrics to understand the situation. He finally ends up with an assumption about the positions of the four compute nodes. The second volunteer also does well with the tasks, except for the first one.
He says that red compute nodes correspond to high values of the selected metric, but he decides to pick a yellow one to answer the question. For the Metric vs. Job relation task, he replies that a job consuming high CPU usage will cause a high CPU temperature. Regarding their opinions on whether JobViewer is helpful for monitoring activities, these two volunteers make a common comment: although JobViewer has a good design and is useful for a human to build up investigations, monitoring administrators may not want to spend much time working through an irregular event step by step. What they want is to catch problems quickly, so they prefer a large monitor with all information and data. As for the last volunteer, his only correct answer is finding the node with a high value of memory usage. He comments that the application is hard for him to use because it is challenging to navigate the activities. He also does not understand the use of the time series and other features. + +The first two volunteers also give feedback on how to improve JobViewer. One point is the design of the timeline. Because the whole time interval is long, reaching a particular time step may be a challenging activity. Besides, the text on the timeline may be too small for some of the application's users to read. This is why we improve our design with a new component above the timeline to make it more useful, as depicted in Figure 7. We can directly type the time of interest into this component. Another way to reach a certain time step is to move near it and use the right/left buttons to move toward the correct position. Moreover, the second volunteer mentions the scalability of JobViewer because some HPC clusters may have thousands of compute nodes. Regarding this idea, we believe the graph-based design is suitable for scaling up to a number much larger than the current 467 compute nodes. Two reasons support this argument: + +1.
We use color as the primary visual signal to convey the health states of compute nodes, and we can select individual HPC users or compute nodes to observe further details and time series. Color remains an effective cue even when there are many compute nodes in the system. + +2. If we have more racks and compute nodes, we can expand the main visualization because it uses a graph-based design. For example, we can use multiple layers of racks. In this case, the links may look cluttered and crowded; however, this can be overcome by simple highlighting. + +§ 6 DISCUSSION + +The strength of JobViewer is its ability to display both system health states and resource allocation information in a single view. It is easy to gain job allocation information in the main visualization, as depicted in section 4.1. The clustering algorithms integrated into the application can quickly show the characteristics of the system health states. From these characteristics, we can point out any compute node with an irregular health-state pattern and investigate the problems behind it. Section 4.2 describes a use case for this benefit. Moreover, JobViewer allows us to observe the relations between jobs and compute node health, as illustrated in section 4.3. This feature highlights jobs' and users' behaviors so that we understand them better, either to improve performance or to find suspicious causes of a problem. One volunteer in the user study also finds it interesting to use this feature to investigate the irregular heat in CPU2 of compute node 4.33. + +To use JobViewer efficiently, training is needed to learn the interactions and activities required to obtain the desired information. One volunteer out of three comments on the difficulty of using the application, while the other two can easily go through the tasks. Besides, the timeline's original design is not optimal, so we improved it, as shown in Figure 7 and mentioned in section 5.2. Another limitation of JobViewer is that it is not a complete tool for HPC monitoring.
We focus on the four design goals rather than on building an efficient, comprehensive tool for commercial purposes. JobViewer is an application that demonstrates the advantages of visualization and human-centered computing in the complex task of HPC monitoring.

§ 7 CONCLUSION

We have presented an application of human-centered computing to HPC monitoring data. The visualization design is based on the idea of a bipartite graph, which scales well. The visualization can intuitively show an HPC system together with resource allocation information and the system health states. We have demonstrated three use cases on historical data from an HPC cluster with 467 compute nodes to illustrate the proposed approach's usability. Besides, we have carried out a user study with three experienced experts in HPC monitoring. The results point out the strengths of JobViewer as well as weaknesses to address in future work. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/YXUV-SU6i8I/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/YXUV-SU6i8I/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..42ade0df2fd33a58f986c3ce983564449df830cd --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/YXUV-SU6i8I/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,259 @@

# Modeling One-Dimensional Touch Pointing with Nominal Target Width

Leave Authors Anonymous

## Abstract

Finger-Fitts law [6] is a variant of Fitts' law which accounts for finger ambiguity in touch pointing. It involves the effective target width ${W}_{e}$ (i.e., $\sqrt{2\pi e}\sigma$ ) in modeling touch pointing. We hypothesize that the nominal target width ($W$) can be used in lieu of ${W}_{e}$ in Finger-Fitts law.
Such a model, called the Finger-Fitts-$W$ model, complements the original Finger-Fitts law because it suits situations where the distribution of endpoints is unknown, such as answering the following question without running a study: "what would be the target selection time if the target size increases from 2 to 3 cm?". Although the Finger-Fitts-$W$ model has been implied in prior work, it remains understudied. In this short paper, we compare using the nominal width ($W$) vs. the effective width ($W_e$) in one-dimensional touch modeling. The results show that the Finger-Fitts-$W$ model improves model fitness over the conventional Fitts' law and slightly improves over the original Finger-Fitts law. Our key takeaway is that Finger-Fitts-$W$ is a valid model for predicting touch pointing movement time. It complements the original Finger-Fitts law because it can predict the movement time of touch pointing even when the distribution of endpoints is unknown.

Index Terms: Human-centered computing-Human computer interaction (HCI)—Interaction techniques—Pointing; Human-centered computing-Human computer interaction (HCI)-HCI theory, concepts and models; Human-centered computing-Human computer interaction (HCI)-Empirical studies in HCI

## 1 INTRODUCTION

Among finger-touch based interactions, pointing has been a dominant input modality on mobile devices such as smartphones and tablets. Due to its prevalence, modeling touch pointing is crucial in designing touch interfaces. Fitts' law [13, 22] (Equation 1), which relates the pointing movement time ($MT$) to the relative precision of the task ($\frac{A}{W}$), is the most widely known pointing model. However, despite its success in modeling pointing actions with a mouse or stylus, Fitts' law does not address the ambiguity caused by finger touch, widely recognized as the "fat finger" problem. Hence, it cannot accurately model touch-based pointing.
$$
{MT} = a + b{\log }_{2}\left( {\frac{A}{W} + 1}\right) . \tag{1}
$$

Finger-Fitts law (a.k.a. FFitts law, Equation 2) [6] is a refinement of Fitts' law for modeling touch pointing:

$$
{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{2\pi e}\left( {{\sigma }^{2} - {\sigma }_{a}^{2}}\right) }} + 1}\right)
$$

$$
= a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}_{e}^{2} - {2\pi e}{\sigma }_{a}^{2}}} + 1}\right) . \tag{2}
$$

Previous research [6, 34] has shown that Finger-Fitts law (Equation 2) can model finger-touch pointing more accurately than Fitts' law, and it has been used for modeling typing speed on soft keyboards [4], for developing a keyboard decoding algorithm [5], and for modeling other touch interactions such as crossing [21].

Finger-Fitts law (Equation 2) uses the effective width ${W}_{e}$ , which is calculated from the observed touch-point variance ( ${W}_{e} = \sqrt{2\pi e}\sigma$ ). Drawing an analogy from Fitts' law research, in which both the effective width ( ${W}_{e}$ ) and the nominal width $W$ (i.e., the width defined by the geometry of the target) are commonly used to model pointing, we hypothesize that using the nominal target width $W$ in lieu of ${W}_{e}$ in Finger-Fitts law, with a small adjustment, also yields a valid touch pointing model. We call it the Finger-Fitts-$W$ model (Equation 3):

$$
{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}^{2} - {c}^{2}}} + 1}\right) , \tag{3}
$$

where $a, b$ , and $c$ are empirically determined parameters. Because the Finger-Fitts-$W$ model avoids using the observed touch-point variance ( ${\sigma }^{2}$ ), it supports predicting the movement time ${MT}$ without carrying out studies to obtain the variance of the touch point distribution ( ${\sigma }^{2}$ ).
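To make the three model forms concrete, the sketch below writes Equations 1-3 as small functions. The intercept, slope, and $c$ values used in the demo are arbitrary placeholders, not parameters fitted in this paper.

```python
import math

def fitts_mt(A, W, a, b):
    # Fitts' law (Eq. 1): MT = a + b * log2(A / W + 1)
    return a + b * math.log2(A / W + 1)

def ffitts_we_mt(A, sigma, sigma_a, a, b):
    # Finger-Fitts law (Eq. 2), from the observed endpoint deviation
    # sigma and the absolute finger imprecision sigma_a
    denom = math.sqrt(2 * math.pi * math.e * (sigma ** 2 - sigma_a ** 2))
    return a + b * math.log2(A / denom + 1)

def ffitts_w_mt(A, W, a, b, c):
    # Finger-Fitts-W model (Eq. 3): nominal width W and free parameter c
    return a + b * math.log2(A / math.sqrt(W ** 2 - c ** 2) + 1)

# Placeholder intercept/slope (seconds, seconds/bit), not fitted values:
a, b = 0.1, 0.15
# With c = 0, Eq. 3 falls back to Fitts' law (Eq. 1):
print(ffitts_w_mt(28.0, 8.0, a, b, 0.0) == fitts_mt(28.0, 8.0, a, b))
```

Note that only the Equation 3 form can be evaluated from the nominal task parameters $A$ and $W$ alone (plus fitted constants), before any endpoint data exist.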
It allows interface designers to ask "what if" questions such as "what would be the target selection time if I increase the target size from 2 cm to 3 cm?". In contrast, the original Finger-Fitts law (referred to as the Finger-Fitts-$W_e$ model hereafter) requires observing the variance of the touch point distribution with the new target size to make a prediction.

The Finger-Fitts-$W$ model (Equation 3) is also an extension of the recently proposed 2D Finger-Fitts law [19], which uses the nominal target width and height in the model. Given the promising performance of the 2D Finger-Fitts law, it is likely that using the nominal task parameter $W$ in lieu of the observed touch-point variance ( ${W}_{e} = \sqrt{2\pi e}\sigma$ ) is also valid for modeling one-dimensional touch pointing. Although such a model has been implied, it has not been explicitly expressed or studied, especially in the context of one-dimensional touch pointing.

To fill this knowledge gap, in this short paper we explicitly express the Finger-Fitts-$W$ model (Equation 3) and present a study comparing the effective width ($W_e$) vs. the nominal width ($W$) in Finger-Fitts law for modeling one-dimensional touch pointing. Our investigation showed that the Finger-Fitts-$W$ model performed the best among the tested models, including Fitts' law and the Finger-Fitts-$W_e$ model, showing that the Finger-Fitts-$W$ model is valid for modeling touch pointing. Although it is only a small adjustment over the original Finger-Fitts law [6], it is a necessary one that advances our understanding of touch pointing. It also generalizes the nominal-parameter-based two-dimensional Finger-Fitts law model [19] to one-dimensional touch pointing.

## 2 RELATED WORK

We review related work on (1) using Fitts' law and its variants to model pointing, and (2) modeling finger touch pointing with Finger-Fitts law.
### 2.1 Modeling 1D pointing

As one of the best known theoretical foundations of HCI, Fitts' law (Equation 1) [13, 22] has served as a cornerstone for interface and input device evaluation [9, 22], interface optimization [20], and interaction behavior modeling [11].

The beauty of the original Fitts' law lies in its simplicity. It is a pure task model of human pointing performance, in which all of the model's independent variables are a priori task parameters $A$ and $W$ . Given a graphical object's distance and size, for example, designers can predict or estimate the average time it takes a user to complete a pointing task on it.

One challenge of applying Fitts' law is that a user might or might not comply with the task precision defined by $A/W$ when performing the tasks, causing over- or under-utilization of the target width [35]. This is partly because users may adopt different speed-accuracy trade-off policies [3, 4, 15, 16, 23, 25, 33]. The way researchers have addressed this varied degree of task compliance is to bend Fitts' law away from a pure task model towards a behavioral one by changing an independent variable in the model from a task parameter $W$ (target width) to the "effective width", an a posteriori quantity that depends on the user's behavior. First proposed by Crossman [12] and explored further [22, 26, 32], the effective width adjustment method has shown a stronger model fit when the observed error rates deviate from 4%. It replaces the nominal target width $W$ with the so-called effective width ${W}_{e}$ (i.e., $\sqrt{2\pi e}\sigma$ ), as shown in Equations 4 and 5:

$$
{MT} = a + b \cdot {\log }_{2}\left( {\frac{A}{\sqrt{2\pi e}\sigma } + 1}\right) \tag{4}
$$

$$
= a + b \cdot {\log }_{2}\left( {\frac{A}{{W}_{e}} + 1}\right) . \tag{5}
$$

Controlled studies [35] showed that using ${W}_{e}$ could partially, but not fully, account for the subjective layer of the speed-accuracy tradeoff.
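The effective-width substitution is easy to compute from raw endpoint data. The sketch below derives $W_e = \sqrt{2\pi e}\,\sigma$ from a set of 1D endpoint coordinates; the sample values are fabricated for illustration.

```python
import math

def effective_width(endpoints):
    # W_e = sqrt(2*pi*e) * sigma ≈ 4.133 * sigma, where sigma is the
    # sample standard deviation of the observed endpoint coordinates
    n = len(endpoints)
    mean = sum(endpoints) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in endpoints) / (n - 1))
    return math.sqrt(2 * math.pi * math.e) * sigma

# Fabricated 1D endpoint coordinates (mm) along the movement axis:
hits = [-1.2, 0.4, 0.9, -0.3, 1.5, -0.8, 0.2, 0.6]
print(round(effective_width(hits), 2))
```

The key point is that $\sigma$, and hence $W_e$, can only be known after observing where users actually touch.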
Involving the a posteriori variable $\sigma$ complicates Fitts' law as a predictive tool for design. In the next section we explain in detail that, because Fitts' law with the effective width adjustment (Equations 4 and 5) is the basis of Finger-Fitts law [6], the limitation of involving an a posteriori variable also limits the predictive power of Finger-Fitts law.

Another line of Fitts' law research closely related to the current work concerns modeling small-sized target acquisition tasks. Previous researchers [32] proposed using $W - c$ instead of ${W}_{e} = \sqrt{2\pi e}\sigma$ to adjust the target width in Fitts' law, where $c$ was an experimentally determined constant attributed to hand tremor. The modified version gave a good fit for both pencil-based [32] and mouse-based [10] pointing tasks. Our analysis later shows that this $c$-constant model can serve as a simplification of the Finger-Fitts-$W$ model, with slightly reduced model fitness.

### 2.2 Modeling finger touch pointing

As finger touch has become the dominant input modality in mobile computing, a sizable amount of research has been carried out to understand and model the uncertainty in touch interaction. On a capacitive touchscreen, a touch point is converted from the contact region of the finger. This is an ambiguous and "noisy" procedure, which inevitably introduces errors. Factors such as finger angle [17, 18] and pressure [14] may affect the size and shape of the contact region, unintentionally altering the touch position. The lack of visual feedback on where the finger lands, due to occlusion (the "fat finger" problem), further exacerbates the issue [17, 18, 27-29]. As a result, it is hard to precisely control the touch position even with fine motor control ability.
This "fat finger" problem, or the lack of absolute precision in finger touch, presented a challenge to using Fitts' law as a model for finger touch-based pointing, because the only variable in Fitts' law, namely Fitts' index of difficulty, ${\log }_{2}\left( {A/W + 1}\right)$ , is solely determined by the relative movement precision, or the distance-to-target-size ratio.

Bi, Li, and Zhai [6-8] identified this challenge and proposed Finger-Fitts law [6] to address it. They derived their model by separating two sources of endpoint variance: the variance due to the absolute imprecision of finger touch (denoted by ${\sigma }_{a}^{2}$ ) and the variance due to the speed-accuracy trade-off demonstrated in a pointing process (denoted by ${\sigma }_{r}^{2}$ ). The endpoint variance caused by the imprecision of finger touch ( ${\sigma }_{a}^{2}$ ) is unrelated to the speed-accuracy trade-off, so it should be factored out. They did so by subtracting ${\sigma }_{a}^{2}$ from the observed variance ${\sigma }^{2}$ , which led to Finger-Fitts law (Equation 2). Following the notation of effective width ${W}_{e} = \sqrt{2\pi e}\sigma$ (or ${4.133\sigma }$ ) [12, 26, 32], Finger-Fitts law (Equation 2) can be re-expressed as Equation 6:

$$
{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}_{e}^{2} - {2\pi e}{\sigma }_{a}^{2}}} + 1}\right) . \tag{6}
$$

Later research [4, 6, 21, 34] showed that Finger-Fitts law was successful in modeling touch interaction. For example, research [4] showed it was more accurate than the typical Fitts' law in estimating the upper bound of typing speed on a virtual keyboard. Researchers [21] extended Finger-Fitts law to the crossing action with finger touch, which improved the model fitness ( ${R}^{2}$ ) from 0.75 to 0.84 over the original Fitts' law.
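One property of Equation 6 worth making explicit is that its denominator is only defined when $W_e > \sqrt{2\pi e}\,\sigma_a$. The small check below illustrates this; the numeric values are illustrative, with $\sigma_a = 1.5$ mm being the circular-target value reported in [6].

```python
import math

def eq6_denominator(W_e, sigma_a):
    # sqrt(We^2 - 2*pi*e*sigma_a^2) from Eq. 6; only defined when the
    # effective width exceeds sqrt(2*pi*e) * sigma_a (about 4.133 * sigma_a)
    val = W_e ** 2 - 2 * math.pi * math.e * sigma_a ** 2
    if val <= 0:
        raise ValueError("W_e too small relative to sigma_a: Eq. 6 undefined")
    return math.sqrt(val)

sigma_a = 1.5  # mm, the value reported for circular targets in [6]
print(eq6_denominator(8.0, sigma_a))  # fine: 8 mm is above the threshold
# eq6_denominator(4.0, sigma_a) would raise, since 4 mm is below
# sqrt(2*pi*e) * 1.5 ≈ 6.2 mm
```

This same breakdown for small widths motivates the adjustment introduced in the next section.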
Recent work [19] extends Finger-Fitts law from 1D to 2D and shows that using the nominal target width and height is valid for modeling two-dimensional touch pointing. Complementary to that work [19], this work investigates modeling one-dimensional target selection with nominal target widths. We also compare the effective width vs. the nominal width, a comparison the previous work [19] did not draw.

As alluded to earlier, previous research on Finger-Fitts law is mostly based on using the effective width ${W}_{e}$ . Next, we describe how we use the nominal width $W$ in Finger-Fitts law (a.k.a. the Finger-Fitts-$W$ model), and present a study comparing it with using the effective width and with the typical Fitts' law.

## 3 USING NOMINAL WIDTH W TO MODEL TOUCH POINTING

To use the nominal width $W$ in touch modeling, a straightforward approach is to replace the effective width ${W}_{e}$ in the Finger-Fitts-$W_e$ model (Equation 2) with $W$ :

$$
{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}^{2} - {2\pi e}{\sigma }_{a}^{2}}} + 1}\right) \tag{7}
$$

A potential problem is that this leaves the equation undefined if $W < \sqrt{2\pi e}{\sigma }_{a}$ . To address this problem, we assume that ${\sigma }_{a}^{2}$ , which represents the absolute variance caused by finger touch, is an empirical parameter determined from the data, instead of a pre-defined constant:

$$
{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}^{2} - {c}^{2}}} + 1}\right) , \tag{8}
$$

where $a, b, c$ are all empirically determined parameters. The implication of this adjustment is that ${\sigma }_{a}^{2}$ may differ across task contexts, and treating it as a free parameter provides more flexibility in modeling. The drawback is that it introduces an extra free parameter $c$ to the model. In the model evaluation, we take the number of free parameters into consideration and control for overfitting.

Replacing ${W}_{e}$ with $W$ also has a physical meaning.
${W}_{e}$ reflects the observed spread of the endpoint distribution, that is, the endpoint variability a user actually exhibits. In contrast, $W$ represents the endpoint variability allowance specified by the task parameters, that is, the variability a user is supposed to consume.

## 4 Evaluation in 1D Pointing Tasks

We carried out an experiment to investigate whether the Finger-Fitts-$W$ law can accurately predict ${MT}$ , compared with Fitts' law and the original Finger-Fitts law. The study was a reciprocal target acquisition task with finger touch.

### 4.1 Participants and Apparatus

We recruited 14 subjects for an IRB-approved study (3 females; aged 22-35). All of them were right-handed and daily smartphone users. A Google Pixel C tablet with a 2560×1800 resolution at 308 ppi was used throughout the experiment. Each participant performed the tasks on the tablet and was instructed to select the target with the index finger as fast and accurately as possible.

### 4.2 Design and Data Processing

#### 4.2.1 Target Acquisition Task

We designed a within-subject reciprocal target acquisition task for circular targets with various diameters. We chose circular target acquisition for two main reasons. First, it is one of the common tasks used in Fitts' law studies (e.g., [6, 24]). Second, ${\sigma }_{a} = 1.5$ was obtained from the circular target acquisitions in [6]. Adopting a similar circular-target experiment setting allows us to investigate not only vertical but also horizontal finger movements.

The study included 15 conditions with 5 levels (4, 6, 8, 10, 12 mm) of diameter ($W$) and 3 levels (16, 28, 60 mm) of distance ($A$). Each condition was tested in two movement directions, vertical and horizontal.
Each condition included 20 touches (19 trials; the first touch in each condition is considered the starting action), and the conditions appeared in random order. We have 14 (participants) × 15 (conditions) × 2 (directions) × 19 (trials) = 7,980 trials in total.

At the beginning of each trial, two circular targets were displayed on the touch screen, one in red (a.k.a. the start circle) and one in blue (a.k.a. the destination circle). The participant was instructed to select the start circle to start the trial. Upon successfully selecting the start circle, the colors of the start and destination circles were swapped and the participant was instructed to select the destination circle as fast and accurately as possible. A success sound was played if the target was successfully selected; otherwise, a failure sound was played. The elapsed time between the moment the user successfully selected the start circle and the moment the user subsequently landed the touch point to select the destination circle was recorded as the movement time of the current trial; the touch point for selecting the destination circle was taken as the location of the endpoint, regardless of whether it was within or outside the target boundary. If the participant succeeded in selecting the destination circle, the colors of the two circles were swapped again and the next trial started immediately. If the participant failed in selecting the destination circle, she had to successfully select it again to start the next trial. This setting ensured that in each trial the finger always started from somewhere within the starting circle, reducing the noise in measuring $A$ .

#### 4.2.2 Data processing

We pre-processed the data by removing touch points that fell farther than 3 standard deviations from the target center. In the circular acquisition tasks, 50 out of 7,980 touch points (0.63%) were removed as outliers.
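The 3-standard-deviation filter can be sketched as follows. This is one plausible reading of the procedure (filtering on the endpoint-to-center distance), not the authors' actual code, and the sample points are fabricated.

```python
import math

def filter_outliers(points, center, k=3.0):
    # Keep touch points whose distance to the target center lies within
    # k standard deviations of the mean distance
    dists = [math.hypot(x - center[0], y - center[1]) for x, y in points]
    n = len(dists)
    mean = sum(dists) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in dists) / (n - 1))
    return [p for p, d in zip(points, dists) if d <= mean + k * sd]

# 100 fabricated touches near the target center plus one stray touch:
touches = [(0.1 * math.cos(i), 0.1 * math.sin(i)) for i in range(100)]
touches.append((50.0, 50.0))
kept = filter_outliers(touches, center=(0.0, 0.0))
print(len(touches) - len(kept))  # number of touches removed
```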
### 4.3 Results

#### 4.3.1 ${MT}$ and error rates across the conditions

We report the movement time and error rates across different target widths and distances (Tables 1 and 2).

For movement time, a repeated-measures ANOVA showed that both width $W$ ( ${F}_{4,{52}} = {175.3}, p < {0.0001}$ ) and distance $A$ ( ${F}_{2,{26}} = {320.7}, p < {0.0001}$ ) had statistically significant effects. The interaction effect of width and distance was also significant ( ${F}_{8,{104}} = {2.077}, p < {0.05}$ ). For error rates, a repeated-measures ANOVA showed that width $W$ had a significant effect ( ${F}_{4,{52}} = {56.19}, p < {.0001}$ ), but distance $A$ did not ( ${F}_{2,{26}} = {1.443}, p = {0.255}$ ). The interaction effect of width and distance was not significant ( ${F}_{8,{104}} = {1.965}, p = {0.058}$ ).

![01963eab-8ab6-7e93-9f93-e0dfea6b060a_2_945_163_680_339_0.jpg](images/01963eab-8ab6-7e93-9f93-e0dfea6b060a_2_945_163_680_339_0.jpg)

Figure 1: (a) A participant was doing the task. (b) A screenshot of the task.

The results also showed that participants were more error-prone with smaller targets, especially with diameters under 6 mm. A repeated-measures ANOVA showed that the size of the targets (targets of 4 and 6 mm are considered small targets) had a significant effect ( ${F}_{1,{13}} = {80.21}, p < {0.0001}$ ). This result concurs with conclusions from other research [6, 8].
| Diameters (mm) | MT Mean (SD) | Error rate |
| --- | --- | --- |
| 4 | 0.50 (0.13) | 24.9% |
| 6 | 0.37 (0.13) | 10.9% |
| 8 | 0.31 (0.11) | 6.4% |
| 10 | 0.28 (0.10) | 2.8% |
| 12 | 0.25 (0.09) | 1.1% |

Table 1: Movement time and error rates over different target widths
| Distances (mm) | MT Mean (SD) | Error rate |
| --- | --- | --- |
| 16 | 0.26 (0.11) | 8.2% |
| 28 | 0.31 (0.12) | 9.3% |
| 60 | 0.45 (0.13) | 10.0% |

Table 2: Movement time and error rates over different distances

#### 4.3.2 Regression for ${MT}$ vs. ${ID}$

Figure 2 shows the regression results of ${MT}$ vs. ${ID}$ . As shown, the Finger-Fitts-$W$ law has the highest ${R}^{2}$ value (0.986) among all the tested models, indicating its high model fitness. The results also showed that the Finger-Fitts-$W_e$ model was better than the typical Fitts' law - $W$ and Fitts' law - ${W}_{e}$ , consistent with findings from previous work [6].

#### 4.3.3 RMSE of ${MT}$ Prediction

To increase the external validity of the evaluation, we also examine the Root Mean Square Error (RMSE) of ${MT}$ prediction with cross validation. We conduct leave-one-(A, W)-out cross validation and obtain the RMSE for each model. The results (Table 3) are 0.015 for the Finger-Fitts-$W$ model, 0.021 for the Finger-Fitts-$W_e$ model, 0.033 for Fitts' law - $W$ , and 0.064 for Fitts' law - ${W}_{e}$ . Finger-Fitts-$W$ outperformed all the other three models.

![01963eab-8ab6-7e93-9f93-e0dfea6b060a_3_154_155_1491_358_0.jpg](images/01963eab-8ab6-7e93-9f93-e0dfea6b060a_3_154_155_1491_358_0.jpg)

Figure 2: MT vs. ID regressions for Fitts' law, Fitts' law with effective width, Finger-Fitts-$W$ , and Finger-Fitts-$W_e$ models. As shown, the Finger-Fitts-$W$ model shows the best model fitness.
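As a sketch of how such a regression can be run, the code below fits Equation 8 by grid-searching $c^2$ and solving $a$ and $b$ by ordinary least squares at each candidate. The data are synthetic, generated from made-up parameters, and this is not necessarily the fitting procedure used in the paper.

```python
import math

def fit_ffitts_w(data, steps=400):
    # Fit MT = a + b * log2(A / sqrt(W^2 - c^2) + 1) to (A, W, MT) triples
    # by grid-searching c^2 and solving a, b by ordinary least squares
    max_c2 = min(W for _, W, _ in data) ** 2  # keep sqrt(W^2 - c^2) real
    best = None
    for i in range(steps):
        c2 = max_c2 * i / steps
        xs = [math.log2(A / math.sqrt(W * W - c2) + 1) for A, W, _ in data]
        ys = [mt for _, _, mt in data]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = my - b * mx
        sse = sum((a + b * x - y) ** 2 for x, y in zip(xs, ys))
        if best is None or sse < best[0]:
            best = (sse, a, b, c2)
    return best[1], best[2], best[3]  # a, b, c^2

# Synthetic MT data generated from made-up parameters a=0.02, b=0.12, c^2=11:
conds = [(A, W) for A in (16, 28, 60) for W in (4, 6, 8, 10, 12)]
data = [(A, W, 0.02 + 0.12 * math.log2(A / math.sqrt(W * W - 11.0) + 1))
        for A, W in conds]
a_fit, b_fit, c2_fit = fit_ffitts_w(data)
print(round(a_fit, 3), round(b_fit, 3), round(c2_fit, 2))
```

On noiseless synthetic data the generating parameters are recovered; on real data the search simply returns the best-fitting $(a, b, c^2)$.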
| Model | Equation | ${R}^{2}$ | RMSE | AIC | WAIC | Parameters |
| --- | --- | --- | --- | --- | --- | --- |
| Fitts' Law - $W$ | Eq. (1) | 0.927 | 0.033 | -94.00 | -86.87 | $a = -0.009, b = 0.149$ |
| Fitts' Law - ${W}_{e}$ | Eq. (4) | 0.719 | 0.064 | -73.82 | -69.21 | $a = -0.051, b = 0.180$ |
| Finger-Fitts Law - $W$ | Eq. (8) | 0.986 | 0.015 | -118.81 | -109.46 | $a = 0.021, b = 0.123, c^{2} = 11.260$ |
| Finger-Fitts Law - ${W}_{e}$ | Eq. (2) | 0.968 | 0.021 | -106.37 | -100.35 | $a = -0.109, b = 0.167, \sigma_{a} = 1.5$ |
+ +Table 3: The parameters, ${R}^{2},{RMSE}$ of leave-one-(A, W)-out cross validation, and Information Criteria AIC and WAIC of the models. For AIC and WAIC, the smaller the values, the more accurate the model prediction. + +#### 4.3.4 Information Criteria + +Information criteria $\left\lbrack {1,2,{30},{31}}\right\rbrack$ have been widely used to compare the quality of models because they take into account the complexity of the model (i.e., the number of parameters). Commonly used information criteria include ${AIC}$ , and ${WAIC}$ , both of which penalize the complexity of a model. In general, the smaller the information criterion, the better the model is. We have calculated multiple information criteria including ${AIC}$ and ${WAIC}$ (Table 3). As shown, the Finger-Fitts- $W$ law outperforms the Fitts’ law - $W$ and Finger-Fitts- ${W}_{e}$ law in these metrics. + +### 4.4 Discussion + +#### 4.4.1 The validity of the Finger-Fitts- $W$ model. + +Our study showed that the Finger-Fitts- $W$ law had the highest prediction accuracy among all the test models across a number of metrics, including information criteria that take into account the number of model parameters. The ${R}^{2}$ value, a commonly used measure for pointing model fitness, is 0.986, higher than both Fitts' law - $W\left( {{R}^{2} = {0.927}}\right)$ and Finger-Fitts- ${W}_{e}\left( {{R}^{2} = {0.968}}\right)$ . Its RMSE, a cross-validation metric, is also the lowest among all the test models. To take into account the number of model parameters, we examined the information criteria. The AIC and WAIC showed that the Finger-Fitts- $W$ law improved prediction accuracy over Fitts’ and Finger-Fitts law. It reduced AIC from -94.00 to -118.81, and WAIC from -86.87 to -109.46. These two metrics added penalty to adding extra parameters in the Finger-Fitts- $W$ law, adding evidence to the strength of the Finger-Fitts- $W$ model. 
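For a least-squares fit with Gaussian errors, AIC reduces to a simple expression in the residual sum of squares. The sketch below uses that common form, $AIC = n\ln(SSE/n) + 2k$, with fabricated residuals; the paper's exact computation may differ.

```python
import math

def aic_least_squares(residuals, k):
    # AIC for a least-squares fit with Gaussian errors:
    # AIC = n * ln(SSE / n) + 2 * k, with k the number of free parameters
    n = len(residuals)
    sse = sum(r * r for r in residuals)
    return n * math.log(sse / n) + 2 * k

# Fabricated residuals: the 3-parameter fit is clearly tighter, so its
# AIC is lower despite the +2 penalty for the extra parameter
res_two = [0.020, -0.015, 0.010, -0.020, 0.018, -0.012]
res_three = [0.008, -0.006, 0.004, -0.007, 0.006, -0.005]
print(aic_least_squares(res_two, k=2), aic_least_squares(res_three, k=3))
```

This penalty structure is why the extra parameter $c$ in Equation 8 must buy a genuinely better fit, as it does in Table 3, to lower the criterion.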
#### 4.4.2 The " $W - c$ " model serves as a simplification of the Finger-Fitts-$W$ model.

We notice that the Finger-Fitts-$W$ law resembles the model proposed in 1968 by Welford [32], in which $W - c$ was used in lieu of $W$ to account for hand tremor when acquiring small-sized targets with a pencil (i.e., ${MT} = a + b{\log }_{2}\left( {\frac{A}{W - c} + 1}\right)$ ). We investigated whether this $W - c$ model could serve as a simplification of the Finger-Fitts-$W$ model based on the data collected in our experiment. Our analysis shows that the $W - c$ model has slightly weaker prediction performance than the Finger-Fitts-$W$ law, with ${R}^{2} = {0.984}$ and RMSE = 0.016. Its performance is still better than Fitts' law - $W$ and the Finger-Fitts-$W_e$ law. Therefore, it can serve as a simplification of the Finger-Fitts-$W$ law for touch modeling.

## 5 CONCLUSIONS

Our main conclusion is that the one-dimensional Finger-Fitts-$W$ model (Equation 9), a variant of Finger-Fitts law [6], can model the movement time in touch pointing quite well:

$$
{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}^{2} - {c}^{2}}} + 1}\right) . \tag{9}
$$

It complements the original Finger-Fitts law [6], which predicts movement time with the variance of the observed touch point distribution. Because it uses nominal parameters only, the Finger-Fitts-$W$ model can answer "what if" questions without obtaining the variance of the touch point distribution. Our evaluation shows that the Finger-Fitts-$W$ model outperforms Fitts' law (which also uses nominal parameters only for prediction) in model fitness, measured by ${R}^{2}$ values, cross-validation RMSE, and information criteria. Overall, our investigation shows that the Finger-Fitts-$W$ model is a valid model for one-dimensional touch pointing.

## REFERENCES

[1] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716-723, 1974.
+ +[2] H. Akaike. Information theory and an extension of the maximum likelihood principle. In Selected papers of hirotugu akaike, pp. 199- 213. Springer, 1998. + +[3] N. Banovic, T. Grossman, and G. Fitzmaurice. The effect of time-based cost of error in target-directed pointing tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp. 1373-1382. ACM, New York, NY, USA, 2013. doi: 10. 1145/2470654.2466181 + +[4] N. Banovic, V. Rao, A. Saravanan, A. K. Dey, and J. Mankoff. Quantifying aversion to costly typing errors in expert mobile text entry. In + +Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 4229-4241. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025695 + +[5] X. Bi, C. Chelba, T. Ouyang, K. Partridge, and S. Zhai. Mobile interaction research at google. https://research.googleblog.com/ 2013/02/mobile-interaction-research-at-google.html, February 2013. Online; accessed 16-July-2017. + +[6] X. Bi, Y. Li, and S. Zhai. Ffitts law: Modeling finger touch with fitts' law. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp. 1363-1372. ACM, New York, NY, USA, 2013. doi: 10.1145/2470654.2466180 + +[7] X. Bi and S. Zhai. Bayesian touch: A statistical criterion of target selection with finger touch. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST '13, pp. 51-60. ACM, New York, NY, USA, 2013. doi: 10.1145/2501988. 2502058 + +[8] X. Bi and S. Zhai. Predicting finger-touch accuracy based on the dual gaussian distribution model. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, UIST '16, pp. 313-319. ACM, New York, NY, USA, 2016. doi: 10.1145/2984511. 2984546 + +[9] S. K. Card, W. K. English, and B. J. Burr. Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys, for text selection on a crt. Ergonomics, 21(8):601-613, 1978. 
doi: 10.1080/ 00140137808931762 + +[10] O. Chapuis and P. Dragicevic. Effects of motor scale, visual scale, and quantization on small target acquisition difficulty. ACM Transactions on Computer-Human Interaction (TOCHI), 18(3):13, 2011. + +[11] A. Cockburn, C. Gutwin, and S. Greenberg. A predictive model of menu performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pp. 627-636. ACM, New York, NY, USA, 2007. doi: 10.1145/1240624.1240723 + +[12] E. Crossman. The speed and accuracy of simple hand movements. The nature and acquisition of industrial skills, 1957. + +[13] P. M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of experimental psychology, 47(6):381, 1954. + +[14] M. Goel, J. Wobbrock, and S. Patel. Gripsense: Using built-in sensors to detect hand posture and pressure on commodity mobile phones. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST ' 12, pp. 545-554. ACM, New York, NY, USA, 2012. doi: 10.1145/2380116.2380184 + +[15] Y. Guiard, H. B. Olafsdottir, and S. T. Perrault. Fitt's law as an explicit time/error trade-off. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 1619-1628. ACM, New York, NY, USA, 2011. doi: 10.1145/1978942.1979179 + +[16] Y. Guiard and O. Rioul. A mathematical description of the speed/accuracy trade-off of aimed movement. In Proceedings of the 2015 British HCI Conference, British HCI '15, pp. 91-100. ACM, New York, NY, USA, 2015. doi: 10.1145/2783446.2783574 + +[17] C. Holz and P. Baudisch. The generalized perceived input point model and how to double touch accuracy by extracting fingerprints. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, pp. 581-590. ACM, New York, NY, USA, 2010. doi: 10.1145/1753326.1753413 + +[18] C. Holz and P. Baudisch. Understanding touch. 
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 2501-2510. ACM, New York, NY, USA, 2011. doi: 10. 1145/1978942.1979308 + +[19] Y.-J. Ko, H. Zhao, Y. Kim, I. Ramakrishnan, S. Zhai, and X. Bi. Modeling two dimensional touch pointing. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, pp. 858-868, 2020. + +[20] J. R. Lewis, P. J. Kennedy, and M. J. LaLomia. Development of a digram-based typing key layout for single-finger/stylus input. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 43(5):415-419, 1999. doi: 10.1177/154193129904300505 + +[21] Y. Luo and D. Vogel. Crossing-based selection with direct touch input. In Proceedings of the 32Nd Annual ACM Conference on Human Factors in Computing Systems, CHI '14, pp. 2627-2636. ACM, New + +York, NY, USA, 2014. doi: 10.1145/2556288.2557397 + +[22] I. S. MacKenzie. Fitts' law as a research and design tool in human-computer interaction. Human-Computer Interaction, 7(1):91-139, Mar. 1992. doi: 10.1207/s15327051 hci0701_3 + +[23] I. S. MacKenzie and P. Isokoski. Fitts' throughput and the speed-accuracy tradeoff. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pp. 1633-1636. ACM, New York, NY, USA, 2008. doi: 10.1145/1357054.1357308 + +[24] I. S. MacKenzie, T. Kauppinen, and M. Silfverberg. Accuracy measures for evaluating computer pointing devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '01, pp. 9-16. ACM, New York, NY, USA, 2001. doi: 10.1145/365024. 365028 + +[25] R. Plamondon and A. M. Alimi. Speed/accuracy trade-offs in target-directed movements. Behavioral and Brain Sciences, 20(2):279-303, 1997. doi: 10.1017/S0140525X97001441 + +[26] R. W. Soukoreff and I. S. MacKenzie. Towards a standard for pointing device evaluation, perspectives on 27 years of fitts' law research in hci. 
International journal of human-computer studies, 61(6):751-789, 2004. + +[27] D. Vogel and R. Balakrishnan. Occlusion-aware interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, pp. 263-272. ACM, New York, NY, USA, 2010. doi: 10.1145/1753326.1753365 + +[28] D. Vogel and P. Baudisch. Shift: A technique for operating pen-based interfaces using touch. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pp. 657-666. ACM, New York, NY, USA, 2007. doi: 10.1145/1240624.1240727 + +[29] D. Vogel and G. Casiez. Hand occlusion on a multi-touch tabletop. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 2307-2316. ACM, New York, NY, USA, 2012. doi: 10.1145/2207676.2208390 + +[30] S. Watanabe. A widely applicable Bayesian information criterion. Journal of Machine Learning Research, 14(Mar):867-897, 2013. + +[31] S. Watanabe and M. Opper. Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research, 11(12), 2010. + +[32] A. T. Welford. Fundamentals of skill. Methuen, New York, NY, USA, 1968. + +[33] J. O. Wobbrock, E. Cutrell, S. Harada, and I. S. MacKenzie. An error model for pointing based on Fitts' law. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pp. 1613-1622. ACM, New York, NY, USA, 2008. doi: 10.1145/1357054.1357306 + +[34] S. Yamanaka and H. Usuba. Rethinking the dual Gaussian distribution model for predicting touch accuracy in on-screen-start pointing tasks. Proceedings of the ACM on Human-Computer Interaction, 4(ISS):1-20, 2020. + +[35] S. Zhai, J. Kong, and X. Ren. Speed-accuracy tradeoff in Fitts' law tasks: On the equivalency of actual and nominal pointing precision. Int. J. Hum.-Comput. Stud., 61(6):823-856, Dec. 2004. doi: 10.1016/j.ijhcs.
2004.09.007 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/YXUV-SU6i8I/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/YXUV-SU6i8I/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..c9692f584da024ef00ed6eb28c40da4baf156ff4 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/YXUV-SU6i8I/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,231 @@ +§ MODELING ONE-DIMENSIONAL TOUCH POINTING WITH NOMINAL TARGET WIDTH + +Leave Authors Anonymous + +§ ABSTRACT + +Finger-Fitts law [6] is a variant of Fitts' law which accounts for the finger ambiguity in touch pointing. It involves the effective target width ${W}_{e}$ (i.e., $\sqrt{2\pi e}\sigma$ ) in modeling touch pointing. We hypothesize that the nominal target width ($W$) can be used in lieu of ${W}_{e}$ in Finger-Fitts law. Such a model, called the Finger-Fitts- $W$ model, complements the original Finger-Fitts law because it suits situations where the distribution of endpoints is unknown, such as answering the following question without running a study: "what would be the target selection time if the target size increases from 2 to 3 cm?". Although the Finger-Fitts- $W$ model has been implied, it is understudied. In this short paper, we compare using the nominal width ($W$) vs. the effective width $\left( {W}_{e}\right)$ in one-dimensional touch modeling. The results showed that the Finger-Fitts- $W$ model improves the model fitness over the conventional Fitts' law and has a slight improvement over the original Finger-Fitts law. Our key takeaway is that Finger-Fitts- $W$ is a valid model for predicting touch pointing movement time.
It complements the original Finger-Fitts law as it can predict the movement time of touch pointing even if the distribution of endpoints is unknown. + +Index Terms: Human-centered computing-Human computer interaction (HCI)—Interaction techniques—Pointing; Human-centered computing-Human computer interaction (HCI)-HCI theory, concepts and models; Human-centered computing-Human computer interaction (HCI)-Empirical studies in HCI + +§ 1 INTRODUCTION + +Among the many finger-touch-based interactions, pointing has been the dominant input modality on mobile devices such as smartphones and tablets. Due to its prevalence, modeling touch pointing is crucial in designing touch interfaces. Fitts' law [13,22] (Equation 1), which relates the pointing movement time (${MT}$) to the relative precision of the task $\left( \frac{A}{W}\right)$ , is the most widely known pointing model. However, despite its success in modeling pointing actions with a mouse or stylus, Fitts' law does not address the ambiguity caused by finger touch, the widely recognized "fat finger" problem. Hence, it cannot accurately model touch-based pointing. + +$$ +{MT} = a + b{\log }_{2}\left( {\frac{A}{W} + 1}\right) . \tag{1} +$$ + +Finger-Fitts law (a.k.a. FFitts law, Equation 2) [6] is a refinement of Fitts' law for modeling touch pointing: + +$$ +{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{2\pi e}\left( {{\sigma }^{2} - {\sigma }_{a}^{2}}\right) }} + 1}\right) +$$ + +$$ += a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}_{e}^{2} - {2\pi e}{\sigma }_{a}^{2}}} + 1}\right) . \tag{2} +$$ + +Previous research $\left\lbrack {6,{34}}\right\rbrack$ has shown that Finger-Fitts law (Equation 2) can model finger-touch pointing more accurately than Fitts' law, and it has been used for modeling typing speed on soft keyboards [4], for developing a keyboard decoding algorithm [5], and for modeling other touch interactions such as crossing [21].
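To make the difference between Equations 1 and 2 concrete, the two indices of difficulty can be computed side by side. The sketch below is our own illustration, not code from [6], and the numeric values are purely illustrative:

```python
import math

def fitts_id(A, W):
    """Fitts' index of difficulty (Equation 1): log2(A/W + 1)."""
    return math.log2(A / W + 1)

def ffitts_id(A, sigma, sigma_a):
    """Finger-Fitts index of difficulty (Equation 2): the absolute
    touch imprecision sigma_a^2 is subtracted from the observed
    endpoint variance sigma^2 before forming the effective width."""
    denom = math.sqrt(2 * math.pi * math.e * (sigma**2 - sigma_a**2))
    return math.log2(A / denom + 1)

# Illustrative values: a 28 mm movement, observed endpoint SD of
# 2.0 mm, and absolute finger-touch noise of 1.5 mm.
W_e = math.sqrt(2 * math.pi * math.e) * 2.0  # effective width, ~4.133 * sigma
print(round(fitts_id(28, W_e), 3))       # difficulty using W_e alone
print(round(ffitts_id(28, 2.0, 1.5), 3)) # larger: touch noise removed
```

With the touch noise ${\sigma }_{a}$ removed from the denominator, the Finger-Fitts index of difficulty is larger than the plain effective-width index computed from the same observed endpoints.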
+ +The Finger-Fitts law (Equation 2) uses the effective width ${W}_{e}$ for modeling, which is calculated from the observed touch point variance $\left( {{W}_{e} = \sqrt{2\pi e}\sigma }\right)$ . Drawing an analogy from Fitts’ law research, in which both the effective width $\left( {W}_{e}\right)$ and the nominal width $W$ (i.e., the width defined by the geometry of the target) are commonly used to model pointing, we hypothesize that using the nominal target width $W$ in lieu of ${W}_{e}$ in Finger-Fitts law, with a small adjustment, also yields a valid touch pointing model. We call it the Finger-Fitts- $W$ model (Equation 3): + +$$ +{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}^{2} - {c}^{2}}} + 1}\right) , \tag{3} +$$ + +where $a,b$ , and $c$ are empirically determined parameters. Because the Finger-Fitts- $W$ model avoids using the observed touch point variance $\left( {\sigma }^{2}\right)$ , it supports predicting the movement time ${MT}$ without actually carrying out studies to obtain the variance of the touch point distribution $\left( {\sigma }^{2}\right)$ . It allows interface designers to ask "what if" questions such as "what would be the target selection time if I increase the target size from 2 cm to 3 cm?". In contrast, the original Finger-Fitts law (referred to as the Finger-Fitts- ${W}_{e}$ model hereafter) requires observing the variance of the touch point distribution with the new target size to make a prediction. + +The Finger-Fitts- $W$ model (Equation 3) is also an extension of the recently proposed 2D Finger-Fitts law [19], which uses nominal target width and height in the model. Given the promising performance of the 2D Finger-Fitts law, it is likely that using the nominal task parameter $W$ in lieu of the observed touch point variance $\left( {{W}_{e} = \sqrt{2\pi e}\sigma }\right)$ is also valid for modeling one-dimensional touch pointing.
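Answering such a "what if" question reduces to a direct evaluation of Equation 3. The sketch below is our own illustration; it borrows the parameter values fitted in this paper's evaluation (Table 3: $a = 0.021$, $b = 0.123$, ${c}^{2} = 11.260$) and assumes lengths in mm and time in seconds:

```python
import math

def ffitts_w_mt(A, W, a, b, c):
    """Movement-time prediction with the Finger-Fitts-W model (Eq. 3).
    The model is undefined when the nominal width W does not exceed c."""
    if W <= c:
        raise ValueError("Finger-Fitts-W is undefined for W <= c")
    return a + b * math.log2(A / math.sqrt(W**2 - c**2) + 1)

# Parameters as fitted later in this paper (Table 3); mm and seconds assumed.
a, b, c = 0.021, 0.123, math.sqrt(11.260)

# "What if": target grows from 2 cm to 3 cm at a 28 mm movement distance.
mt_2cm = ffitts_w_mt(28, 20, a, b, c)
mt_3cm = ffitts_w_mt(28, 30, a, b, c)
print(round(mt_2cm, 3), round(mt_3cm, 3))  # the larger target is faster
```

No endpoint distribution is needed anywhere in this computation, which is exactly the practical advantage over the Finger-Fitts- ${W}_{e}$ form.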
Although such a model has been implied, it has been neither explicitly expressed nor studied, especially in the context of one-dimensional touch pointing. + +To fill this knowledge gap, in this short paper, we explicitly express the Finger-Fitts- $W$ model (Equation 3) and present a study comparing the effective width $\left( {W}_{e}\right)$ vs. the nominal width ($W$) in Finger-Fitts law for modeling one-dimensional touch pointing. Our investigation showed that the Finger-Fitts- $W$ model performed the best among the tested models, including Fitts’ law and the Finger-Fitts- ${W}_{e}$ model, showing that the Finger-Fitts- $W$ model is valid for modeling touch pointing. Although it is only a small adjustment over the original Finger-Fitts law [6], it is a necessary one and advances our understanding of touch pointing. It also generalizes the nominal-parameter-based two-dimensional Finger-Fitts law model [19] to one-dimensional touch pointing. + +§ 2 RELATED WORK + +We review related work on (1) using Fitts' law and its variants to model pointing, and (2) modeling finger touch pointing with Finger-Fitts law. + +§ 2.1 MODELING 1D POINTING + +As one of the best-known theoretical foundations of HCI, Fitts' law (Equation 1) [13, 22] has served as a cornerstone for interface and input device evaluation [9,22], interface optimization [20], and interaction behavior modeling [11]. + +The beauty of the original Fitts' law lies in its simplicity. It is a pure task model of human pointing performance, in which all of the model’s independent variables are a priori task parameters $A$ and $W$ . For a given graphical object’s distance and size, for example, designers can predict or estimate the average time it takes a user to complete a pointing task on it. + +One challenge of applying Fitts' law is that a user might or might not comply with the task precision defined by $A/W$ when performing the tasks, causing over- or under-utilization of target width [35].
This is partly because a user may adopt different speed-accuracy trade-off policies $\left\lbrack {3,4,{15},{16},{23},{25},{33}}\right\rbrack$ . The way researchers have addressed the varied degree of task compliance is to bend Fitts' law away from a pure task model towards a behavioral one by changing an independent variable in the model from a task parameter $W$ (target width) to the "effective width", an a posteriori quantity that depends on the user's behavior. First proposed by Crossman [12] and explored further $\left\lbrack {{22},{26},{32}}\right\rbrack$ , the effective width adjustment method has shown a stronger model fit when the observed error rates deviate from 4%. It replaces the nominal target width $W$ with the so-called effective width ${W}_{e}$ (i.e., $\sqrt{2\pi e}\sigma$ ), as shown in Equations 4 and 5. + +$$ +{MT} = a + b \cdot {\log }_{2}\left( {\frac{A}{\sqrt{2\pi e}\sigma } + 1}\right) \tag{4} +$$ + +$$ += a + b \cdot {\log }_{2}\left( {\frac{A}{{W}_{e}} + 1}\right) . \tag{5} +$$ + +Controlled studies [35] showed that using ${W}_{e}$ could partially but not fully account for the subjective layer of the speed-accuracy tradeoff. Involving the a posteriori variable $\sigma$ complicates Fitts’ law as a predictive tool for design. In the next section, we explain in detail that, because Fitts' law with the effective width adjustment (Equations 4 and 5) is the basis of Finger-Fitts law [6], the limitation of involving an a posteriori variable also limits the predictive power of Finger-Fitts law. + +Another line of Fitts' law research closely related to the current work is about modeling small-sized target acquisition tasks. Previous researchers [32] proposed using $W - c$ instead of ${W}_{e} = \sqrt{2\pi e}\sigma$ to adjust the target width in Fitts’ law, where $c$ was an experimentally determined constant attributed to hand tremor. The modified version gave a good fit for both pencil-based [32] and mouse-based [10] pointing tasks.
Our research later shows that the $c$ -constant model can serve as a simplification of the refined Finger-Fitts model, with slightly reduced model fitness. + +§ 2.2 MODELING FINGER TOUCH POINTING + +As finger touch has become the dominant input modality in mobile computing, a sizable amount of research has been carried out to understand and model the uncertainty in touch interaction. On a capacitive touchscreen, a touch point is converted from the contact region of the finger. This is an ambiguous and "noisy" procedure, which inevitably introduces errors. Factors such as finger angle [17, 18] and pressure [14] may affect the size and shape of the contact region, unintentionally altering the touch position. The lack of visual feedback on where the finger lands due to occlusion (the "fat finger" problem) further exacerbates the issue [17, 18, 27-29]. As a result, it is hard to precisely control the touch position even with fine motor control ability. + +This "fat finger" problem, or the lack of absolute precision in finger touch, presented a challenge to using Fitts' law as a model for finger touch-based pointing, because the only variable in Fitts' law, namely Fitts’ index of difficulty, ${\log }_{2}\left( {A/W + 1}\right)$ , is solely determined by the relative movement precision, i.e., the distance-to-target-size ratio. + +Bi, Li, and Zhai [6-8] identified this challenge and proposed the Finger-Fitts law [6] to address it. They derived their model by separating two sources of endpoint variance: that due to the absolute imprecision of finger touch (denoted by ${\sigma }_{a}{}^{2}$ ) and that due to the speed-accuracy trade-off demonstrated in a pointing process (denoted by ${\sigma }_{r}{}^{2}$ ). The endpoint variance caused by the imprecision of finger touch $\left( {{\sigma }_{a}{}^{2}}\right)$ is irrelevant to the speed-accuracy trade-off, so it should be accounted for separately.
They accounted for it by subtracting ${\sigma }_{a}{}^{2}$ from the observed variance ${\sigma }^{2}$ , which led to Finger-Fitts law (Equation 2). Following the notation of effective width ${W}_{e} = \sqrt{2\pi e}\sigma$ (or ${4.133\sigma }$ ) [12,26,32], Finger-Fitts law (Equation 2) can be re-expressed as Equation 6: + +$$ +{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}_{e}^{2} - {2\pi e}{\sigma }_{a}{}^{2}}} + 1}\right) . \tag{6} +$$ + +Later research $\left\lbrack {4,6,{21},{34}}\right\rbrack$ showed that Finger-Fitts law was successful in modeling touch interaction. For example, research [4] showed it was more accurate than the typical Fitts' law in estimating the upper bound of typing speed on a virtual keyboard. Researchers [21] extended the Finger-Fitts law to the crossing action with finger touch, which improved the model fitness $\left( {R}^{2}\right)$ from 0.75 to 0.84 over the original Fitts' law. Recent work [19] extends Finger-Fitts law from 1D to 2D, showing that using nominal target width and height is valid for modeling two-dimensional touch pointing. Complementary to that previous work [19], this work investigates modeling one-dimensional target selection with nominal target widths. We also compare effective width vs. nominal width, whereas the previous work [19] did not draw such a comparison. + +As alluded to earlier, previous research on Finger-Fitts law is mostly based on using the effective width ${W}_{e}$ . Next, we describe how we use the nominal width $W$ in Finger-Fitts law (a.k.a. the Finger-Fitts- $W$ model), and present a study comparing it with the effective-width version and the typical Fitts' law.
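For reference, the effective width that this line of work builds on is a one-line computation over observed endpoints; the endpoint sample in this sketch is hypothetical:

```python
import math
import statistics

SQRT_2PIE = math.sqrt(2 * math.pi * math.e)  # ~4.133

def effective_width(endpoints):
    """W_e = sqrt(2*pi*e) * sigma, where sigma is the standard deviation
    of the 1D endpoint coordinates observed in one (A, W) condition."""
    return SQRT_2PIE * statistics.stdev(endpoints)

# Hypothetical endpoint sample (mm) for a single condition.
endpoints = [14.2, 15.8, 13.9, 16.1, 15.0, 14.6, 15.4]
print(round(effective_width(endpoints), 2))
```

Whether the sample or population standard deviation is used is a modeling choice; the sample form is shown here since each condition yields a finite set of trials.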
+ +§ 3 USING NOMINAL WIDTH W TO MODEL TOUCH POINTING + +To use nominal width $W$ in touch modeling, a straightforward approach is to replace the effective width ${W}_{e}$ in the Finger-Fitts- ${W}_{e}$ model (Equation 2) with $W$ : + +$$ +{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}^{2} - {2\pi e}{\sigma }_{a}^{2}}} + 1}\right) \tag{7} +$$ + +A potential problem is that it leaves the equation undefined if $W < \sqrt{2\pi e}{\sigma }_{a}$ . To address this problem, we assume that ${\sigma }_{a}^{2}$ , which represents the absolute variance caused by finger touch, is an empirical parameter determined from the data, instead of a pre-defined constant: + +$$ +{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}^{2} - {c}^{2}}} + 1}\right) , \tag{8} +$$ + +where $a,b,c$ are all empirically determined parameters. The implication of this adjustment is that ${\sigma }_{a}^{2}$ may differ across task contexts, and treating it as a free parameter provides more flexibility in modeling. The drawback is that it introduces an extra free parameter $c$ to the model. In the model evaluation, we take the number of free parameters into consideration and control for overfitting. + +Replacing ${W}_{e}$ with $W$ also has a physical meaning. ${W}_{e}^{2}$ reflects the observed variance of the endpoint distribution, i.e., the actual endpoint variability a user exhibits. In contrast, ${W}^{2}$ represents the endpoint variability allowance specified by the task parameter, i.e., the variability allowance a user is supposed to consume. + +§ 4 EVALUATION IN 1D POINTING TASKS + +We carried out an experiment to investigate whether the Finger-Fitts- $W$ law can accurately predict ${MT}$ , compared with Fitts’ law and the original Finger-Fitts law. The study was a reciprocal target acquisition task with finger touch. + +§ 4.1 PARTICIPANTS AND APPARATUS + +We recruited 14 subjects for an IRB-approved study (3 females; aged 22-35).
All of them were right-handed and daily smartphone users. A Google Pixel C tablet with 2560x1800 resolution and 308 ppi was used throughout the experiment. Each participant was instructed to perform the tasks on the tablet and to select the target with the index finger as fast and accurately as possible. + +§ 4.2 DESIGN AND DATA PROCESSING + +§ 4.2.1 TARGET ACQUISITION TASK + +We designed a within-subject reciprocal target acquisition task for circular targets with various diameters. We chose circular target acquisition for two reasons. First, it is one of the common tasks used in Fitts’ law studies (e.g., [6, 24]). Second, ${\sigma }_{a} = {1.5}$ was obtained from the circular target acquisitions in [6]. Adopting a similar circular-target experiment setting allows us to investigate not only vertical but also horizontal finger movements. + +The study included 15 conditions with 5 levels (4, 6, 8, 10, 12 mm) of diameter ($W$) and 3 levels (16, 28, 60 mm) of distance ($A$), in two movement directions, vertical and horizontal. Each condition included 20 touches (19 trials; the first touch in each condition is considered the starting action), and conditions were presented in random order. In total, we have 14 (participants) $\times$ 15 (conditions) $\times$ 2 (directions) $\times$ 19 (trials) $=$ 7,980 trials. + +At the beginning of each trial, two circular targets were displayed on the touch screen, one in red (a.k.a. the start circle) and one in blue (a.k.a. the destination circle). The participant was instructed to select the start circle to start the trial. Upon successful selection of the start circle, the colors of the start and destination circles were swapped, and the participant was instructed to select the destination circle as fast and accurately as possible.
A success sound was played if the target was successfully selected; otherwise, a failure sound was played. The elapsed time between the moment the user successfully selected the start circle and the moment the user subsequently landed the touch point to select the destination circle was recorded as the movement time of the trial; the touch point for selecting the destination circle was taken as the endpoint location, regardless of whether it fell within or outside the target boundary. If the participant succeeded in selecting the destination circle, the colors of the two circles were swapped again and the next trial started immediately. If the participant failed, she had to successfully select the destination circle again to start the next trial. This setting ensured that in each trial the finger always started from somewhere within the start circle, reducing the noise in measuring $A$ . + +§ 4.2.2 DATA PROCESSING + +We pre-processed the data by removing touch points that fell more than 3 standard deviations from the target center. In the circular acquisition tasks, 50 out of 7,980 touch points (0.63%) were removed as outliers. + +§ 4.3 RESULTS + +§ 4.3.1 ${MT}$ AND ERROR RATES ACROSS CONDITIONS + +We report movement time and error rates across different target widths and distances (Tables 1 and 2). + +For movement times, a repeated-measures ANOVA showed that both width $W$ $\left( {{F}_{4,{52}} = {175.3},p < {0.0001}}\right)$ and distance $A$ $\left( {{F}_{2,{26}} = {320.7},p < {0.0001}}\right)$ had a statistically significant effect. The interaction effect of width and distance was also significant $\left( {{F}_{8,{104}} = {2.077},p < {0.05}}\right)$ . For error rates, a repeated-measures ANOVA showed that width $W$ had a significant effect $\left( {{F}_{4,{52}} = {56.19},p < {0.0001}}\right)$ , but distance $A$ did not $\left( {{F}_{2,{26}} = {1.443},p = {0.255}}\right)$ .
The interaction effect of width and distance was not significant $\left( {{F}_{8,{104}} = {1.965},p = {0.058}}\right)$ . + +Figure 1: (a) A participant performing the task. (b) A screenshot of the task. + +The results also showed that participants were more error-prone with smaller targets, especially with diameters under 6 mm. A repeated-measures ANOVA showed that target size (targets of 4 and 6 mm are considered small targets) had a significant effect $\left( {{F}_{1,{13}} = {80.21},p < {0.0001}}\right)$ . These results concur with conclusions from other research $\left\lbrack {6,8}\right\rbrack$ .

| Diameter (mm) | MT Mean (SD) | Error rate |
| --- | --- | --- |
| 4 | 0.50 (0.13) | 24.9% |
| 6 | 0.37 (0.13) | 10.9% |
| 8 | 0.31 (0.11) | 6.4% |
| 10 | 0.28 (0.10) | 2.8% |
| 12 | 0.25 (0.09) | 1.1% |

Table 1: Movement time and error rates over different target widths

| Distance (mm) | MT Mean (SD) | Error rate |
| --- | --- | --- |
| 16 | 0.26 (0.11) | 8.2% |
| 28 | 0.31 (0.12) | 9.3% |
| 60 | 0.45 (0.13) | 10.0% |

Table 2: Movement time and error rates over different distances

§ 4.3.2 REGRESSION FOR ${MT}$ VS. ${ID}$ + +Figure 2 shows the regression results of ${MT}$ vs. ${ID}$ . As shown, the Finger-Fitts- $W$ law has the highest ${R}^{2}$ value (0.986) among all the tested models, indicating its high model fitness. The results also showed that the Finger-Fitts- ${W}_{e}$ model was better than the typical Fitts’ law - $W$ and Fitts’ law - ${W}_{e}$ , consistent with findings from previous work [6]. + +§ 4.3.3 RMSE OF ${MT}$ PREDICTION + +To increase the external validity of the evaluation, we also examine the Root Mean Square Error (RMSE) of ${MT}$ prediction with cross validation. We conduct leave-one-(A, W)-out cross validation and obtain the RMSE for the Finger-Fitts- $W$ , Finger-Fitts- ${W}_{e}$ , Fitts’ law - $W$ , and Fitts’ law - ${W}_{e}$ models.
The results (Table 3) are 0.015 for the Finger-Fitts- $W$ model, 0.021 for the Finger-Fitts- ${W}_{e}$ model, 0.033 for Fitts’ law - $W$ , and 0.064 for Fitts’ law - ${W}_{e}$ . The Finger-Fitts- $W$ model thus outperformed the other three models. + +Figure 2: ${MT}$ vs. ${ID}$ regressions for Fitts’ law, Fitts’ law with effective width, Finger-Fitts- $W$ , and Finger-Fitts- ${W}_{e}$ models. As shown, the Finger-Fitts- $W$ model shows the best model fitness.

| Model | ${R}^{2}$ | RMSE | AIC | WAIC | Parameters |
| --- | --- | --- | --- | --- | --- |
| Fitts’ Law - $W$ , Eq. (1) | 0.927 | 0.033 | -94.00 | -86.87 | $a = -{0.009}, b = {0.149}$ |
| Fitts’ Law - ${W}_{e}$ , Eq. (4) | 0.719 | 0.064 | -73.82 | -69.21 | $a = -{0.051}, b = {0.180}$ |
| Finger-Fitts Law - $W$ , Eq. (8) | 0.986 | 0.015 | -118.81 | -109.46 | $a = {0.021}, b = {0.123}, {c}^{2} = {11.260}$ |
| Finger-Fitts Law - ${W}_{e}$ , Eq. (2) | 0.968 | 0.021 | -106.37 | -100.35 | $a = -{0.109}, b = {0.167}, {\sigma }_{a} = {1.5}$ |

Table 3: The parameters, ${R}^{2}$ , RMSE of leave-one-(A, W)-out cross validation, and information criteria AIC and WAIC of the models. For AIC and WAIC, the smaller the values, the more accurate the model prediction.

§ 4.3.4 INFORMATION CRITERIA + +Information criteria $\left\lbrack {1,2,{30},{31}}\right\rbrack$ have been widely used to compare the quality of models because they take into account the complexity of the model (i.e., the number of parameters). Commonly used information criteria include ${AIC}$ and ${WAIC}$ , both of which penalize the complexity of a model. In general, the smaller the information criterion, the better the model. We calculated multiple information criteria, including ${AIC}$ and ${WAIC}$ (Table 3). As shown, the Finger-Fitts- $W$ law outperforms Fitts’ law - $W$ and the Finger-Fitts- ${W}_{e}$ law on these metrics. + +§ 4.4 DISCUSSION + +§ 4.4.1 THE VALIDITY OF THE FINGER-FITTS- $W$ MODEL
+ +Our study showed that the Finger-Fitts- $W$ law had the highest prediction accuracy among all the tested models across a number of metrics, including information criteria that take into account the number of model parameters. The ${R}^{2}$ value, a commonly used measure of pointing model fitness, is 0.986, higher than both Fitts' law - $W$ $\left( {{R}^{2} = {0.927}}\right)$ and Finger-Fitts- ${W}_{e}$ $\left( {{R}^{2} = {0.968}}\right)$ . Its RMSE, a cross-validation metric, is also the lowest among all the tested models. To take into account the number of model parameters, we examined the information criteria. The AIC and WAIC showed that the Finger-Fitts- $W$ law improved prediction accuracy over Fitts’ law and the Finger-Fitts law: it reduced AIC from -94.00 to -118.81, and WAIC from -86.87 to -109.46. These two metrics penalize the extra parameter in the Finger-Fitts- $W$ law, adding further evidence of the model's strength. + +§ 4.4.2 THE " $W - c$ " MODEL SERVES AS A SIMPLIFICATION OF THE FINGER-FITTS- $W$ MODEL + +We notice that the Finger-Fitts- $W$ law resembles the model proposed in 1968 by Welford [32], in which $W - c$ was used in lieu of $W$ to account for hand tremor when acquiring small-sized targets with a pencil (i.e., ${MT} = a + b{\log }_{2}\left( {\frac{A}{W - c} + 1}\right)$ ). We investigated whether this $W - c$ model could serve as a simplification of the Finger-Fitts- $W$ model based on the data collected in our experiment. Our analysis shows that the $W - c$ model has slightly weaker prediction performance than the Finger-Fitts- $W$ law, with ${R}^{2} = {0.984}$ and RMSE $= {0.016}$ . Its performance is still better than that of Fitts’ law - $W$ and the Finger-Fitts- ${W}_{e}$ law. Therefore, it can serve as a simplification of the Finger-Fitts- $W$ law for touch modeling.
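The model fits reported above can be reproduced on any table of (A, W, MT) observations with ordinary least squares plus a one-dimensional scan over $c$. The sketch below is our own illustration (all function names are ours); it checks the procedure on noise-free synthetic data generated from the Table 3 parameters rather than on the study's raw data:

```python
import math

def fit_linear(ids, mts):
    """Closed-form least squares for MT = a + b * ID."""
    n = len(ids)
    mx, my = sum(ids) / n, sum(mts) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(ids, mts))
         / sum((x - mx) ** 2 for x in ids))
    return my - b * mx, b

def fit_with_c(conditions, mts, id_fn, c_grid):
    """Scan c over a grid; for each candidate, solve a and b in closed
    form and keep the (a, b, c) with the smallest squared error."""
    best = None
    for c_try in c_grid:
        try:
            ids = [id_fn(A, W, c_try) for A, W in conditions]
        except ValueError:
            continue  # c too large: ID undefined for some W
        a_try, b_try = fit_linear(ids, mts)
        err = sum((a_try + b_try * i - t) ** 2 for i, t in zip(ids, mts))
        if best is None or err < best[3]:
            best = (a_try, b_try, c_try, err)
    return best

def id_ffitts_w(A, W, c):
    """Index of difficulty of the Finger-Fitts-W model (Eq. 8)."""
    if W <= c:
        raise ValueError
    return math.log2(A / math.sqrt(W**2 - c**2) + 1)

def id_welford(A, W, c):
    """Index of difficulty of Welford's W - c model."""
    if W <= c:
        raise ValueError
    return math.log2(A / (W - c) + 1)

# Conditions mirror the study design: A in {16, 28, 60} mm, W in {4..12} mm.
conds = [(A, W) for A in (16, 28, 60) for W in (4, 6, 8, 10, 12)]
# Noise-free MTs generated from the Table 3 fit of Eq. (8), just to
# check that the scan recovers the generating parameters.
true = (0.021, 0.123, math.sqrt(11.260))
mts = [true[0] + true[1] * id_ffitts_w(A, W, true[2]) for A, W in conds]
grid = [i / 20 for i in range(0, 80)]  # c from 0.00 to 3.95
a, b, c, sse = fit_with_c(conds, mts, id_ffitts_w, grid)
print(round(b, 3), round(c, 2))
```

Passing `id_welford` instead of `id_ffitts_w` fits the $W - c$ simplification with the same routine, which is how the two forms can be compared on equal footing.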
+ +§ 5 CONCLUSIONS + +Our main conclusion is that the one-dimensional Finger-Fitts- $W$ model (Equation 9), a variant of the Finger-Fitts law [6], can model the movement time in touch pointing quite well: + +$$ +{MT} = a + b{\log }_{2}\left( {\frac{A}{\sqrt{{W}^{2} - {c}^{2}}} + 1}\right) . \tag{9} +$$ + +It complements the original Finger-Fitts law [6], which predicts movement time with the variance of the observed touch point distribution. Because it uses nominal parameters only for prediction, the Finger-Fitts- $W$ model can answer "what if" questions without obtaining the variance of the touch point distribution. Our evaluation shows the Finger-Fitts- $W$ model outperforms Fitts’ law (which also uses nominal parameters only for prediction) in model fitness, measured by ${R}^{2}$ values, cross-validation RMSE, and information criteria. Overall, our investigation shows that the Finger-Fitts- $W$ model is a valid model for modeling one-dimensional touch pointing. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Zd5GbiuwX_t/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Zd5GbiuwX_t/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..f9de59ec795895e5b6e5b3b3ffab8181d8548ff9 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Zd5GbiuwX_t/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,391 @@ +# Design and Investigation of One-Handed Interaction Techniques for Single-Touch Gestures + +Category: Research + +![01963ead-0611-7479-8053-f4f3b388f5e6_0_218_380_1365_632_0.jpg](images/01963ead-0611-7479-8053-f4f3b388f5e6_0_218_380_1365_632_0.jpg) + +Figure 1: Our two techniques.
In Force Cursor (FC, top), the cursor mode is triggered by a user's swipe from the bezel (a); then, the user moves the cursor by dragging the finger (b); a touch-down event is issued at the cursor position when the user increases the force (c), and a touch-up event is issued when the user decreases the force (d); the cursor mode ends when the user releases the finger from the screen (e). In Event Forward Cursor (EFC, bottom), the trigger of the cursor mode (f) and the method of moving the cursor (g) are the same as in FC (a and b, respectively); when the finger is released from the screen in the cursor mode (h), the mode switches to the forward mode, and then, when the user performs a gesture anywhere on the screen in the forward mode, the gesture is forwarded to the cursor position; in this figure, a tap, i.e., a touch-down event (i) followed by a touch-up event (j), is forwarded. The forward mode ends when the user releases the finger from the screen; however, to enable single-touch gestures that involve touch-up events, such as a double-tap or tap-and-hold, the forward mode continues if the screen is touched again within the double-tap waiting time (0.25 seconds) after the release. + +## Abstract + +Many one-handed interaction techniques have been proposed for interacting with a smartphone with only one hand. However, these techniques are all designed for selecting (tapping) unreachable targets, and their performance on other single-touch gestures, such as a double-tap, swipe, or drag, has not been investigated. In our research, we design two one-handed interaction techniques, Force Cursor (FC) and Event Forward Cursor (EFC), each of which enables a user to perform all single-touch gestures. FC is a cursor technique that enables a user to issue touch events using force; EFC is a cursor technique that enables a user to issue touch events by a two-step operation.
We conducted a user study to investigate the single-touch gesture performance of one-handed interaction techniques: FC, EFC, and a content shrinking technique. The results show that the success rate of one-handed interaction techniques varies depending on the gesture and that EFC has a high success rate regardless of the gesture. These results clarify the importance of investigating the single-touch gesture performance of one-handed interaction techniques. + +Index Terms: Human-centered computing-Interaction techniques; Human-centered computing-Gestural input + +## 1 INTRODUCTION + +Many users tend to use only one hand to interact with their smartphones (i.e., holding a smartphone in one hand and using only the thumb) $\left\lbrack {{23},{37}}\right\rbrack$ . A possible reason is that only one hand is available when the user holds an umbrella or baggage in the other hand [22]. However, when interacting with the smartphone with only one hand, it is difficult for a user to reach all parts of the smartphone's screen with the thumb without changing the grasping posture, because the thumb’s reach is limited [5,32]. Therefore, the user needs to interact with the smartphone while changing the grasping posture as appropriate. However, changing the grasping posture makes the user's grasp of the smartphone unstable, which hinders the user’s comfortable interaction with the smartphone [12, 13] and may cause the device to fall.
Therefore, while these techniques have been shown to improve a user's target selection (tap) performance compared to the condition without one-handed interaction techniques, the performance of other single-touch gestures (e.g., a double-tap, swipe, or drag) has not been investigated. Moreover, many techniques do not support single-touch gestures other than a tap. Note that these single-touch gestures, including a tap, are frequently used on a smartphone [3]. For example, a double-tap is used for the skip function in video-viewing applications; swipes from the bottom and top bezels are used to return to the home screen and to access an information center, respectively; and a drag is used for moving an icon on the home screen and for selecting multiple photos. Thus, enabling a user to perform such single-touch gestures is important for a one-handed interaction technique, and so is investigating the single-touch gesture performance of such techniques. + 
+In our research, we designed two techniques, Force Cursor (FC, Figure 1 top) and Event Forward Cursor (EFC, Figure 1 bottom), that enable a user to perform single-touch gestures on an unreachable area of a smartphone. Both techniques use a cursor to allow the user to interact with the smartphone while keeping a stable grasp, since cursor-based techniques can stabilize the grasp of the smartphone [9]. The reason why single-touch gestures cannot be performed with previous one-handed interaction techniques using a cursor (hereafter, cursor techniques) is that they are designed so that a tap is performed at the cursor position when the user releases the finger from the screen. By contrast, we designed our cursor techniques so that a user can issue touch events (i.e., touch-down, touch-move, and touch-up events) at the cursor position. Specifically, FC is a cursor technique that allows the user to issue touch events by changing the force (Figure 1 top). 
On the other hand, EFC is a cursor technique that involves two steps of operation: the first determines the position of touch events; the second performs the single-touch gesture (Figure 1 bottom). + 
+Moreover, to investigate the single-touch gesture performance of one-handed interaction techniques, we conducted a user study with four techniques: our two techniques (FC and EFC), a contents-shrinking technique (One-handed Mode [41] (OM), which allows a user to perform all single-touch gestures), and a direct thumb-touch technique, in which a user uses a smartphone without any one-handed interaction technique. Based on the results of the user study, we discuss the performance of each one-handed interaction technique. + 
+The main contributions of this paper are as follows: 1) the design of two one-handed interaction techniques (FC and EFC), each of which enables a user to perform all single-touch gestures while keeping the grasp of the smartphone stable, and 2) an investigation of the single-touch gesture performance of one-handed interaction techniques, which has not been conducted before. The results show that OM is a fast technique for performing gestures, FC is a technique that can stabilize the user's grasp of the smartphone, and EFC is a technique with a high success rate regardless of the gesture to be performed. We also found that the success rate of OM and FC varies greatly depending on the gesture to be performed, which indicates the importance of investigating single-touch gesture performance. + 
+## 2 RELATED WORK + 
+Many one-handed interaction techniques have been proposed to facilitate one-handed interaction on smartphones. We categorize these techniques into three groups: screen transformation techniques, proxy region techniques, and cursor techniques. + 
+### 2.1 Screen Transformation Techniques + 
+These techniques enable a user to move the contents shown on the screen (hereafter, the contents), shrink the contents, or both.
+ 
+On the iPhone 6 and later, a function called "Reachability" has been introduced [1]. With Reachability, a user can move the contents down by double-tapping the home button or swiping down on the bottom edge of the screen. Similarly, PalmTouch [31] can be used to move the contents down by touching the screen with the palm. In Telekinetic Thumb [20], a user can move the contents to the lower right of the screen by performing a pull gesture above the screen. Sliding Screen [25] is triggered by a swipe from the smartphone's bezel or a touch with a large contact area; when a user drags the thumb, the contents are moved point-symmetrically or in the direction of the thumb's movement. MovingScreen [45] is similarly designed to move the contents point-symmetrically in the direction of the thumb's movement and is triggered by a swipe from the bezel; it differs in that the movement's speed changes in proportion to the swipe distance on the bezel where it is triggered. TiltSlide [8] also moves the contents in the same way and is triggered by tilting the smartphone. In IndexAccess [18] and the technique proposed by Le et al. [30], a touchpad is attached to the back of the smartphone, and the index finger's movement on the touchpad moves the contents. In these techniques, after the contents are moved, a part of the contents is hidden outside the screen; thus, dragging a content item (e.g., an icon) from the area beyond the reach of the thumb to the hidden area is not supported, and vice versa. + 
+Galaxy's One-handed Mode [41] (OM) is triggered by a triple-tap of the home button or a swipe from the four corners of the screen. It shrinks the contents and places them near the thumb; by default, the contents shrink to two-thirds of their size. 
Similarly, TiltReduction [41] also shrinks the contents and is triggered by tilting the smartphone. These contents-shrinking techniques can be adapted so that a user can perform all single-touch gestures, although their single-touch gesture performance has not yet been investigated. Thus, we investigate this performance. + 
+### 2.2 Proxy Region Techniques + 
+In proxy region techniques, a user can use a different area of the screen, or the space around the screen, as an alternative area for operating the unreachable area. + 
+ThumbSpace [24] is triggered by a drag on the screen; it then displays a popup view that miniaturizes all the contents of the screen. Although this technique solves the problem that the thumb cannot reach all parts of the smartphone's screen without changing the grasping posture, it reduces the size of every target because it shrinks all the contents of the screen, which may lead to the fat finger problem [43] and occlusion (i.e., a small target is occluded by the thumb). TapTap [40] is triggered when a user touches the screen; it then displays a popup at the center of the screen showing an enlarged view of the area around the touched position. However, it is difficult to use this technique for an area far beyond the thumb's reach because the user needs to touch near the area that the user wants to interact with. Hasan et al. [15] proposed a technique that uses the in-air space above the screen, which allows the user to interact with unreachable targets by using three-dimensional movements of the thumb. + 
+In the technique proposed by Lochtefeld [34], a touchpad is attached to the back of a smartphone, which allows a user to operate an unreachable target by touching the target from the back of the device with the index finger; this design utilizes the fact that the reachable area of one-handed interaction can be extended by 15 percent by using the index finger on the back of the smartphone [52]. 
However, these techniques [15, 34] require an additional device. + 
+### 2.3 Cursor Techniques + 
+In cursor techniques, a user can use a cursor to select an unreachable target instead of touching the target directly. + 
+Most previous cursor techniques switch to the mode in which the cursor is used (the cursor mode) when a user performs a predetermined trigger gesture; the cursor then appears under the finger, and the user can move it according to the distance of the finger's movement by dragging the finger while in the cursor mode. For example, TiltCursor [8] is triggered by tilting the smartphone, BezelCursor [33] by a swipe from the bezel, Extendible Cursor [25] by a swipe from the bezel or a touch with a large contact area, ExtendedThumb [28] by a double-tap, and MagStick [40] by touching the screen. + 
+Unlike these techniques, in CornerSpace and BezelSpace [53], the cursor appears in places other than under the finger. CornerSpace [53] displays a popup showing a shrunken view of all the screen contents when a user swipes from the bezel. Then, when the user touches the popup, the cursor appears at the corresponding position on the screen. In BezelSpace [53], when a user swipes from the bezel, buttons representing the four corners and the center of the screen appear, and the cursor appears at the corresponding position when the user presses a button. + 
+2D-Dragger [44], ForceRay [9], and HeadReach [46] use different methods of moving the cursor. In 2D-Dragger [44], when a user moves the finger, the cursor moves to the nearest target in the direction of the finger's movement. In ForceRay [9], the cursor moves away from the finger when a user increases the force, and it moves toward the finger when the force is decreased. 
In HeadReach [46], the cursor is moved by a combination of the direction of the face and dragging a finger. + 
+Although the triggers, the initial cursor position, and the method of moving the cursor differ among these previous cursor techniques, the mechanism for issuing touch events (i.e., selecting a target) is the same in all of them: when a user releases the finger from the screen during the cursor mode, a tap is performed (i.e., both touch-down and touch-up events are issued simultaneously) at the cursor position. This approach makes it impossible for a user to perform single-touch gestures other than a tap. By contrast, our two techniques (FC and EFC) enable a user to perform all single-touch gestures using a cursor. The work in [14] is a previous study with the same concept as ours; however, no experiments were conducted with participants other than the authors. + 
+## 3 DESIGN OF OUR TECHNIQUES + 
+We designed two one-handed interaction techniques that enable a user to perform all single-touch gestures using a cursor. Since previous studies [8, 9, 25] show the high performance of cursor techniques triggered by a swipe from the bezel, we adopted a swipe from the bezel as the trigger to switch to the cursor mode in both techniques (Figure 1a, f). Moreover, we designed both techniques so that the cursor moves in the same direction as the finger movement (Figure 1b, g), as in [8, 33]. + 
+### 3.1 Force Cursor (FC) + 
+In FC, the following operations are required to issue touch events at the cursor position. First, the user performs a swipe from the bezel to switch to the cursor mode (Figure 1a); then, the user drags the finger to move the cursor to the desired position (Figure 1b). The cursor movement distance is calculated by multiplying the thumb movement distance by the control-display ratio. + 
+A touch-down event is issued when the user increases the force above a threshold (Figure 1c). 
A touch-up event is issued when the user decreases the force below the threshold after the touch-down event has been issued. The cursor mode continues until the user releases the finger from the screen, allowing the user to keep using the cursor to perform gestures on an unreachable area. Since this design allows the user to issue touch-down, touch-move, and touch-up events by controlling the force, all single-touch gestures can be performed. For example, a tap is performed by increasing and then decreasing the force (i.e., clicking using force [51]), a double-tap is performed by quickly repeating the click twice, and a swipe or drag is performed by moving the finger while the force is applied. + 
+We conducted a demonstration to confirm whether users could use FC and, based on the comments from the demonstration participants, found the following problems. First, without feedback it is not possible to know how much force is currently being applied. Second, it is difficult to perform a double-tap using a cursor with force (i.e., to quickly repeat a click using force twice). Third, it is difficult to make large finger movements while applying high force (i.e., to perform a drag using the cursor). Therefore, we added the following three functions to solve these problems. + 
+![01963ead-0611-7479-8053-f4f3b388f5e6_2_976_147_637_192_0.jpg](images/01963ead-0611-7479-8053-f4f3b388f5e6_2_976_147_637_192_0.jpg) + 
+Figure 2: Circular bar that represents the current applied force displayed around the cursor of FC. The bar is blue when the force is below the threshold, and red when it is above. + 
+#### 3.1.1 Visual Force Feedback + 
+Previous studies [7, 35, 48] have shown that continuous visual feedback is effective in techniques using force. Therefore, we display a circular bar to provide continuous visual feedback to the user (Figure 2). 
The circular bar is displayed in blue when the current applied force is below the threshold (Figure 2a) and in red when it is above the threshold (Figure 2b). + 
+#### 3.1.2 Double-Tap Assistance + 
+In the demonstration, the reason why novices often failed to double-tap using FC was that they changed the force before the force threshold was crossed; i.e., before the touch-up event of the first tap was issued, the user was already attempting to issue the touch-down event of the second tap. Therefore, we implemented a function that enables a user to perform a double-tap using the cursor without being aware of the threshold. With this function, the user first increases the force above the threshold, then decreases the force, increases it again, and finally decreases the force below the threshold to perform a double-tap (i.e., the user only needs to cross the force threshold when increasing the force to issue the touch-down event of the first tap and when decreasing the force to issue the touch-up event of the second tap). + 
+#### 3.1.3 Drag Assistance + 
+It is difficult to move a finger while applying high force because of the increased frictional force between the finger and the screen [16, 17]. To solve this problem, we implemented a function in which, once the force has been increased to the maximum detectable value and held there for 1.0 second, touch-move events are issued continuously at the cursor position, independent of the force, until the finger is released from the screen. The user can therefore perform a drag using the cursor with low force; this function of fixing the force state is the same as force lock [17]. + 
+### 3.2 Event Forward Cursor (EFC) + 
+In EFC, in the same way as in FC, when a user performs a swipe from the bezel, the cursor mode is triggered (Figure 1f), and then the user drags the finger to move the cursor to the desired location (Figure 1g). 
In EFC, when the user releases the finger from the screen during the cursor mode, the mode switches to the forward mode (Figure 1h); the cursor is yellow while in the forward mode so that the user knows the current mode. While in the forward mode, all touch events are forwarded to the cursor position. That is, a touch-down event is issued at the cursor position when the finger touches the screen, touch-move events are then issued at the cursor position until the user releases the finger from the screen, and a touch-up event is issued at the cursor position when the user releases the finger from the screen. Although the forward mode ends when the user releases the finger from the screen, to enable single-touch gestures that combine touch-up events, such as a double-tap or tap-and-hold, the forward mode continues if the screen is touched again within the double-tap waiting time (0.25 seconds) after the user releases the finger from the screen. + 
+## 4 USER STUDY: INVESTIGATION OF THE PERFORMANCE OF SINGLE-TOUCH GESTURES + 
+To investigate the single-touch gesture performance of one-handed interaction techniques, we conducted a user study with 8 participants (21-24 years old, M = 22.38, SD = 1.06; 7 males; all owning an iPhone that can detect force; all usually interacting with their smartphones with the right hand). + 
+For the safety of the participants, we conducted this user study remotely via video call. + 
+### 4.1 Setup + 
+#### 4.1.1 Apparatus + 
+Participants used their own iPhones, with the iPhone's force sensitivity set to 'firm'. Three iPhone XS and one iPhone X (screen size: 135.10 mm × 62.39 mm), and one iPhone 8, two iPhone 7, and one iPhone 6s (screen size: 103.94 mm × 58.44 mm) were used in this user study. 
Since the implementation of the experimental application uses "pt" as its unit of length and the actual size of a pt varies slightly depending on the iPhone, we use pt as the unit of length in this section. For the iPhone XS and iPhone X, 1 pt ≈ 0.17 mm, and for the iPhone 8, iPhone 7, and iPhone 6s, 1 pt ≈ 0.16 mm. + 
+#### 4.1.2 Techniques + 
+We used FC, EFC, and One-Handed Mode (OM, the same as Samsung's [41]) as one-handed interaction techniques that enable a user to perform all single-touch gestures. Other one-handed interaction techniques, such as Apple's Reachability [1] or cursor techniques (e.g., [25, 33]), were excluded from this user study because they do not support all single-touch gestures. In addition, as a baseline, we used Direct Touch (DT), i.e., using the smartphone without any one-handed interaction technique. That is, four different techniques (DT, OM, FC, and EFC) were used in this user study. + 
+We unified the triggers of OM, FC, and EFC to the swipe from the bezel to avoid the effect of different triggers. For FC and EFC, the size of the cursor was 9 pt, and the cursor's control-display ratio was set to three; that is, the cursor moves three times the distance of the finger movement. + 
+In One-Handed Mode (OM), when a user swipes from the bezel, the contents are shrunk; swiping again restores the contents to their original size. In this user study, the contents are shrunk to 2/3 of their original size and moved to the lower right corner of the screen (Figure 3e), just like the standard Galaxy setting. + 
+For our implementation of Force Cursor (FC), we used the force readings provided by the iPhone's force-sensitive touchscreen. 
According to Apple's documentation [2], with the force sensitivity set to "firm", force is reported as a unitless value from 0 to 480/72 ≈ 6.67 (≈ 4.0 N [36]), with values around 1.0 (≈ 0.60 N [36]) corresponding to the force applied by an ordinary touch. We set the force threshold for issuing touch-down and touch-up events in FC to 3.0 (≈ 1.8 N). + 
+#### 4.1.3 Targets + 
+18 targets were placed on an invisible 15 × 7 grid, as shown in Figure 3a; the size of the grid varies with the size of the screen. The target size was set to two different values: 60 pt × 60 pt (Large, Figure 3a, b, d) and 30 pt × 30 pt (Small, Figure 3c). Targets were placed starting from the top-left corner of the grid. In addition, if part of a target protruded from the screen, the target was moved toward the center of the screen by the amount of the protrusion. During the task, only the current target is displayed in red, and the other targets are not displayed (Figure 3b, c, d). + 
+![01963ead-0611-7479-8053-f4f3b388f5e6_3_923_147_723_393_0.jpg](images/01963ead-0611-7479-8053-f4f3b388f5e6_3_923_147_723_393_0.jpg) + 
+Figure 3: Targets and screenshots of the user study. a: 18 targets placed on the screen. b: A target for tap and double-tap sessions; the target size is Large. c: A target for swipe sessions; the target size is Small and the swipe direction is down. d: A target for drag sessions; the target size is Large. e: The screen when OM is used; the contents are shrunk to 2/3 of their original size. + 
+The location and size of the targets were based on the experiments of [9]. However, while [9] did not place targets within the thumb's reach, we place targets over the entire screen (including within reach) since different sizes of smartphones change the user's unreachable area [32].
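As an illustration of the target layout logic just described, the following Python sketch maps a grid cell to screen coordinates and applies the protrusion rule (a target that would extend past a screen edge is shifted toward the screen center by the amount of the protrusion). The grid-to-screen mapping, the choice of cells, and the example screen size are our own assumptions for illustration, not the exact layout of Figure 3:

```python
def grid_to_point(col, row, screen_w, screen_h, cols=15, rows=7):
    """Map a cell of the invisible grid to a target center in pt.

    Which of the 105 cells hold the 18 targets is not reproduced here;
    this uniform mapping is an assumption for illustration only.
    """
    return (col * screen_w / (cols - 1), row * screen_h / (rows - 1))


def clamp_target(cx, cy, size, screen_w, screen_h):
    """Shift a square target's center inward by the amount it protrudes."""
    half = size / 2
    cx += max(0.0, half - cx)               # protrusion past the left edge
    cx -= max(0.0, (cx + half) - screen_w)  # ... past the right edge
    cy += max(0.0, half - cy)               # ... past the top edge
    cy -= max(0.0, (cy + half) - screen_h)  # ... past the bottom edge
    return cx, cy
```

For example, on a hypothetical 375 × 812 pt screen, a Large (60 pt) target whose grid cell maps to the top-left corner (0, 0) is shifted to (30, 30) so that it lies fully on screen.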
+ 
+#### 4.1.4 Single-Touch Gestures + 
+We used four commonly used single-touch gestures: tap (Tap), swipe (Swipe), double-tap (DTap), and drag (Drag). We set up a session for each gesture. + 
+In a Tap session, a target is displayed in red (Figure 3b) and participants perform a tap on the target. A tap is performed when both the touch-down event and the touch-up event are issued on the same target. + 
+In a DTap session, a target is displayed in red (Figure 3b) and participants perform a double-tap on the displayed target. A double-tap is a gesture in which the user touches the same target again within the double-tap waiting time after performing a tap and then releases the finger from the screen on the target. In this user study, the double-tap waiting time was set to 0.25 seconds, the same as the iPhone default. + 
+In a Swipe session, participants swipe the target in the direction of the arrow displayed on the target (Figure 3c). The direction of the swipe was randomly selected from four directions (up, down, right, and left) when the target was updated. However, because a swipe toward the screen bezel cannot be recognized on a target that is in contact with that bezel, directions toward the screen bezel were removed from the selection (e.g., one direction was randomly selected from the up, left, and down directions for the rightmost targets). + 
+In a Drag session, two targets were displayed (Figure 3d), and participants dragged a target labeled '1' (target1) to a target labeled '2' (target2). Target2 was randomly selected from the 17 targets other than target1. + 
+### 4.2 Task and Procedure + 
+Before starting this user study, we initiated a video call with the participants to explain it. Participants sat on a chair and interacted with their iPhones with their right hands, using only the thumb. 
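The tap and double-tap definitions of Sect. 4.1.4 can be made concrete with a small event-stream classifier. This is a simplified sketch under our own assumptions (events arrive as (time, kind) pairs on a single target, and only taps and double-taps are distinguished); it is not the code of the experimental application:

```python
DOUBLE_TAP_WAIT = 0.25  # seconds; the iPhone default used in the study

def classify_taps(events):
    """Classify a stream of (time, kind) touch events on a single target.

    kind is "down" or "up".  A down/up pair forms a tap; if the next
    touch-down lands within DOUBLE_TAP_WAIT of the previous touch-up,
    the two taps merge into one double-tap.
    """
    gestures = []
    pending_up = None  # touch-up time of a tap not yet classified
    for t, kind in events:
        if kind == "down":
            if pending_up is not None and t - pending_up > DOUBLE_TAP_WAIT:
                gestures.append("tap")  # waiting time expired: a single tap
                pending_up = None
        elif kind == "up":
            if pending_up is not None:
                # Second touch-up within the waiting window: double-tap.
                gestures.append("double-tap")
                pending_up = None
            else:
                pending_up = t
    if pending_up is not None:
        gestures.append("tap")  # stream ended with an unpaired tap
    return gestures
```

For example, a down/up pair followed within 0.25 s by another down/up pair is classified as one double-tap, whereas two pairs separated by more than 0.25 s count as two single taps.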
+ 
+We recorded 4 techniques × 4 gestures × 18 targets × 2 target sizes × 2 repetitions = 1,152 trials per participant. + 
+![01963ead-0611-7479-8053-f4f3b388f5e6_4_149_147_722_675_0.jpg](images/01963ead-0611-7479-8053-f4f3b388f5e6_4_149_147_722_675_0.jpg) + 
+Figure 4: The results of Time [s]. The top is a table summarizing the results of TECHNIQUE, TECHNIQUE × GESTURE, and TECHNIQUE × SIZE, and the bottom is a bar graph showing the results of TECHNIQUE × GESTURE. Pairs that do not share a letter of Group are significantly different. Whiskers denote 95% CI. + 
+The techniques and gestures were counterbalanced using a Latin square. The display order of the targets was random. Target size was also selected in random order; however, of the 16 combinations of 4 techniques × 4 gestures, 8 started with small targets and the remaining 8 with large targets. Before starting a new gesture session with each technique, participants performed the gesture on 18 targets as practice. Then, the gesture was performed on the 18 targets twice at one size and twice again at the other size. Success or failure of the gesture was signaled to the participant with different sounds. When the gesture was successfully performed, the next target was displayed; when the gesture failed, the same target was displayed again. + 
+After the completion of the 2 × 18 trials at both sizes, participants began the next gesture session. Then, after the 4 gesture sessions were completed, the next technique was presented to the participants. To obtain subjective evaluations, we asked participants to answer the System Usability Scale [6] (SUS) questionnaire for each technique after all of that technique's gesture sessions were completed. + 
+The participants took breaks as needed to avoid hand fatigue. 
The user study took about two hours, and the participants received \$33.1 as a reward. + 
+### 4.3 Result + 
+The independent variables were TECHNIQUE (DT, OM, FC, and EFC), GESTURE (Tap, DTap, Swipe, and Drag), and SIZE (Large and Small). The dependent variables were trial completion time (Time), success rate (Accuracy), the jerk (Jerk) and angular acceleration (Angular Acceleration) used to evaluate the stability of the smartphone, and the SUS score (SUS Score). Jerk is used to evaluate smoothness of motion [29], and angular acceleration is used to evaluate the vibration state [54]. For the analysis, we used repeated-measures ANOVAs. Since the purpose of this user study is to investigate the single-touch gesture performance of one-handed interaction techniques, we describe only the main effect of TECHNIQUE and the related interaction effects. + 
+![01963ead-0611-7479-8053-f4f3b388f5e6_4_923_153_726_666_0.jpg](images/01963ead-0611-7479-8053-f4f3b388f5e6_4_923_153_726_666_0.jpg) + 
+Figure 5: The results of Accuracy [%]. The top is a table summarizing the results of TECHNIQUE, TECHNIQUE × GESTURE, and TECHNIQUE × SIZE, and the bottom is a bar graph showing the results of TECHNIQUE × GESTURE. Pairs that do not share a letter of Group are significantly different. Whiskers denote 95% CI. + 
+Because this user study was conducted remotely and participants used their own smartphones, there were two different sizes of smartphones. However, there was no significant effect of smartphone size (Time: F(1, 6) = 3.35, p = 0.12; Accuracy: F(1, 6) = 2.86, p = 0.14; Jerk: F(1, 6) = 0.14, p = 0.72; Angular Acceleration: F(1, 6) = 2.23, p = 0.19). + 
+#### 4.3.1 Time + 
+Time results are shown in Figure 4. 
The reason why Tap was slower than DTap and Swipe is that there was a 0.25-second waiting time after a tap to judge whether a double-tap would follow, whereas a double-tap is confirmed immediately after the second tap is performed. Hence, in this setup, a tap inherently takes longer than a double-tap [21]. + 
+TECHNIQUE had a significant main effect on Time (F(3, 9425) = 25.05, p < .001). Tukey's HSD test was also significant (p < .05 between OM and FC, p < .01 between DT and FC, and p < .001 for the others). There was also a significant TECHNIQUE × GESTURE interaction effect (F(9, 9425) = 3.26, p < .01). Tukey's HSD test was also significant (p < .05 between OM's DTap and OM's Swipe, OM's Drag and FC's DTap, FC's DTap and EFC's DTap, and EFC's Tap and EFC's Swipe; p < .01 between DT's Tap and DT's Swipe, DT's Tap and OM's Swipe, DT's Swipe and OM's DTap, DT's Drag and FC's Swipe, DT's Drag and EFC's DTap, OM's Tap and OM's Swipe, and FC's DTap and FC's Swipe; and p < .001 for the others). As shown in Figure 4, for all gestures, Time was DT ≃ OM < EFC ≤ FC. In other words, the techniques of touching the target directly with a finger are faster than the techniques of operating the target indirectly with a cursor; this is similar to the results shown by Chang et al. [8] in a tap-only experiment. In addition, there was also a significant TECHNIQUE × SIZE interaction effect (F(3, 9425) = 3.52, p < .05). Tukey's HSD test was also significant (p < .01 between DT's Small and OM's Small, and p < .001 for the others). As expected, in Time, Large was faster than Small for all techniques.
+ 
+![01963ead-0611-7479-8053-f4f3b388f5e6_5_151_146_720_673_0.jpg](images/01963ead-0611-7479-8053-f4f3b388f5e6_5_151_146_720_673_0.jpg) + 
+Figure 6: The results of Jerk [m/s³]. The top is a table summarizing the results of TECHNIQUE, TECHNIQUE × GESTURE, and TECHNIQUE × SIZE, and the bottom is a bar graph showing the results of TECHNIQUE × GESTURE. Pairs that do not share a letter of Group are significantly different. Whiskers denote 95% CI. + 
+#### 4.3.2 Accuracy + 
+Accuracy results are shown in Figure 5. TECHNIQUE had a significant main effect on Accuracy (F(3, 473) = 8.24, p < .001). Tukey's HSD test was significant (p < .05 between OM and FC; p < .01 between DT and FC; and p < .001 between FC and EFC). There was also a significant TECHNIQUE × GESTURE interaction effect (F(9, 473) = 5.92, p < .001). Tukey's HSD test was also significant (p < .05 between DT's DTap and OM's DTap, DT's Swipe and FC's DTap, DT's Drag and FC's Tap, OM's Tap and FC's DTap, OM's Tap and FC's Drag, OM's DTap and EFC's Tap, OM's DTap and EFC's Swipe, OM's Swipe and FC's Tap, FC's DTap and FC's Swipe, FC's Swipe and FC's Drag, and FC's Drag and EFC's Drag; p < .01 between DT's Tap and DT's Drag, DT's Tap and OM's Swipe, DT's DTap and FC's DTap, DT's DTap and FC's Drag, DT's Swipe and OM's Drag, DT's Drag and EFC's DTap, OM's DTap and FC's Tap, OM's Swipe and EFC's DTap, FC's DTap and EFC's Drag, FC's DTap and EFC's Swipe, FC's Drag and EFC's Swipe, and FC's Drag and EFC's Tap; and p < .001 for the others). As shown in Figure 5, there was no significant difference between TECHNIQUEs in Tap. However, there were significant differences across TECHNIQUEs in the other gestures. Although EFC had a high success rate for all gestures, OM and FC had a lower success rate for DTap and Drag. 
In addition, there was also a significant TECHNIQUE × SIZE interaction effect (F(3, 473) = 24.62, p < .001). Tukey's HSD test was also significant (p < .05 between DT's Small and FC's Large, DT's Large and EFC's Small, OM's Large and EFC's Small, and FC's Large and EFC's Large; p < .01 between FC's Small and EFC's Small, and EFC's Small and EFC's Large; and p < .001 for the others). As expected, Large had a higher success rate than Small for all techniques. The difference in success rates between the SIZEs (Large and Small) was smaller for the techniques using a cursor (FC: 7.64%, EFC: 6.07%) and larger for the other two techniques (DT: 9.72%, OM: 21.18%); this result may be due to the fact that techniques of touching the target directly with a finger are susceptible to the fat finger problem [43] and occlusion. + 
+| Angular Acceleration [rad/s²] (mean ± 95% CI) | DT | OM | FC | EFC |
+| --- | --- | --- | --- | --- |
+| Overall | 0.46 ± 0.04 | 0.29 ± 0.03 | 0.17 ± 0.01 | 0.21 ± 0.02 |
+| Tap | 0.36 ± 0.07 | 0.23 ± 0.06 | 0.13 ± 0.03 | 0.19 ± 0.06 |
+| DTap | 0.51 ± 0.11 | 0.32 ± 0.09 | 0.17 ± 0.04 | 0.24 ± 0.05 |
+| Swipe | 0.53 ± 0.12 | 0.34 ± 0.09 | 0.18 ± 0.05 | 0.21 ± 0.06 |
+| Drag | 0.45 ± 0.10 | 0.27 ± 0.08 | 0.19 ± 0.05 | 0.18 ± 0.06 |
+| Large | 0.44 ± 0.04 | 0.27 ± 0.04 | 0.15 ± 0.02 | 0.20 ± 0.02 |
+| Small | 0.49 ± 0.06 | 0.32 ± 0.06 | 0.18 ± 0.02 | 0.22 ± 0.03 | + 
+![01963ead-0611-7479-8053-f4f3b388f5e6_5_925_437_723_379_0.jpg](images/01963ead-0611-7479-8053-f4f3b388f5e6_5_925_437_723_379_0.jpg) + 
+Figure 7: The results of Angular Acceleration [rad/s²]. The top is a table summarizing the results of TECHNIQUE, TECHNIQUE × GESTURE, and TECHNIQUE × SIZE, and the bottom is a bar graph showing the results of TECHNIQUE × GESTURE. 
Pairs that do not share a letter of Group are significantly different. Whiskers denote 95% CI. + +#### 4.3.3 Stability of the Smartphone (Jerk and Angular Acceleration) + +Jerk results are shown in Figure 6. TECHNIQUE had a significant main effect on Jerk $\left( {{F}_{3,{9425}} = {64.06}, p < {.001}}\right)$ . Tukey's HSD test was significant (p < .05 between FC and EFC; and others p < .001). There was also a significant TECHNIQUE $\times$ GESTURE interaction effect $\left( {{F}_{9,{9425}} = {51.91}, p < {.001}}\right)$ . Tukey's HSD test was also significant (p < .05 between OM's DTap and OM's Swipe, FC's DTap and EFC's Tap; p < .01 between FC's Drag and EFC's Swipe, EFC's Swipe and EFC's DTap; and others p < .001). In Jerk, FC < EFC < OM < DT for all gestures except Drag, and FC $\simeq$ EFC < OM < DT for Drag. In addition, there was also a significant TECHNIQUE $\times$ SIZE interaction effect $\left( {{F}_{3,{9425}} = {13.44}, p < {.001}}\right)$ . Tukey's HSD test was also significant (p < .01 between FC's Large and EFC's Small, EFC's Small and EFC's Large; and others p < .001). In all techniques, Large had significantly lower jerk than Small. + +Angular Acceleration results are shown in Figure 7. TECHNIQUE also had a significant main effect on Angular Acceleration $\left( {{F}_{3,{9425}} = {37.46}, p < {.001}}\right)$ . Tukey's HSD test was also significant (p < .05 between FC and EFC; and others p < .001). There was also a significant TECHNIQUE $\times$ GESTURE interaction effect $\left( {{F}_{9,{9425}} = {56.12}, p < {.001}}\right)$ .
Tukey's HSD test was also significant (p < .05 between DT's DTap and DT's Swipe, FC's Swipe and EFC's Swipe; p < .01 between OM's DTap and OM's Drag, FC's DTap and EFC's Swipe, EFC's Tap and EFC's Drag, and EFC's DTap and EFC's Swipe; and others p < .001). As with Jerk, in Angular Acceleration, FC < EFC < OM < DT for all gestures except Drag, and FC $\simeq$ EFC < OM < DT for Drag. In addition, there was also a significant TECHNIQUE $\times$ SIZE interaction effect $\left( {{F}_{3,{9425}} = {23.79}, p < {.001}}\right)$ . Tukey's HSD test was also significant (p < .01 between FC's Large and EFC's Small, EFC's Small and EFC's Large; and others p < .001). Large had significantly lower angular acceleration than Small for all techniques except EFC. + +![01963ead-0611-7479-8053-f4f3b388f5e6_6_172_149_660_266_0.jpg](images/01963ead-0611-7479-8053-f4f3b388f5e6_6_172_149_660_266_0.jpg) + +Figure 8: The results of SUS Score. Whiskers denote 95% CI. + +In summary, FC produced the smallest smartphone movement, followed by EFC, OM, and DT. The reason for this may be that the time the thumb touches the screen is longer in the techniques using a cursor (FC and EFC) than in the other two techniques (DT and OM). In particular, since the thumb always touches the screen in FC, the smartphone was more stable than in EFC. + +#### 4.3.4 SUS Score + +The result of SUS Score is shown in Figure 8. As a result of ANOVA, TECHNIQUE did not have a significant main effect on SUS Score. In terms of average value, DT was the highest, followed by EFC, OM, then FC. A likely reason why DT has the highest average value is that SUS tends to yield high scores for familiar techniques [42]; DT is the same technique that participants usually use on their smartphones and was therefore familiar.
Although the average values of OM and EFC were almost the same, the average value of FC was slightly lower than these. This is thought to be because FC was inferior to the other techniques in terms of Time and Accuracy. + +## 5 DISCUSSION + +In this section, we discuss the performance of one-handed interaction techniques based on the results of the user study. + +### 5.1 The Need to Investigate Single-Touch Gesture Performance + +As shown in Figure 4, Figure 6, and Figure 7, the differences in Time, Jerk, and Angular Acceleration across TECHNIQUE did not vary within GESTURE; that is, in Time, DT < OM < FC $\simeq$ EFC, and in Jerk and Angular Acceleration, FC < EFC < OM < DT. However, as shown in Figure 5, although the success rate was high for all techniques in Tap, the success rates of DT, OM, and FC were lower in the other gestures, depending on the gesture. In addition, participants commented that Drag was difficult in OM, that Swipe was easy but DTap was difficult in FC, and that DTap was easier in EFC. These results suggest that the performance of a one-handed interaction technique varies depending on the performed gesture. Therefore, we think that it is important to investigate the single-touch gesture performance of one-handed interaction techniques. + +### 5.2 Selecting a Suitable One-handed Interaction Technique + +In summary, the results of the user study show that OM is the best technique, with a high success rate and fast gesture performance, when the target is large and some smartphone movement is allowed. On the other hand, when the target size is small, OM's success rate for all gestures (especially those other than a tap) is quite low. Although FC had a lower success rate than EFC, its smartphone movement was the smallest. Therefore, FC is considered to be a suitable technique for stabilizing the grip of the smartphone.
EFC takes more time to perform gestures than OM; however, it has a high success rate regardless of the target size and gesture, and it can stabilize the grip of the smartphone more than DT and OM. In other words, EFC is considered to be suitable for careful manipulation and for performing gestures on small, unreachable targets. + +Table 1: Time [s] of the trials where the trigger was performed. CI denotes the 95% CI. + +
| Gesture | OM Mean | OM CI | FC Mean | FC CI | EFC Mean | EFC CI |
|---|---|---|---|---|---|---|
| Tap | 3.82 | ±0.41 | 4.14 | ±0.67 | 2.28 | ±0.24 |
| DTap | 3.81 | ±0.59 | 4.01 | ±0.62 | 1.84 | ±0.25 |
| Swipe | 3.28 | ±0.36 | 3.31 | ±0.44 | 1.95 | ±0.29 |
| Drag | 5.09 | ±1.62 | 3.87 | ±0.65 | 3.10 | ±0.50 |
+ +Table 2: Accuracy [%] of the trials where the trigger was performed. CI denotes the 95% CI. + +
| Gesture | OM Mean | OM CI | FC Mean | FC CI | EFC Mean | EFC CI |
|---|---|---|---|---|---|---|
| Tap | 85.63 | ±4.93 | 80.27 | ±3.52 | 92.53 | ±2.81 |
| DTap | 73.13 | ±4.49 | 68.03 | ±5.74 | 94.62 | ±2.41 |
| Swipe | 85.00 | ±4.32 | 72.56 | ±4.54 | 92.01 | ±2.78 |
| Drag | 70.98 | ±5.19 | 74.59 | ±3.07 | 90.97 | ±2.77 |
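The per-trigger analysis behind Tables 1 and 2 (Sect. 5.3) keeps only the trials in which at least one trigger was performed and then aggregates Time per TECHNIQUE and GESTURE. A minimal sketch of that filtering and aggregation, using a hypothetical trial-record layout (the actual log format is not described in the paper):

```python
from math import sqrt

# Hypothetical trial log: (technique, gesture, n_triggers, time_s, success)
trials = [
    ("EFC", "Tap", 1, 2.1, True),
    ("EFC", "Tap", 2, 2.5, True),
    ("OM",  "Tap", 0, 1.3, True),   # no trigger performed -> excluded
    ("OM",  "Tap", 1, 3.9, False),
]

def mean_ci(values):
    """Mean and 95% CI half-width (normal approximation, sample SD)."""
    n = len(values)
    m = sum(values) / n
    var = sum((v - m) ** 2 for v in values) / (n - 1) if n > 1 else 0.0
    return m, 1.96 * sqrt(var / n)

# Keep only trials where the trigger was performed at least once
triggered = [t for t in trials if t[2] >= 1]
efc_tap_times = [t[3] for t in triggered if t[0] == "EFC" and t[1] == "Tap"]
mean_time, ci = mean_ci(efc_tap_times)
```

Accuracy per cell follows the same filtering, averaging the `success` field instead of `time_s`.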
+ +However, the performance of a one-handed interaction technique varies greatly depending on the situation. Therefore, we think that it is important to allow the user to choose the one-handed interaction technique to be used according to the situation, for example, by enabling both OM and EFC; the introduction of multiple one-handed interaction techniques can be easily accomplished by assigning a separate trigger to each. + +### 5.3 Effects of Number of Performed Triggers + +In the user study, all gestures were performed using each technique. However, each technique requires the user to perform the trigger at different times: OM allows the user to keep the contents shrunk until the user performs the trigger again; FC allows the user to manipulate the cursor until the user removes the finger from the screen after performing the trigger once; EFC, on the other hand, requires the user to perform the trigger each time a gesture is performed with the cursor. Therefore, the results of the user study may be influenced by the number of performed triggers. + +To analyze the effect of the number of performed triggers, we extracted only the trials where one or more triggers were used. The results of Time and Accuracy are shown in Table 1 and Table 2. Jerk and Angular Acceleration did not vary much with whether the trigger was performed; in other words, the results are almost identical to Figure 6 and Figure 7. + +Based on this result, EFC might be the best technique if the user needs to perform the trigger frequently. However, because the number of trials in which the trigger was performed varies greatly depending on the technique (OM: 142 times, FC: 791 times, EFC: 2,304 times), this result may not be robust. In the user study, in OM, participants continued the task with reduced contents after the first trigger was performed.
However, given the actual usage environment, we expect the number of performed triggers to increase because the user will resize the contents to their original size to select a small button, type a letter, or enjoy the displayed content (videos and texts). Therefore, we need to investigate the performance of the techniques in real usage environments. + +### 5.4 Causes of Gesture Failure + +In OM, errors were caused by the difficulty of pointing at small targets (i.e., the fat finger problem [43] and occlusion). Particularly in gestures other than Tap, participants were strongly affected by small targets. Therefore, the error rate may be reduced by combining OM with techniques to select small targets $\left\lbrack {4,{38}}\right\rbrack$ or techniques to improve the accuracy of touch $\left\lbrack {{19},{39}}\right\rbrack$ . + +In FC, many participants said that changing the force caused the cursor to move, resulting in an error. In particular, in DTap, participants had to change the force quickly and repeatedly, which caused the cursor to move, and the success rate was low. In addition, errors occurred because participants unintentionally applied force when moving the cursor, causing a touch event to be issued. These may be improved by introducing a filter that separates cursor movement from changes in force. Corsten et al. [11] found that the performance of techniques using force improves with long-term training, so FC's performance may also improve with more training. + +In EFC, participants commented that the cursor moved unintentionally when they released the finger from the screen to decide where to forward the gesture, resulting in an error. The problem that the touch position changes when the user releases the finger from the screen has been investigated by Xu et al. [50]. Since the cursor moved three times the distance the finger moved in the user study, the effect of this problem was likely amplified.
Therefore, in EFC, it may be possible to increase the success rate by taking advantage of the state just before the user releases the finger from the screen, as in [10]. + +## 6 LIMITATIONS AND FUTURE WORK + +In this section, we discuss the limitations of the user study and our future work. + +### 6.1 Participants' Individual Attributes and Number of Participants + +The participants in the user study were all young and familiar with smartphone interaction. However, a wide range of people, from small children to the elderly and from novices to experts, use smartphones. In particular, elderly people are known to have difficulty with force control [26] and with pointing at small targets using their thumbs $\left\lbrack {{27},{49}}\right\rbrack$ . That is, age may affect the performance of FC and OM, suggesting that the performance of each technique may vary with individual attributes. In addition, because the number of participants was eight, the results of the user study might not be robust. Therefore, we need to conduct an additional user study with more participants of different individual attributes. + +### 6.2 Effects of Long-Term Training + +In the user study, each participant performed each gesture 108 times with each technique, including practice. However, with more training, the performance might change. In particular, Corsten et al. [11] found that the performance of techniques using force improves with practice. In addition, the performance of the other techniques (OM and EFC) might also change as the user becomes an expert. This indicates the need for long-term experiments to accurately determine the performance of one-handed interaction techniques. + +### 6.3 Use in Different Environments + +We conducted the user study with participants seated in a chair. However, smartphones are used in various situations, such as while walking, riding a train, and lying down.
It is known that the accuracy of pointing with the thumb [37] and the user's force resolution [47] are reduced while walking. In addition, the motion of the smartphone changes according to the user's body posture [13]. Therefore, body posture may impact performance in techniques that involve large smartphone movement. Moreover, because the situation in which a smartphone is used may also affect the performance of one-handed interaction techniques, we need to investigate this as well. + +## 7 CONCLUSION + +In this paper, to enable a user to perform all single-touch gestures in an unreachable area, we designed Force Cursor (FC) and Event Forward Cursor (EFC). FC is a technique that issues touch events (touch-down events when the force is increased and touch-up events when the force is decreased) at the cursor position using force. EFC, on the other hand, is a cursor technique that consists of two steps of operation: the first determines the touch event position; the second performs single-touch gestures. Furthermore, we conducted a user study to investigate the single-touch gesture performance of three one-handed interaction techniques: the contents shrinking technique (OM), FC, and EFC. The results show that although the time to perform a gesture and the stability of the smartphone did not vary greatly with the performed gesture, the success rate did: EFC had a high success rate regardless of the gesture, while OM and FC had low success rates except for a tap. In addition, we found that both of the techniques we designed enable the user to interact with the smartphone with a stable grip. + +## REFERENCES + +[1] Apple Inc. Reachability - iPhone User Guide, 2014. https://help.apple.com/iphone/11/?lang=en#/iph66e10a71c. (accessed 2020-09-17). + +[2] Apple Inc. Force Touch - Apple Developer, 2019. https://developer.apple.com/macos/force-touch/. (accessed 2020-09-17). + +[3] Apple Inc.
Gestures - User Interaction - iOS - Human Interface Guidelines - Apple Developer, 2020. https://developer.apple.com/design/human-interface-guidelines/ios/user-interaction/gestures/. (accessed 2020-09-17). + +[4] P. Baudisch and G. Chu. Back-of-device interaction allows creating very small touch devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pp. 1923-1932. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1518701.1518995 + +[5] J. Bergstrom-Lehtovirta and A. Oulasvirta. Modeling the Functional Area of the Thumb on Mobile Touchscreen Surfaces. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, CHI '14, pp. 1991-2000. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2557354 + +[6] J. Brooke. SUS: A Quick and Dirty Usability Scale. Usability Evaluation in Industry, pp. 189-194, 1996. Taylor and Francis. + +[7] J. Cechanowicz, P. Irani, and S. Subramanian. Augmenting the mouse with pressure sensitive input. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pp. 1385-1394. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1240624.1240835 + +[8] Y. Chang, S. L'Yi, K. Koh, and J. Seo. Understanding Users' Touch Behavior on Large Mobile Touch-Screens and Assisted Targeting by Tilting Gesture. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 1499-1508. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702425 + +[9] C. Corsten, M. Lahaye, J. Borchers, and S. Voelker. ForceRay: Extending thumb reach via force input stabilizes device grip for mobile touch input. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 212:1-212:12. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300442 + +[10] C. Corsten, S.
Voelker, and J. Borchers. Release, don't wait! reliable force input confirmation with quick release. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, ISS '17, pp. 246-251. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3132272.3134116 + +[11] C. Corsten, S. Voelker, A. Link, and J. Borchers. Use the force picker, luke: Space-efficient value input on force-sensitive mobile touchscreens. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 661:1-661:12. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3174235 + +[12] R. Eardley, A. Roudaut, S. Gill, and S. J. Thompson. Understanding grip shifts: How form factors impact hand movements on mobile phones. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 4680-4691. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025835 + +[13] R. Eardley, A. Roudaut, S. Gill, and S. J. Thompson. Investigating How Smartphone Movement is Affected by Body Posture. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 202:1-202:8. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3173776 + +[14] K. Hakka, T. Isomoto, and B. Shizuki. One-Handed Interaction Technique for Single-Touch Gesture Input on Large Smartphones. In Symposium on Spatial User Interaction, SUI '19, pp. 21:1-21:2. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3357251.3358750 + +[15] K. Hasan, J. Kim, D. Ahlström, and P. Irani. Thumbs-up: 3d spatial thumb-reachable space for one-handed thumb interaction on smartphones. In Proceedings of the 2016 Symposium on Spatial User Interaction, SUI '16, pp. 103-106. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2983310.2985755 + +[16] S. Heo and G. Lee.
Force gestures: Augmenting touch screen gestures with normal and tangential forces. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, pp. 621-626. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/2047196.2047278 + +[17] S. Heo and G. Lee. Forcedrag: Using pressure as a touch input modifier. In Proceedings of the 24th Australian Computer-Human Interaction Conference, OzCHI '12, pp. 204-207. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2414536.2414572 + +[18] S. Hidaka, T. Baba, and P. Haimes. IndexAccess: A GUI Movement System by Back-of-Device Interaction for One-Handed Operation on a Large Screen Smartphone. International Journal of Asia Digital Art and Design Association, 20(2):41-47, 2016. Asia Digital Art and Design Association. doi: 10.20668/adada.20.2-41 + +[19] C. Holz and P. Baudisch. The generalized perceived input point model and how to double touch accuracy by extracting fingerprints. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, pp. 581-590. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1753326.1753413 + +[20] I. Hwang, E. Rozner, and C. Yoo. Telekinetic thumb summons out-of-reach touch interface beneath your thumbtip (demo). In Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys '19, pp. 661-662. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3307334.3328571 + +[21] K. Ikematsu, K. Tsubouchi, and S. Yamanaka. Predictaps: Latency reduction technique for single-taps based on recognition for single-tap or double-tap. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, pp. 1-9. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3334480.3382933 + +[22] A. Karlson, B. Bederson, and J. Contreras-Vidal.
Understanding One-Handed Use of Mobile Devices. Handbook of Research on User Interface Design and Evaluation for Mobile Technology, pp. 86-101, 2008. doi: 10.4018/978-1-59904-871-0.ch006 + +[23] A. K. Karlson and B. B. Bederson. Studies in One-Handed Mobile Design: Habit, Desire and Agility. Technical report, 2006. + +[24] A. K. Karlson and B. B. Bederson. One-handed Touchscreen Input for Legacy Applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pp. 1399-1408. Association for Computing Machinery, New York, NY, USA, 2008. doi: 10.1145/1357054.1357274 + +[25] S. Kim, J. Yu, and G. Lee. Interaction Techniques for Unreachable Objects on the Touchscreen. In Proceedings of the 24th Australian Computer-Human Interaction Conference, OzCHI '12, pp. 295-298. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2414536.2414585 + +[26] H. Kinoshita and P. R. Francis. A Comparison of Prehension Force Control in Young and Elderly Individuals. European Journal of Applied Physiology and Occupational Physiology, 74(5):450-460, Nov 1996. Springer. doi: 10.1007/BF02337726 + +[27] M. Kobayashi, A. Hiyama, T. Miura, C. Asakawa, M. Hirose, and T. Ifukube. Elderly user evaluation of mobile touchscreen interactions. In Proceedings of the 13th IFIP TC 13 International Conference on Human-Computer Interaction - Volume Part I, INTERACT '11, pp. 83-99. Springer-Verlag, Berlin, Heidelberg, 2011. + +[28] J. Lai and D. Zhang. ExtendedThumb: A Target Acquisition Approach for One-Handed Interaction With Touch-Screen Mobile Phones. IEEE Transactions on Human-Machine Systems, 45(3):362-370, 2015. IEEE. doi: 10.1109/THMS.2014.2377205 + +[29] E. Lank, Y.-C. N. Cheng, and J. Ruiz. Endpoint Prediction Using Motion Kinematics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pp. 637-646. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1240624.1240724 + +[30] H.
V. Le, P. Bader, T. Kosch, and N. Henze. Investigating Screen Shifting Techniques to Improve One-Handed Smartphone Usage. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction, NordiCHI '16, pp. 27:1-27:10. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2971485.2971562 + +[31] H. V. Le, T. Kosch, P. Bader, S. Mayer, and N. Henze. PalmTouch: Using the Palm As an Additional Input Modality on Commodity Smartphones. In Proceedings of the 36th Annual ACM Conference on Human Factors in Computing Systems, CHI '18, pp. 360:1-360:13. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3173934 + +[32] H. V. Le, S. Mayer, P. Bader, and N. Henze. Fingers' Range and Comfortable Area for One-Handed Smartphone Interaction Beyond the Touchscreen. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 31:1-31:12. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3173605 + +[33] A. Li, H. Fu, and Z. Kening. BezelCursor: Bezel-Initiated Cursor for One-Handed Target Acquisition on Mobile Touch Screens. International Journal of Mobile Human Computer Interaction, 8:1-22, 2016. IGI Global. + +[34] M. Löchtefeld, C. Hirtz, and S. Gehring. Evaluation of hybrid front- and back-of-device interaction on mobile devices. In Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, MUM '13, pp. 17:1-17:4. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2541831.2541865 + +[35] S. Mizobuchi, S. Terasaki, T. Keski-Jaskari, J. Nousiainen, M. Ryynänen, and M. Silfverberg. Making an impression: Force-controlled pen input for handheld devices. In CHI '05 Extended Abstracts on Human Factors in Computing Systems, CHI EA '05, pp. 1661-1664. Association for Computing Machinery, New York, NY, USA, 2005. doi: 10.1145/1056808.1056991 + +[36] R. K. Nelson. Exploring Apple's 3D touch, 2015.
https://medium.com/@rknla/exploring-apple-s-3d-touch-f598@ef45af5. (accessed 2020-09-17). + +[37] A. Ng, S. A. Brewster, and J. H. Williamson. Investigating the Effects of Encumbrance on One- and Two-Handed Interactions with Mobile Devices. CHI '14, pp. 1981-1990. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2557312 + +[38] S. Oney, C. Harrison, A. Ogan, and J. Wiese. ZoomBoard: A diminutive qwerty soft keyboard using iterative zooming for ultra-small devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp. 2799-2802. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2481387 + +[39] S. Rogers, J. Williamson, C. Stewart, and R. Murray-Smith. AnglePose: Robust, precise capacitive touch tracking via 3d orientation estimation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 2575-2584. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/1978942.1979318 + +[40] A. Roudaut, S. Huot, and E. Lecolinet. TapTap and MagStick: Improving One-handed Target Acquisition on Small Touch-screens. In Proceedings of the Working Conference on Advanced Visual Interfaces, AVI '08, pp. 146-153. Association for Computing Machinery, New York, NY, USA, 2008. doi: 10.1145/1385569.1385594 + +[41] Samsung Inc. How Do I Use the Reduce Screen Size of One-Handed Operation on Note4, Samsung Support HK_EN, 2019. https://www.samsung.com/au/support/mobile-devices/one-hand-mode/. (accessed 2020-09-17). + +[42] J. Sauro. SUStisfied? Little-known System Usability Scale facts. User Experience Magazine, 2011. http://uxpamagazine.org/sustified/. (accessed 2020-12-18). + +[43] K. A. Siek, Y. Rogers, and K. H. Connelly. Fat Finger Worries: How Older and Younger Users Physically Interact with PDAs. In Proceedings of the 2005 IFIP TC13 International Conference on Human-Computer Interaction, INTERACT '05, pp. 267-280.
Springer-Verlag, Berlin, Heidelberg, 2005. doi: 10.1007/11555261_24 + +[44] Q. Su, O. K.-C. Au, P. Xu, H. Fu, and C.-L. Tai. 2d-dragger: Unified touch-based target acquisition with constant effective width. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '16, pp. 170-179. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2935334.2935339 + +[45] H.-R. Tsai, D.-Y. Huang, C.-H. Hsieh, L.-T. Huang, and Y.-P. Hung. MovingScreen: Selecting Hard-To-Reach Targets with Automatic Comfort Zone Calibration on Mobile Devices. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '16, pp. 651-658. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2957265.2961835 + +[46] S. Voelker, S. Hueber, C. Corsten, and C. Remy. HeadReach: Using head tracking to increase reachability on mobile touch devices. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, pp. 1-12. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376868 + +[47] G. Wilson, S. A. Brewster, M. Halvey, A. Crossan, and C. Stewart. The effects of walking, feedback and control method on pressure-based interaction. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI '11, pp. 147-156. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/2037373.2037397 + +[48] G. Wilson, C. Stewart, and S. A. Brewster. Pressure-based menu selection for mobile devices. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI '10, pp. 181-190. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1851600.1851631 + +[49] J. Xiong and S. Muraki.
Thumb performance of elderly users on smartphone touchscreen. SpringerPlus, 5:1218-1227, 2016. Springer. + +[50] W. Xu, J. Liu, C. Yu, and Y. Shi. Digging unintentional displacement for one-handed thumb use on touchscreen-based mobile devices. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '12, pp. 261-270. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2371574.2371613 + +[51] S. Yong, E. J. Lee, R. Peiris, L. Chan, and J. Nam. ForceClicks: Enabling efficient button interaction with single finger touch. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, TEI '17, pp. 489-493. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3024969.3025081 + +[52] H. Yoo, J. Yoon, and H. Ji. Index Finger Zone: Study on Touchable Area Expandability Using Thumb and Index Finger. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, MobileHCI '15, pp. 803-810. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2786567.2793704 + +[53] N.-H. Yu, D.-Y. Huang, J.-J. Hsu, and Y.-P. Hung. Rapid Selection of Hard-To-Access Targets by Thumb on Mobile Touch-screens. In Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '13, pp. 400-403. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2493190.2493202 + +[54] H. Zhao and H. Feng. A novel angular acceleration sensor based on the electromagnetic induction principle and investigation of its calibration tests. Sensors, 13(8):10370-10385, Aug 2013. MDPI. doi: 10.
3390/s130810370 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Zd5GbiuwX_t/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Zd5GbiuwX_t/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..69c7fcaa71cebe006d64c0e20a0fa41a0dddb72b --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Zd5GbiuwX_t/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,307 @@ +§ DESIGN AND INVESTIGATION OF ONE-HANDED INTERACTION TECHNIQUES FOR SINGLE-TOUCH GESTURES + +Category: Research + +<graphics> + +Figure 1: Our two techniques. In Force Cursor (FC, top), the cursor mode is triggered by a user's swipe from the bezel (a); then, the user moves the cursor by dragging the finger (b); a touch-down event is issued at the cursor position when the user increases the force (c), and a touch-up event is issued when the user decreases the force (d); the cursor mode ends when the user releases the finger from the screen (e).
In Event Forward Cursor (EFC, bottom), the trigger of the cursor mode (f) and the method of moving the cursor (g) are the same as in FC (a and b, respectively); when the finger is released from the screen in the cursor mode (h), the mode switches to the forward mode; then, when the user performs a gesture anywhere on the screen in the forward mode, the gesture is forwarded to the cursor position; in this figure, a tap, i.e., a touch-down event (i) followed by a touch-up event (j), is forwarded; the forward mode ends when the user releases the finger from the screen; however, to enable single-touch gestures that involve multiple touch-up events, such as a double-tap or tap-and-hold, the forward mode continues if the screen is touched again within the double-tap waiting time (0.25 seconds) after the finger is released. + +§ ABSTRACT + +Many one-handed interaction techniques have been proposed to interact with a smartphone with only one hand. However, these techniques are all designed for selecting (tapping) unreachable targets, and their performance on other single-touch gestures such as a double-tap, swipe, and drag has not been investigated. In our research, we design two one-handed interaction techniques, Force Cursor (FC) and Event Forward Cursor (EFC), each of which enables a user to perform all single-touch gestures. FC is a cursor technique that enables a user to issue touch events using force; EFC is a cursor technique that enables a user to issue touch events by a two-step operation. We conducted a user study to investigate the single-touch gesture performance of one-handed interaction techniques: FC, EFC, and the contents shrinking technique. The results show that the success rate of one-handed interaction techniques varies depending on the gesture and that EFC has a high success rate independently of the gesture. These results clarify the importance of investigating the single-touch gesture performance of one-handed interaction techniques.
Index Terms: Human-centered computing—Interaction techniques; Human-centered computing—Gestural input

§ 1 INTRODUCTION

Many users tend to use only one hand to interact with their smartphones (i.e., holding the smartphone in one hand and using only the thumb) [23, 37]. One likely reason is that only one hand is free when the user holds an umbrella or baggage in the other [22]. However, when interacting with a smartphone with only one hand, it is difficult for the user to reach all parts of the screen with the thumb without changing the grasping posture, because the thumb's reach is limited [5, 32]. Therefore, the user needs to change the grasping posture as appropriate while interacting. However, changing the grasping posture makes the user's grasp of the smartphone unstable, which hinders comfortable interaction [12, 13] and may cause the device to fall.

This problem is well known in the HCI field; thus, to enable one-handed interaction on a smartphone without changing the grasping posture, HCI researchers have proposed many one-handed interaction techniques (e.g., [8, 25, 28]). However, most of them are designed to allow a user to select (tap) unreachable targets. Therefore, while these techniques have been shown to improve a user's target selection (tap) performance compared to using no one-handed interaction technique, the performance of other single-touch gestures (e.g., a double-tap, swipe, and drag) has not been investigated. Moreover, many techniques do not support single-touch gestures other than a tap. Note that these single-touch gestures, including a tap, are frequently used on a smartphone [3].
For example, a double-tap is used for the skip function in video-viewing applications; swipes from the bottom and top bezels are used to return to the home screen and to access an information center, respectively; and a drag is used for moving an icon on the home screen and for selecting multiple photos. Thus, enabling a user to perform such single-touch gestures is important for a one-handed interaction technique, and it is likewise important to investigate the single-touch gesture performance of such techniques.

In our research, we designed two techniques, Force Cursor (FC, Figure 1 top) and Event Forward Cursor (EFC, Figure 1 bottom), that enable a user to perform single-touch gestures on an unreachable area of a smartphone. Both techniques use a cursor to allow the user to interact with the smartphone while keeping a stable grasp, since cursor-based techniques can stabilize the grasp of the smartphone [9]. The reason why single-touch gestures cannot be performed with previous one-handed interaction techniques using a cursor (hereafter, cursor techniques) is that they are designed so that a tap is performed at the cursor position when the user releases the finger from the screen. By contrast, we designed our cursor techniques so that a user can issue touch events (i.e., touch-down, touch-move, and touch-up events) at the cursor position. Specifically, FC is a cursor technique that allows the user to issue touch events by changing the force (Figure 1 top). EFC, on the other hand, is a cursor technique that involves a two-step operation: the first step determines the touch event position, and the second performs the single-touch gesture (Figure 1 bottom).
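The difference between the release-to-tap behavior of previous cursor techniques and our event-level design can be sketched as follows. This is an illustrative sketch only; the event representation and function names are hypothetical, not the authors' implementation.

```python
from enum import Enum

class Touch(Enum):
    DOWN = "touch-down"
    MOVE = "touch-move"
    UP = "touch-up"

def release_to_tap(cursor_pos):
    """Previous cursor techniques: releasing the finger emits a touch-down
    and a touch-up simultaneously at the cursor position, so only a tap
    can ever be produced there."""
    return [(Touch.DOWN, cursor_pos), (Touch.UP, cursor_pos)]

def forward_events(events, cursor_pos):
    """Our event-level design: an arbitrary touch-event sequence is issued
    at the cursor position, so any single-touch gesture (double-tap,
    swipe, drag, ...) can be reproduced there."""
    return [(kind, cursor_pos) for kind, _ in events]
```

Issuing individual down/move/up events, rather than collapsing them into a tap, is what allows the full gesture vocabulary to pass through the cursor.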
Moreover, to investigate the single-touch gesture performance of one-handed interaction techniques, we conducted a user study with four techniques: our two techniques (FC and EFC), a content-shrinking technique (One-handed Mode [41] (OM), which allows a user to perform all single-touch gestures), and a direct thumb-touch technique, in which a user uses the smartphone without any one-handed interaction technique. Based on the results of the user study, we discuss the performance of each one-handed interaction technique.

The main contributions of this paper are the following: 1) the design of two one-handed interaction techniques (FC and EFC), each of which enables a user to perform all single-touch gestures while keeping the grasp of the smartphone stable, and 2) an investigation of the single-touch gesture performance of one-handed interaction techniques, which has not been conducted before. The results show that OM is a fast technique for performing gestures, FC is a technique that can stabilize the user's grasp of the smartphone, and EFC is a technique with a high success rate regardless of the gesture performed. We also found that the success rate of OM and FC varies greatly depending on the gesture, underscoring the importance of investigating single-touch gesture performance.

§ 2 RELATED WORK

Many one-handed interaction techniques have been proposed to facilitate one-handed interaction on smartphones. We categorize these techniques into three groups: screen transformation techniques, proxy region techniques, and cursor techniques.

§ 2.1 SCREEN TRANSFORMATION TECHNIQUES

These techniques enable a user to move the contents shown on the screen (hereafter, the contents), shrink the contents, or both.

On the iPhone 6 and later, a function called "Reachability" has been introduced [1]. With Reachability, a user can move the contents down by double-tapping the home button or swiping down on the bottom edge of the screen.
Similarly, PalmTouch [31] can be used to move the contents down by touching the screen with the palm. In Telekinetic Thumb [20], a user can move the contents to the lower right of the screen by performing a pull gesture above the screen. Sliding Screen [25] is triggered by a swipe from the smartphone's bezel or a touch with a large contact area; when a user drags the thumb, the contents are moved point-symmetrically or in the direction of the thumb's movement. MovingScreen [45] is similarly designed to move the contents point-symmetrically in the direction of the thumb's movement and is triggered by a swipe from the bezel; it differs in that the movement's speed changes proportionally to the swipe distance on the bezel where it is triggered. TiltSlide [8] also moves the contents in the same way and is triggered by tilting the smartphone. In IndexAccess [18] and the technique proposed by Le et al. [30], a touchpad is attached to the back of the smartphone, and the index finger's movement on the touchpad moves the contents. In these techniques, after the contents are moved, part of the contents is hidden outside the screen; thus, dragging an item (e.g., an icon) from the area beyond the reach of the thumb to the hidden area is not supported, and vice versa.

Galaxy's One-handed Mode [41] (OM) is triggered by a triple-tap of the home button or a swipe from the four corners of the screen. It shrinks the contents and places them near the thumb; by default, the contents shrink to two-thirds of their size. Similarly, TiltReduction [41] also shrinks the contents and is triggered by tilting the smartphone.
Techniques that shrink the contents can be adapted so that a user can perform all single-touch gestures, although their single-touch gesture performance has not yet been investigated. Thus, we investigate this performance.

§ 2.2 PROXY REGION TECHNIQUES

In proxy region techniques, a user can use a different area of the screen, or the space around the screen, as an alternative area for operating the unreachable area.

ThumbSpace [24] is triggered by a drag on the screen; it then displays a popup view that miniaturizes all the contents of the screen. Although this technique solves the problem that the thumb cannot reach all parts of the smartphone's screen without changing the grasping posture, it reduces the size of every target because it shrinks all the contents of the screen, which may lead to the fat finger problem [43] and occlusion (i.e., a small target being occluded by the thumb). TapTap [40] is triggered when a user touches the screen; it then displays a popup at the center of the screen showing an enlarged view of the area around the touched position. However, it is difficult to use this technique for an area too far from the thumb's reach, because the user needs to touch near the area they want to interact with. Hasan et al. [15] proposed a technique that uses the in-air space above the screen, allowing the user to interact with unreachable targets through three-dimensional movements of the thumb.

In the technique proposed by Lochtefeld [34], a touchpad is attached to the back of a smartphone, which allows a user to operate an unreachable target by touching it from the back of the device with the index finger; this design exploits the fact that the reachable area in one-handed interaction can be extended by 15 percent by using the index finger on the back of the smartphone [52]. However, these techniques [15, 34] require an additional device.
§ 2.3 CURSOR TECHNIQUES

In cursor techniques, a user selects an unreachable target with a cursor instead of touching the target directly.

Most previous cursor techniques switch to a mode in which the cursor is used (the cursor mode) when a user performs a predetermined trigger gesture; the cursor then appears under the finger, and the user can move it, according to the distance of the finger movement, by dragging the finger while in the cursor mode. For example, TiltCursor [8] is triggered by tilting the smartphone, BezelCursor [33] by a swipe from the bezel, Extendible Cursor [25] by a swipe from the bezel or a touch with a large contact area, ExtendedThumb [28] by a double-tap, and MagStick [40] by touching the screen.

Unlike these techniques, in CornerSpace and BezelSpace [53], the cursor appears in places other than under the finger. CornerSpace [53] displays a popup showing a shrunken view of all the contents of the screen when a user swipes from the bezel; when the user then touches the popup, the cursor appears at the corresponding position on the screen. In BezelSpace [53], when a user swipes from the bezel, buttons representing the four corners and the center of the screen appear, and the cursor appears at the corresponding position when the user presses a button.

2D-Dragger [44], ForceRay [9], and HeadReach [46] use different methods of moving the cursor. In 2D-Dragger [44], when a user moves the finger, the cursor moves to the nearest target in the direction of the finger's movement. In ForceRay [9], the cursor moves away from the finger when the user increases the force and moves back toward the finger when the force is decreased. In HeadReach [46], the cursor is moved by combining the direction of the user's face with finger dragging.
Although the triggers, the initial cursor position, and the method of moving the cursor differ among these previous cursor techniques, the mechanism for issuing touch events (i.e., selecting a target) is the same in all of them: when the user releases the finger from the screen during the cursor mode, a tap is performed (i.e., both touch-down and touch-up events are issued simultaneously) at the cursor position. This approach makes it impossible for a user to perform single-touch gestures other than a tap. By contrast, our two techniques (FC and EFC) enable a user to perform all single-touch gestures using a cursor. A previous study [14] shares the same concept as ours; however, no experiments were conducted with participants other than the authors.

§ 3 DESIGN OF OUR TECHNIQUES

We designed two one-handed interaction techniques that enable a user to perform all single-touch gestures using a cursor. Since previous studies [8, 9, 25] show the high performance of cursor techniques triggered by a swipe from the bezel, we adopted a swipe from the bezel as the trigger for switching to the cursor mode in both techniques (Figure 1a, f). Moreover, we designed both techniques so that the cursor moves in the same direction as the finger movement (Figure 1b, g), as in [8, 33].

§ 3.1 FORCE CURSOR (FC)

In FC, the following operations are required to issue touch events at the cursor position. First, the user performs a swipe from the bezel to switch to the cursor mode (Figure 1a); then, the user drags the finger to move the cursor to the desired position (Figure 1b). The cursor movement distance is calculated by multiplying the thumb movement distance by the control-display ratio.

A touch-down event is issued when the user increases the force above a threshold (Figure 1c). A touch-up event is issued when the user decreases the force below the threshold after the touch-down event has been issued.
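The threshold behavior just described can be sketched as a small state machine over a stream of force readings. This is an illustrative sketch under an assumed per-sample model (one touch-move per pressed sample), not the authors' implementation.

```python
def fc_events(force_samples, threshold):
    """Translate successive force readings into touch events, as in FC:
    crossing the threshold upward issues a touch-down, crossing it
    downward issues a touch-up, and each sample while pressed issues
    a touch-move at the cursor position."""
    events, pressed = [], False
    for f in force_samples:
        if not pressed and f >= threshold:
            events.append("touch-down")
            pressed = True
        elif pressed and f < threshold:
            events.append("touch-up")
            pressed = False
        elif pressed:
            events.append("touch-move")
    return events
```

With this mapping, a press-and-release of force produces a tap, and moving the finger while the force stays above the threshold produces the move events of a swipe or drag.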
The cursor mode continues until the user releases the finger from the screen, allowing the user to keep using the cursor to perform gestures on an unreachable area. Since this design allows the user to issue touch-down, touch-move, and touch-up events by controlling the force, all single-touch gestures can be performed. For example, a tap is performed by increasing and then decreasing the force (i.e., clicking using force [51]), a double-tap is performed by quickly repeating the click twice, and a swipe or drag is performed by moving the finger while the force is applied.

We conducted a demonstration to confirm whether users could use FC, and identified the following problems based on the participants' comments. First, without feedback, it is not possible to know how much force is currently being applied. Second, it is difficult to perform a double-tap using a cursor with force (i.e., to quickly repeat a click using force twice). Third, it is difficult to make large finger movements while applying high force (i.e., to perform a drag using the cursor). We therefore added the following three functions to solve these problems.

Figure 2: Circular bar representing the currently applied force, displayed around the cursor of FC. The bar is blue when the force is below the threshold and red when it is above.

§ 3.1.1 VISUAL FORCE FEEDBACK

Previous studies [7, 35, 48] have shown that continuous visual feedback is effective in techniques using force. Therefore, we display a circular bar to provide continuous visual feedback to the user (Figure 2). The circular bar is displayed in blue when the currently applied force is below the threshold (Figure 2a) and in red when it is above the threshold (Figure 2b).
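The circular-bar feedback can be sketched as a mapping from the current force reading to a fill fraction and a color. Only the blue-below / red-above coloring comes from the description here; the fill mapping is an assumption, and the default threshold (3.0) and maximum force (480/72) are taken from the apparatus description later in the paper.

```python
def force_feedback(force, threshold=3.0, max_force=480 / 72):
    """Map a force reading to the circular bar's (fill_fraction, color):
    the fill grows with the applied force, and the color is blue below
    the touch-down threshold and red at or above it."""
    fraction = max(0.0, min(force / max_force, 1.0))
    color = "red" if force >= threshold else "blue"
    return fraction, color
```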
§ 3.1.2 DOUBLE-TAP ASSISTANCE

In the demonstration, novices often failed to double-tap using FC because they changed the force before the threshold was crossed; that is, the user attempted to issue the touch-down event of the second tap before the touch-up event of the first tap had been issued. Therefore, we implemented a function that enables a user to perform a double-tap using the cursor without being aware of the threshold. With this function, the user first increases the force above the threshold, then decreases it, increases it again, and finally decreases it below the threshold to perform a double-tap (i.e., the user only needs to cross the force threshold when increasing the force to issue the touch-down event of the first tap and when decreasing the force to issue the touch-up event of the second tap).

§ 3.1.3 DRAG ASSISTANCE

It is difficult to move a finger while applying increased force because of the increased friction between the finger and the screen [16, 17]. To solve this problem, we implemented a function such that, when the force is increased to the maximum detectable value and held for 1.0 second, touch-move events are issued continuously at the cursor position, independently of the force, until the finger is released from the screen. The user can therefore perform a drag using the cursor with low force; this function of fixing the force state is the same as force lock [17].

§ 3.2 EVENT FORWARD CURSOR (EFC)

In EFC, as in FC, the cursor mode is triggered when a user performs a swipe from the bezel (Figure 1f), and the user then drags the finger to move the cursor to the desired location (Figure 1g). In EFC, when the user releases the finger from the screen during the cursor mode, the mode switches to the forward mode (Figure 1h); the cursor is yellow while in the forward mode so that the user knows the current mode.
While in the forward mode, all touch events are forwarded to the cursor position. That is, a touch-down event is issued at the cursor position when the finger touches the screen, touch-move events are issued at the cursor position until the user releases the finger, and a touch-up event is issued at the cursor position when the finger is released. Although the forward mode ends when the user releases the finger from the screen, to enable single-touch gestures that involve touch-up events, such as a double-tap or tap-and-hold, the forward mode continues if the screen is touched again within the double-tap waiting time (0.25 seconds) after the finger is released.

§ 4 USER STUDY: INVESTIGATION OF THE PERFORMANCE OF SINGLE-TOUCH GESTURES

To investigate the single-touch gesture performance of one-handed interaction techniques, we conducted a user study with 8 participants (21–24 years old, M = 22.38, SD = 1.06; 7 males; all owned an iPhone capable of detecting force and usually interacted with their smartphones with the right hand).

For the safety of the participants, we conducted this user study remotely via video call.

§ 4.1 SETUP

§ 4.1.1 APPARATUS

Participants used their own iPhones, with the force sensitivity set to "firm". Three iPhone XS and one iPhone X (screen size: 135.10 mm × 62.39 mm), and one iPhone 8, two iPhone 7, and one iPhone 6s (screen size: 103.94 mm × 58.44 mm) were used in this user study. Since the implementation of the experimental application uses "pt" as a unit of length, and the actual size of a pt varies slightly depending on the iPhone, we use pt as the unit of length in this section.
For the iPhone XS and iPhone X, 1 pt ≃ 0.17 mm; for the iPhone 8, iPhone 7, and iPhone 6s, 1 pt ≃ 0.16 mm.

§ 4.1.2 TECHNIQUES

We used FC, EFC, and One-Handed Mode (OM, the same as Samsung's [41]) as one-handed interaction techniques that enable a user to perform all single-touch gestures. Other one-handed interaction techniques, such as Apple's Reachability [1] or cursor techniques (e.g., [25, 33]), were excluded from this user study because they do not support all single-touch gestures. In addition, as a baseline, we used Direct Touch (DT), i.e., operation without any one-handed interaction technique. Thus, four techniques (DT, OM, FC, and EFC) were used in this user study.

We unified the triggers of OM, FC, and EFC to a swipe from the bezel to avoid the effect of different triggers. For FC and EFC, the size of the cursor was 9 pt, and the cursor's control-display ratio was set to three; that is, the cursor moves three times the distance of the finger movement.

In One-Handed Mode (OM), when a user swipes from the bezel, the contents are shrunk; swiping again restores the contents to their original size. In this user study, the contents were shrunk to 2/3 of their original size and moved to the lower right corner of the screen (Figure 3e), just like the standard Galaxy setting.

For our implementation of Force Cursor (FC), we used the force readings provided by the iPhone's force-sensitive touchscreen. According to Apple's documentation [2], with the force sensitivity set to "firm", force is reported as a unitless value from 0 to 480/72 (≃ 4.0 N [36]), with values around 1.0 (≃ 0.60 N [36]) corresponding to the force of an ordinary touch.
We set the force threshold for issuing touch-down and touch-up events in FC to 3.0 (≃ 1.8 N).

§ 4.1.3 TARGETS

18 targets were placed on an invisible 15 × 7 grid, as shown in Figure 3a; the size of the grid varies with the size of the screen. The target size was set to two values: 60 pt × 60 pt (Large, Figure 3a, b, d) and 30 pt × 30 pt (Small, Figure 3c). Targets were placed starting from the top-left corner of the grid. In addition, if part of a target protruded from the screen, the target was moved toward the center of the screen by the amount of the protrusion. During the task, only the current target is displayed, in red; the other targets are not displayed (Figure 3b, c, d).

Figure 3: Targets and screenshots of the user study. a: The 18 targets placed on the screen. b: A target for the tap and double-tap sessions; the target size is Large. c: A target for the swipe sessions; the target size is Small and the swipe direction is down. d: Targets for the drag sessions; the target size is Large. e: The screen when OM is used; the contents are shrunk to 2/3 of their original size.

The locations and sizes of the targets were based on the experiments of [9]. However, while [9] did not place targets within the thumb's reach, we placed targets over the entire screen (including within reach), since the unreachable area varies with the size of the smartphone [32].

§ 4.1.4 SINGLE-TOUCH GESTURES

We used four commonly used single-touch gestures: tap (Tap), swipe (Swipe), double-tap (DTap), and drag (Drag). We set up a session for each gesture.

In a Tap session, a target is displayed in red (Figure 3b) and participants perform a tap on it. The tap is recognized when both the touch-down event and the touch-up event are issued on the same target.
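The tap-recognition rule above (down and up on the same target) can be sketched as a small predicate; the event encoding and function name are hypothetical, for illustration only.

```python
def tap_recognized(events, target_contains):
    """A tap is recognized when exactly one touch-down and one touch-up
    occur, and both land inside the target (`target_contains` tests
    whether a point lies on the target)."""
    downs = [p for kind, p in events if kind == "down"]
    ups = [p for kind, p in events if kind == "up"]
    return (len(downs) == 1 and len(ups) == 1
            and target_contains(downs[0]) and target_contains(ups[0]))
```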
In a DTap session, a target is displayed in red (Figure 3b) and participants perform a double-tap on it. A double-tap is a gesture in which the user touches the same target again within the double-tap waiting time after performing a tap, and then releases the finger on the target. In this user study, the double-tap waiting time was set to 0.25 seconds, the iPhone's default.

In a Swipe session, participants swipe the target in the direction of the arrow displayed on it (Figure 3c). The swipe direction was randomly selected from four directions (up, down, right, and left) when the target was updated. However, because a swipe toward a screen bezel cannot be recognized on a target that touches that bezel, directions toward the bezel were removed from the selection (e.g., for the rightmost targets, one direction was randomly selected from up, left, and down).

In a Drag session, two targets are displayed (Figure 3d), and participants drag a target labeled '1' (target1) to a target labeled '2' (target2). Target2 was randomly selected from the 17 targets other than target1.

§ 4.2 TASK AND PROCEDURE

Before starting the user study, we initiated a video call with the participants to explain the procedure. Participants sat on a chair and interacted with their iPhones with the right hand, using only the thumb.

We recorded 4 techniques × 4 gestures × 18 targets × 2 target sizes × 2 repetitions = 1,152 trials per participant.

Figure 4: The results for Time [s]. The top is a table summarizing the results of TECHNIQUE, TECHNIQUE × GESTURE, and TECHNIQUE × SIZE, and the bottom is a bar graph showing the results of TECHNIQUE × GESTURE. Pairs that do not share a letter in Group are significantly different. Whiskers denote 95% CIs.
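The trial design described above multiplies out as follows; this enumeration is purely illustrative of the factorial design, with factor names chosen for readability.

```python
from itertools import product

# Full factorial trial design of the user study.
techniques = ["DT", "OM", "FC", "EFC"]
gestures = ["Tap", "DTap", "Swipe", "Drag"]
targets = list(range(18))
sizes = ["Large", "Small"]
repetitions = [1, 2]

trials = list(product(techniques, gestures, targets, sizes, repetitions))
# 4 x 4 x 18 x 2 x 2 = 1,152 trials per participant
assert len(trials) == 1152
```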
The techniques and gestures were counterbalanced using a Latin square. The display order of the targets was random. Target size was also selected in random order; however, of the 16 combinations of 4 techniques × 4 gestures, 8 started with small targets and the remaining 8 with large targets. Before starting a new gesture session with each technique, participants performed the gesture on 18 targets as practice. The gesture was then performed on 18 targets, repeated twice at the same size, and repeated twice again at the other size. Success or failure of each gesture was signaled to the participant by different sounds. When the gesture was performed successfully, the next target was displayed; when it failed, the same target was displayed again.

After completing the 2 × 18 trials at both sizes, participants began the next gesture session. After all 4 gesture sessions were completed, the next technique was presented. For subjective evaluation, we asked participants to answer the System Usability Scale [6] (SUS) questionnaire for each technique after all of its gesture sessions were completed.

Participants took breaks as needed to avoid hand fatigue. The user study took about two hours, and participants received $33.1 as a reward.

§ 4.3 RESULT

The independent variables were TECHNIQUE (DT, OM, FC, and EFC), GESTURE (Tap, DTap, Swipe, and Drag), and SIZE (Large and Small). The dependent variables were trial completion time (Time), success rate (Accuracy), jerk (Jerk) and angular acceleration (Angular Acceleration), which were used to evaluate the stability of the smartphone, and the SUS score (SUS Score). Jerk is used to evaluate the smoothness of motion [29], and angular acceleration is used to evaluate the vibration state [54]. For the analysis, we used repeated-measures ANOVAs.
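The SUS questionnaire used above is scored with the standard SUS formula (Brooke's scoring, not specific to this study): odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 onto a 0–100 range.

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert
    responses, using the standard SUS scoring formula."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i = 0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5
```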
Since the purpose of this user study is to investigate the single-touch gesture performance of one-handed interaction techniques, we describe only the main effect of TECHNIQUE and the related interaction effects.

Figure 5: The results for Accuracy [%]. The top is a table summarizing the results of TECHNIQUE, TECHNIQUE × GESTURE, and TECHNIQUE × SIZE, and the bottom is a bar graph showing the results of TECHNIQUE × GESTURE. Pairs that do not share a letter in Group are significantly different. Whiskers denote 95% CIs.

Because this user study was conducted remotely and participants used their own smartphones, two different sizes of smartphones were used. However, there was no significant effect of smartphone size (Time: $F(1,6) = 3.35$, $p = 0.12$; Accuracy: $F(1,6) = 2.86$, $p = 0.14$; Jerk: $F(1,6) = 0.14$, $p = 0.72$; Angular Acceleration: $F(1,6) = 2.23$, $p = 0.19$).

§ 4.3.1 TIME

Time results are shown in Figure 4. The reason Tap was slower than DTap and Swipe is that there was a waiting time of 0.25 seconds after a tap to judge whether a double-tap would follow, whereas a double-tap is confirmed immediately after the second tap. A tap therefore inherently takes longer to register than a double-tap [21].

TECHNIQUE had a significant main effect on Time ($F_{3,9425} = 25.05$, $p < .001$). Tukey's HSD test was also significant ($p < .05$ between OM and FC, $p < .01$ between DT and FC, and $p < .001$ for the others). There was also a significant TECHNIQUE × GESTURE interaction effect ($F_{9,9425} = 3.26$, $p < .01$).
Tukey's HSD test was also significant ($p < .05$ between OM's DTap and OM's Swipe, OM's Drag and FC's DTap, FC's DTap and EFC's DTap, and EFC's Tap and EFC's Swipe; $p < .01$ between DT's Tap and DT's Swipe, DT's Tap and OM's Swipe, DT's Swipe and OM's DTap, DT's Drag and FC's Swipe, DT's Drag and EFC's DTap, OM's Tap and OM's Swipe, and FC's DTap and FC's Swipe; and $p < .001$ for the others). As shown in Figure 4, for all gestures, Time was DT ≃ OM < EFC ≤ FC. In other words, techniques in which the target is touched directly with the finger are faster than techniques in which the target is operated indirectly with a cursor; this is similar to the results shown by Chang et al. [8] in a tap-only experiment. In addition, there was also a significant TECHNIQUE × SIZE interaction effect ($F_{3,9425} = 3.52$, $p < .05$). Tukey's HSD test was also significant ($p < .01$ between DT's Small and OM's Small, and $p < .001$ for the others). As expected, Large was faster than Small for all techniques.

Figure 6: The results for Jerk [m/s³]. The top is a table summarizing the results of TECHNIQUE, TECHNIQUE × GESTURE, and TECHNIQUE × SIZE, and the bottom is a bar graph showing the results of TECHNIQUE × GESTURE. Pairs that do not share a letter in Group are significantly different. Whiskers denote 95% CIs.

§ 4.3.2 ACCURACY

Accuracy results are shown in Figure 5. TECHNIQUE had a significant main effect on Accuracy ($F_{3,473} = 8.24$, $p < .001$). Tukey's HSD test was significant ($p < .05$ between OM and FC; $p < .01$ between DT and FC; and $p < .001$ between FC and EFC). There was also a significant TECHNIQUE × GESTURE interaction effect ($F_{9,473} = 5.92$, $p < .001$).
Tukey's HSD test was also significant ($p < .05$ between DT's DTap and OM's DTap, DT's Swipe and FC's DTap, DT's Drag and FC's Tap, OM's Tap and FC's DTap, OM's Tap and FC's Drag, OM's DTap and EFC's Tap, OM's DTap and EFC's Swipe, OM's Swipe and FC's Tap, FC's DTap and FC's Swipe, FC's Swipe and FC's Drag, and FC's Drag and EFC's Drag; $p < .01$ between DT's Tap and DT's Drag, DT's Tap and OM's Swipe, DT's DTap and FC's DTap, DT's DTap and FC's Drag, DT's Swipe and OM's Drag, DT's Drag and EFC's DTap, OM's DTap and FC's Tap, OM's Swipe and EFC's DTap, FC's DTap and EFC's Drag, FC's DTap and EFC's Swipe, FC's Drag and EFC's Swipe, and FC's Drag and EFC's Tap; and $p < .001$ for the others). As shown in Figure 5, there was no significant difference among TECHNIQUEs for Tap. However, there were significant differences among TECHNIQUEs for the other gestures. While EFC had a high success rate for all gestures, OM and FC had lower success rates for DTap and Drag. In addition, there was also a significant TECHNIQUE × SIZE interaction effect ($F_{3,473} = 24.62$, $p < .001$). Tukey's HSD test was also significant ($p < .05$ between DT's Small and FC's Large, DT's Large and EFC's Small, OM's Large and EFC's Small, and FC's Large and EFC's Large; $p < .01$ between FC's Small and EFC's Small, and EFC's Small and EFC's Large; and $p < .001$ for the others). As expected, Large had a higher success rate than Small for all techniques. The difference in success rates between the SIZEs (Large and Small) was smaller for the techniques using a cursor (FC: 7.64%, EFC: 6.07%) and larger for the other two techniques (DT: 9.72%, OM: 21.18%); this result may be due to the fact that techniques in which the target is touched directly with the finger are susceptible to the fat finger problem [43] and occlusion.
| | DT | OM | FC | EFC |
| --- | --- | --- | --- | --- |
| Overall | 0.46 ± 0.04 | 0.29 ± 0.03 | 0.17 ± 0.01 (A) | 0.21 ± 0.02 (B) |
| Tap | 0.36 ± 0.07 | 0.23 ± 0.06 | 0.13 ± 0.03 (A) | 0.19 ± 0.06 (C, D) |
| DTap | 0.51 ± 0.11 | 0.32 ± 0.09 (G) | 0.17 ± 0.04 (B, C) | 0.24 ± 0.05 (E) |
| Swipe | 0.53 ± 0.12 | 0.34 ± 0.09 | 0.18 ± 0.05 (B, C) | 0.21 ± 0.06 (D) |
| Drag | 0.45 ± 0.10 | 0.27 ± 0.08 | 0.19 ± 0.05 (B, D) | 0.18 ± 0.06 (B) |
| Large | 0.44 ± 0.04 | 0.27 ± 0.04 | 0.15 ± 0.02 (A) | 0.20 ± 0.02 (B, C) |
| Small | 0.49 ± 0.06 | 0.32 ± 0.06 (E) | 0.18 ± 0.02 (B) | 0.22 ± 0.03 (C) |

Figure 7: The results of Angular Acceleration $\left[\frac{rad}{s^2}\right]$. The top is a table summarizing the results of TECHNIQUE, TECHNIQUE $\times$ GESTURE, and TECHNIQUE $\times$ SIZE (values are Mean ± 95% CI with Tukey group letters in parentheses), and the bottom is a bar graph showing the results of TECHNIQUE $\times$ GESTURE. Pairs that do not share a Group letter are significantly different. Whiskers denote 95% CI.

§ 4.3.3 STABILITY OF THE SMARTPHONE (JERK AND ANGULAR ACCELERATION)

Jerk results are shown in Figure 6. TECHNIQUE had a significant main effect on Jerk ($F_{3,9425} = 64.06$, $p < .001$). Tukey's HSD test was significant (p < .05 between FC and EFC; and others p < .001). There was also a significant TECHNIQUE $\times$ GESTURE interaction effect ($F_{9,9425} = 51.91$, $p < .001$). Tukey's HSD test was also significant (p < .05 between OM's DTap and OM's Swipe, and FC's DTap and EFC's Tap; p < .01 between FC's Drag and EFC's Swipe, and EFC's Swipe and EFC's DTap; and others p < .001). In Jerk, $\mathrm{FC} < \mathrm{EFC} < \mathrm{OM} < \mathrm{DT}$ for all gestures except Drag, and $\mathrm{FC} \simeq \mathrm{EFC} < \mathrm{OM} < \mathrm{DT}$ for Drag.
In addition, there was also a significant TECHNIQUE $\times$ SIZE interaction effect ($F_{3,9425} = 13.44$, $p < .001$). Tukey's HSD test was also significant (p < .01 between FC's Large and EFC's Small, and EFC's Small and EFC's Large; and others p < .001). In all techniques, Large had significantly lower jerk than Small.

Angular Acceleration results are shown in Figure 7. TECHNIQUE also had a significant main effect on Angular Acceleration ($F_{3,9425} = 37.46$, $p < .001$). Tukey's HSD test was also significant (p < .05 between FC and EFC; and others p < .001). There was also a significant TECHNIQUE $\times$ GESTURE interaction effect ($F_{9,9425} = 56.12$, $p < .001$). Tukey's HSD test was also significant (p < .05 between DT's DTap and DT's Swipe, and FC's Swipe and EFC's Swipe; p < .01 between OM's DTap and OM's Drag, FC's DTap and EFC's Swipe, EFC's Tap and EFC's Drag, and EFC's DTap and EFC's Swipe; and others p < .001). As with Jerk, in Angular Acceleration, $\mathrm{FC} < \mathrm{EFC} < \mathrm{OM} < \mathrm{DT}$ for all gestures except Drag, and $\mathrm{FC} \simeq \mathrm{EFC} < \mathrm{OM} < \mathrm{DT}$ for Drag. In addition, there was also a significant TECHNIQUE $\times$ SIZE interaction effect ($F_{3,9425} = 23.79$, $p < .001$). Tukey's HSD test was also significant (p < .01 between FC's Large and EFC's Small, and EFC's Small and EFC's Large; and others p < .001). Large had significantly lower angular acceleration than Small for all techniques except EFC.

Figure 8: The results of SUS Score. Whiskers denote 95% CI.

In summary, FC has the smallest smartphone movement, followed by EFC, OM, and DT.
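Both stability metrics can be estimated from logged inertial-sensor samples by finite differences: jerk is the time derivative of the accelerometer signal, and angular acceleration is the time derivative of the gyroscope's angular velocity. The paper does not specify its exact computation; a minimal sketch with hypothetical sample data and a hypothetical 100 Hz sampling rate:

```python
def mean_abs_derivative(samples, dt):
    """Mean magnitude of the finite-difference derivative of a signal.

    Applied to acceleration samples [m/s^2] it estimates mean |jerk| [m/s^3];
    applied to angular velocity [rad/s] it estimates mean |angular
    acceleration| [rad/s^2].
    """
    diffs = [(b - a) / dt for a, b in zip(samples, samples[1:])]
    return sum(abs(d) for d in diffs) / len(diffs)

# Hypothetical accelerometer trace sampled at 100 Hz (dt = 0.01 s)
accel = [0.00, 0.02, 0.05, 0.04, 0.01]
print(mean_abs_derivative(accel, 0.01))
```

The same helper applied to a gyroscope trace yields the angular-acceleration metric.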
FC and EFC may move the smartphone less because the thumb touches the screen for longer in the cursor-based techniques (FC and EFC) than in the other two techniques (DT and OM). In particular, since the thumb is always touching the screen in FC, the smartphone was more stable than in EFC.

§ 4.3.4 SUS SCORE

The result of SUS Score is shown in Figure 8. As a result of the ANOVA, TECHNIQUE did not have a significant main effect on SUS Score. In terms of average value, DT was the highest, followed by EFC, OM, then FC. The reason DT has the highest average value is that SUS gives high scores to familiar techniques [42]; DT is the same technique that the participants normally use on their smartphones and was thus familiar. Although the average values of OM and EFC were almost the same, the average value of FC was slightly lower. This is likely because FC was inferior to the other techniques in terms of Time and Accuracy.

§ 5 DISCUSSIONS

In this section, we discuss the performance of the one-handed interaction techniques based on the results of the user study.

§ 5.1 THE NEED TO INVESTIGATE SINGLE-TOUCH GESTURE PERFORMANCE

As shown in Figure 4, Figure 6, and Figure 7, the differences in Time, Jerk, and Angular Acceleration across TECHNIQUE did not vary with GESTURE; that is, in Time, $\mathrm{DT} < \mathrm{OM} < \mathrm{FC} \simeq \mathrm{EFC}$, and in Jerk and Angular Acceleration, $\mathrm{FC} < \mathrm{EFC} < \mathrm{OM} < \mathrm{DT}$. However, as shown in Figure 5, although the success rate was high for all techniques in Tap, the success rates of DT, OM, and FC were lower in the other gestures, depending on the gesture. In addition, participants commented that Drag was difficult in OM, that Swipe was easy but DTap was difficult in FC, and that DTap was easier in EFC. These results suggest that the performance of a one-handed interaction technique varies depending on the performed gesture.
Therefore, we think it is important to investigate the single-touch gesture performance of one-handed interaction techniques.

§ 5.2 SELECTING A SUITABLE ONE-HANDED INTERACTION TECHNIQUE

In summary, the results of the user study show that OM is the best technique, with a high success rate and fast gesture performance, when the target is large and some smartphone movement is allowed. On the other hand, when the target size is small, its success rate for all gestures (especially those other than a tap) is quite low. Although FC had a lower success rate than EFC, its smartphone movement was the smallest. Therefore, FC is considered a suitable technique for stabilizing the grip on the smartphone. EFC takes more time to perform gestures than OM; however, it has a high success rate regardless of target size and gesture, and it can stabilize the grip on the smartphone more than DT and OM. In other words, EFC is considered suitable for careful manipulation and for performing gestures on small, unreachable targets.

Table 1: Time [s] of the trials where the trigger was performed. CI denotes the 95% CI.

| Gesture | OM Mean | OM CI | FC Mean | FC CI | EFC Mean | EFC CI |
| --- | --- | --- | --- | --- | --- | --- |
| Tap | 3.82 | 0.41 | 4.14 | 0.67 | 2.28 | 0.24 |
| DTap | 3.81 | 0.59 | 4.01 | 0.62 | 1.84 | 0.25 |
| Swipe | 3.28 | 0.36 | 3.31 | 0.44 | 1.95 | 0.29 |
| Drag | 5.09 | 1.62 | 3.87 | 0.65 | 3.10 | 0.50 |

Table 2: Accuracy [%] of the trials where the trigger was performed. CI denotes the 95% CI.

| Gesture | OM Mean | OM CI | FC Mean | FC CI | EFC Mean | EFC CI |
| --- | --- | --- | --- | --- | --- | --- |
| Tap | 85.63 | 4.93 | 80.27 | 3.52 | 92.53 | 2.81 |
| DTap | 73.13 | 4.49 | 68.03 | 5.74 | 94.62 | 2.41 |
| Swipe | 85.00 | 4.32 | 72.56 | 4.54 | 92.01 | 2.78 |
| Drag | 70.98 | 5.19 | 74.59 | 3.07 | 90.97 | 2.77 |

However, the performance of a one-handed interaction technique varies greatly depending on the situation.
Therefore, we think it is important to allow the user to choose the one-handed interaction technique according to the situation, for example, by enabling both OM and EFC; introducing multiple one-handed interaction techniques can be easily accomplished by assigning a separate trigger to each.

§ 5.3 EFFECTS OF NUMBER OF PERFORMED TRIGGERS

In the user study, all gestures were performed using each technique. However, each technique requires the user to perform the trigger at different times: OM allows the user to keep the contents shrunk until the trigger is performed again; FC allows the user to manipulate the cursor until the finger is removed from the screen after performing the trigger once; EFC, on the other hand, requires the user to perform the trigger each time a gesture is performed with the cursor. Therefore, the results of the user study may be influenced by the number of performed triggers.

To analyze the effect of the number of performed triggers, we extracted only the trials in which one or more triggers were used. The results for Time and Accuracy are shown in Table 1 and Table 2. Jerk and Angular Acceleration did not vary much with whether the trigger was performed; in other words, they are almost identical to Figure 6 and Figure 7.

Based on this result, EFC might be the best technique if the user needs to perform the trigger frequently. However, because the number of trials in which the trigger was performed varies greatly depending on the technique (OM: 142 times, FC: 791 times, EFC: 2,304 times), this result may not be robust. In the user study, in OM, participants continued the task with the reduced contents after the first trigger was performed. However, given the actual usage environment, we expect the number of performed triggers to increase, because the user resizes the contents to their original size to select a small button, type a letter, or enjoy the displayed content (videos and texts).
Therefore, we need to investigate the performance of the techniques in real usage environments.

§ 5.4 CAUSES OF GESTURE FAILURE

In OM, errors were caused by the difficulty of pointing at small targets (i.e., the fat finger problem [43] and occlusion). Particularly in gestures other than Tap, participants were strongly affected by small targets. Therefore, the error rate may be reduced by combining OM with techniques for selecting small targets [4, 38] or techniques for improving touch accuracy [19, 39].

In FC, many participants said that changing the force caused the cursor to move, resulting in errors. In particular, in DTap, the participants had to change the force quickly and repeatedly, which moved the cursor, so the success rate was low. In addition, errors occurred because participants unintentionally applied force while moving the cursor, causing a touch event to be issued. These problems may be mitigated by introducing a filter that separates cursor movement from changes in force. Corsten et al. [11] found that the performance of force-based techniques improves with long-term training, so FC may also improve with more training.

In EFC, participants commented that the cursor moved unintentionally when they released their finger from the screen to decide where to forward the gesture, resulting in errors. The problem that the touch position changes when the user releases the finger from the screen has been investigated by Xu et al. [50]. Since the cursor moved three times the distance the finger moved in the user study, the effect of this problem was likely stronger. Therefore, in EFC, it may be possible to increase the success rate by taking advantage of the state just before the user releases the finger from the screen, as in [10].
§ 6 LIMITATIONS AND FUTURE WORK

In this section, we discuss the limitations of the results of the user study and our future work.

§ 6.1 PARTICIPANTS' INDIVIDUAL ATTRIBUTES AND NUMBER OF PARTICIPANTS

The participants in the user study were all young and familiar with smartphone interaction. However, a wide range of people, from small children to the elderly and from novices to experts, use smartphones. In particular, elderly people are known to have difficulty with force control [26] and with pointing at small targets using their thumbs [27, 49]. That is, age may affect the performance of FC and OM. This suggests that the performance of each technique may vary with individual attributes. In addition, because the number of participants was eight, the results of the user study might not be robust. Therefore, we need to conduct an additional user study with more participants of different individual attributes.

§ 6.2 EFFECTS OF LONG-TERM TRAINING

In the user study, each participant performed each gesture 108 times in each technique, including practice. However, with more training, the performance might change. In particular, Corsten et al. [11] found that the performance of force-based techniques improves with practice. The performance of the other techniques (OM and EFC) might also change as the user becomes an expert. This indicates the need for long-term experiments to accurately determine the performance of one-handed interaction techniques.

§ 6.3 USE IN DIFFERENT ENVIRONMENTS

We conducted the user study with participants seated in a chair. However, smartphones are used in various situations, such as while walking, riding a train, and lying down. It is known that the accuracy of pointing with the thumb is reduced [37] and the user's force resolution is reduced [47] while walking.
In addition, the motion of the smartphone changes according to the user's body posture [13]. Therefore, body posture may impact the performance of techniques that involve large smartphone movement. Moreover, because the situation in which a smartphone is used may also affect the performance of one-handed interaction techniques, we need to investigate this as well.

§ 7 CONCLUSION

In this paper, to enable a user to perform all single-touch gestures in an unreachable area, we designed Force Cursor (FC) and Event Forward Cursor (EFC). FC is a technique that issues touch events at the cursor position using force (touch-down events when the force is increased and touch-up events when the force is decreased). EFC, on the other hand, is a cursor technique that consists of two operation steps: the first determines the touch-event position; the second performs the single-touch gesture. Furthermore, we conducted a user study to investigate the single-touch gesture performance of three one-handed interaction techniques: a contents-shrinking technique (OM), FC, and EFC. The results show that although the time to perform a gesture and the stability of the smartphone did not vary greatly with the performed gesture, the success rate did: EFC had a high success rate regardless of the gesture, while OM and FC had low success rates except for a tap. In addition, we found that both of the techniques we designed enable the user to interact with the smartphone with a stable grip.
\ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/_VoSnnpUDkC/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/_VoSnnpUDkC/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..bcddba5fb1bce0a7c57931f6ec8874994d084f7f --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/_VoSnnpUDkC/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,332 @@ +# Generating Adversarial Examples for Robust Deception against Image Transfer and Reloading + +Category: Research + +## Abstract + +Adversarial examples play an irreplaceable role in evaluating deep learning models' security and robustness. It is necessary and important to understand the effectiveness of adversarial examples to utilize them for model improvement. In this paper, we explore the impact of input transformation on adversarial examples. First, we discover a new phenomenon. Reloading an adversarial example from the disk or transferring it to another platform can deactivate its malicious functionality. The reason is that reloading or transferring images can reduce the pixel precision, which will counter the perturbation added by the adversary. We validate this finding on different mainstream adversarial attacks. Second, we propose a novel Confidence Iteration method, which can generate more robust adversarial examples. The key idea is to set the confidence threshold and add the pixel loss caused by image reloading or transferring into the calculation. We integrate our solution with different existing adversarial approaches. Experiments indicate that such integration can significantly increase the success rate of adversarial attacks. 
Keywords: Adversarial Examples, Robustness, Reloading

Index Terms: Computing methodologies-Computer vision problems; Neural networks-Security and privacy-Software and application security;

## 1 INTRODUCTION

DNNs are well known to be vulnerable to adversarial attacks [1]. An adversarial algorithm can add small but carefully crafted perturbations to an input, misleading the DNN into giving incorrect output with high confidence. Extensive work has been done on attacking supervised DNN applications across various domains such as images [2-5], audio [6-8], and natural language processing [9, 10]. Since DNNs are widely adopted in different AI tasks, adversarial attacks can pose significant security threats to our everyday lives. Moreover, researchers have demonstrated the possibility of adversarial attacks in the physical world [11, 12], proving that the attacks are realistic and severe.

In addition to attacking DNN models, generating powerful and robust adversarial examples also has very positive uses. First, adversarial examples can be used to test and evaluate the robustness and security of DNN models; the more sophisticated and stealthy the adversarial examples are, the more convincing the evaluation results will be. Second, generating adversarial examples can also help defeat such adversarial attacks. One promising defense is adversarial training [2], where adversarial examples are included in the training dataset to train a model that is resistant to them. Obviously, if we inject more powerful adversarial examples into the training set, the model will be more robust.

In this paper, we explore and evaluate the effectiveness of adversarial examples under transformation. Guo et al. [13] studied image transformations (cropping and rescaling, bit-depth reduction, compression) as a defense against adversarial attacks; Dziugaite et al.
[14] conducted comprehensive evaluations of the effectiveness of adversarial examples under JPG compression. Unlike the above work, which actively transforms the images, we consider cases where images are passively transformed by reloading or transferring. We discover that an image loses some precision when it is reloaded from disk or transferred to a different platform. Such precision reduction in an adversarial example can counter the adversarial perturbation, making the attack ineffective. We evaluate the effectiveness of adversarial examples generated by different mainstream methods and find that most of them fail after the image is reloaded or transferred.

To generate robust adversarial examples against image reloading or transferring, we propose a novel approach, Confidence Iteration (CI). Our CI approach dynamically checks each generated example's confidence score to evaluate its effectiveness after being reloaded or transferred, thereby filtering out less qualified adversarial examples.

Our approach has several advantages. First, it is generic and can be integrated with existing adversarial attacks because it is applied outside of the adversarial algorithm. Second, the adversarial examples generated by our approach have higher success rates both before and after they are reloaded or transferred. Third, they have a lower detection rate by state-of-the-art defense solutions. We expect our solution to help researchers better understand, evaluate, and improve DNN models' resistance to various adversarial examples.

In summary, we make the following contributions:

- We are the first to find that adversarial examples can become ineffective after being reloaded or transferred.
We confirm our findings through comprehensive evaluations;

- We propose an effective method, Confidence Iteration, to generate more robust adversarial examples that maintain high attack performance under image transformation.

The rest of the paper is organized as follows: Section 2 gives the background and related work on adversarial attacks and defenses. Section 3 describes and evaluates the effectiveness of adversarial examples after image reloading and transferring. We introduce our approach in Section 4 and evaluate it in Section 5. Section 6 concludes the paper.

## 2 RELATED WORKS

In this section, we give a brief background on attack and defense techniques for adversarial examples. We also introduce the resistance of adversarial examples to input transformation.

### 2.1 Adversarial Attack Techniques

An adversary carefully crafts adversarial examples by adding imperceptible, human-unnoticeable modifications to the original clean input. The target model then predicts this adversarial example as an attacker-specified label (targeted attack) or as an arbitrary incorrect label (untargeted attack). Most adversarial attacks require that the $L_p$ norm of the added modifications not exceed a threshold parameter $\varepsilon$. Different adversarial attack techniques have been proposed; we describe six common attack methods below.

Fast Gradient Sign Method (FGSM) [2]. The intuition of FGSM is that the adversary can modify the input so that the change is aligned with the gradient of the loss function, making the loss increase at the fastest rate. Such changes cause the greatest impact on the classification result, making the neural network misclassify the modified input.

Basic Iterative Method (BIM) [15]. This is a simple extension of FGSM. The basic idea of BIM is to apply FGSM for several iterations, with a small step size for each iteration.
The number of iterations is determined by $\min(\varepsilon + 4, 1.25\varepsilon)$.

DeepFool [16]. DeepFool is based on the assumption that models are fully linear, so there is a polyhedron that separates the individual classes. The DeepFool attack searches for adversarial examples with minimal perturbations within a specific region using the $L_2$ distance. Therefore, one big advantage of DeepFool is that it can automatically determine the optimal perturbation threshold $\varepsilon$.

Decision-Based Attack [17]. The decision-based attack starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. It relies only on the model's final decision. At each step, a perturbation is sampled from a proposal distribution, reducing the distance of the perturbed image towards the original input; progressively smaller adversarial perturbations are found according to a given adversarial criterion. The decision-based attack finally generates an adversarial example with little disturbance near the classification boundary.

HopSkipJump Attack [18]. The HopSkipJump Attack is an algorithm based on a novel estimate of the gradient direction using binary information at the decision boundary. Unlike the Decision-Based Attack, which needs a large number of model queries, the HopSkipJump Attack requires significantly fewer model queries and less generation time. Moreover, in the HopSkipJump Attack, the perturbations are used to estimate a gradient direction, addressing the inefficiency of the Boundary Attack.

Projected Gradient Descent (PGD) [19]. The PGD attack initializes the search for an adversarial example at a random point within the allowed norm ball, then runs several iterations of the basic iterative method [15] to find an adversarial example.

### 2.2 Adversarial Example Defense Techniques
Existing approaches for defeating adversarial examples mainly fall into two categories, as described below.

Adversarial Training. Szegedy et al. [2] proposed that by training the neural network with a mixed dataset of adversarial examples and original clean samples, the new model becomes resistant to adversarial examples. However, Moosavi-Dezfooli et al. [20] showed that an adversary can still generate new examples to fool the defended model.

Adversarial Example Detection. Instead of enhancing the models, these approaches aim to detect adversarial examples. One typical solution is de-noising. Mustafa et al. [21] proposed a wavelet reconstruction algorithm that maps adversarial examples lying outside the manifold region back to the natural images' manifold region through a deep image reconstruction network, effectively restoring the classifier's normal discriminability. Hinton et al. [22] adopted the reconstruction process of a capsule network to detect adversarial examples automatically.

### 2.3 Transformation and Distortion of Adversarial Examples

Most neural networks for image classification are trained on images that have undergone JPG compression; these images constitute the data subspace the networks learn from.

Dziugaite et al. [14] find that perturbations of natural images (adding scaled white noise or randomly corrupting a small number of pixels) are almost certain to move an image out of the JPG subspace and, therefore, out of the data subspace. Adversarial examples can therefore induce the classification network to give wrong classification results. However, when the degree of disturbance is small, the pixel perturbation superimposed on the original image by the adversarial example is also small, which means that these perturbation values are not robust to image compression, storage, and transmission.
This pixel loss is the reason why image transformation or distortion can defeat adversarial examples. Obviously, preserving the pixel perturbation is the key to solving this problem.

## 3 TRANSFERRING AND RELOADING OF ADVERSARIAL EXAMPLES

We study popular image formats and adversarial attack approaches and conclude that image transferring and reloading can significantly reduce the success rate of adversarial attacks.

### 3.1 Root Cause

There are two reasons why image transferring and reloading can deactivate adversarial examples. First, in an adversarial image generated using existing approaches, each pixel is usually represented as a float value. When the image is stored to disk, the pixels are converted to an int type to save space. This precision loss can make the adversarial example ineffective when it is reloaded from disk. We find that the mainstream image formats (BMP, JPEG, and PNG) all perform such pixel conversion. Second, when an image is transferred to a different platform over a network, it is usually compressed to save traffic. For instance, when we used the WeChat application to send pictures from a smartphone to a laptop, we found that the application compresses the pictures at an 80% compression rate by default.

Although such conversion and compression have a humanly unnoticeable impact on the images, they can significantly affect the success rate of adversarial attacks. The adversary's goal is to find the smallest perturbation that causes the model to classify the image into an attack-specific category. Common techniques usually move the original clean sample towards the classification boundary and stop as soon as the sample crosses it, to keep the added perturbation small. The adversarial examples therefore have very high precision requirements on their pixel values.
The small changes caused by image reloading or transferring can move the adversarial images into classes different from the one the adversary desires, making the adversarial examples ineffective. Figure 1 illustrates the adverse effects of image reloading and image-format transformation on the adversarial effect of an adversarial example. Below, we conduct a set of experiments to validate those effects empirically.

![01963eb2-d575-7633-8761-d17bdd727eb1_1_1114_1376_344_336_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_1_1114_1376_344_336_0.jpg)

Figure 1: Red dots represent data, and the gray line represents the hyperplane that separates the individual classes. The gray dots represent the inner boundary of the adversarial examples. The green dot represents a specific adversarial example. The yellow dot shows that reloading can project this adversarial example back into the original sample space.

### 3.2 Experiments

#### 3.2.1 Impact of image reloading

We first empirically check the effectiveness of adversarial examples after being saved and reloaded.

![01963eb2-d575-7633-8761-d17bdd727eb1_2_170_150_682_171_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_2_170_150_682_171_0.jpg)

Figure 2: Pixel values before and after saving/reloading

Precision loss. We generate a $3 \times 3$ image and add to each pixel value a random perturbation $q$ between 0 and 1. We then save the image in three different formats (JPG, BMP, PNG) and reload it into memory. All operations are performed under Windows 10.

Figure 2 shows the pixel values of the original image (2a) and the reloaded JPG (2b), PNG (2c), and BMP (2d) images, respectively. We observe that each image format incurs precision loss due to the type conversion from float to int: the JPG format directly discards the decimals, while the PNG and BMP formats round them off.
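This float-to-int quantization can be reproduced without any image codec. A minimal pure-Python sketch (the actual conversion happens inside the image writer when the file is saved; truncation mirrors the JPG behavior observed above, rounding mirrors PNG/BMP):

```python
import random

random.seed(0)
# A 3x3 "image" whose pixels carry a sub-integer perturbation q in (0, 1)
original = [[100.0 + random.random() for _ in range(3)] for _ in range(3)]

# Saving casts each float pixel to an 8-bit integer:
truncated = [[int(p) for p in row] for row in original]        # discards decimals (JPG-like)
rounded = [[int(round(p)) for p in row] for row in original]   # rounds decimals (PNG/BMP-like)

# Either way, the fractional perturbation cannot survive the round trip:
max_error = max(abs(p - t) for orow, trow in zip(original, truncated)
                for p, t in zip(orow, trow))
print(max_error < 1.0)  # prints True: the error is bounded by the quantization step
```

Any perturbation smaller than one pixel level is thus partially or wholly erased by a save/reload cycle.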
Although such quantization has no visually perceptible effect on the image, it can affect the results of adversarial attacks, as these attacks require precise pixel-level perturbations. We demonstrate the effects below.

Effectiveness of adversarial examples. We measure the performance of adversarial examples after being reloaded or transferred. We select six commonly used adversarial attack approaches: Decision-Based Attack [17], HopSkipJump Attack [18], DeepFool [16], BIM [15], FGSM [2], and PGD [19], and generate adversarial examples with each. The Decision-Based Attack, HopSkipJump Attack, and PGD use the ResNet50 classifier; DeepFool uses the ResNet34 classifier; BIM and FGSM use the VGG11 classifier. All adversarial examples are tested with the classifier used at the time of generation.

We find that all six adversarial attack methods judge whether an attack is successful by measuring the classification number and confidence of the adversarial example at generation time. In fact, the classification number and confidence at this point are not authoritative, because the model is not classifying a real image: all six methods feed the generated NumPy array, rather than a saved picture, to the model (e.g., ResNet50). That is, at this point, no image file of the adversarial example has yet been generated. To test the effectiveness of the adversarial examples, we use cv2.imwrite and plt.savefig to save the adversarial examples locally. Next, we use the same model (e.g., ResNet50) to load and classify the adversarial examples saved locally. In this paper, we refer to this procedure as "Reloading."

We also find that when images are transmitted through instant-messaging software, the software compresses them to save bandwidth, which results in pixel loss; this is detrimental to adversarial examples generated with subtle perturbations.
For example, when we use WeChat to send an image to a friend, the friend sees the compressed image, which consumes only a small amount of traffic. Instead of clicking the "download the original" button, we save the compressed image locally and use the above model to classify it. We refer to this procedure as "Transferring."

Figures 3 and 4 illustrate the adversarial examples' confidence values after being reloaded and transferred. Different colors represent different classification numbers, the height of each column represents confidence, and each block shows the six algorithms from left to right: Decision-Based Attack [17], HopSkipJump Attack [18], DeepFool [16], BIM [15], FGSM [2], and PGD [19]. We can see that the initial image is correctly classified with high confidence under all six algorithms. Moreover, all the adversarial examples generated by the algorithms are classified into other categories, which means that the six algorithms' adversarial examples can deceive the classification models into giving false results.

![01963eb2-d575-7633-8761-d17bdd727eb1_2_957_151_667_396_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_2_957_151_667_396_0.jpg)

Figure 3: Classification number and confidence of adversarial examples after being reloaded and transferred.

![01963eb2-d575-7633-8761-d17bdd727eb1_2_954_626_667_387_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_2_954_626_667_387_0.jpg)

Figure 4: Classification number and confidence of adversarial examples after being reloaded and transferred for another picture.

Surprisingly, we find that regardless of whether adversarial examples are saved as JPG, PNG, or BMP, most of them are classified as the original clean image once they are reloaded or transferred, some even with high confidence. This is reflected in the figures: after Reloading or Transferring, the image is classified with the same color as the original clean image.
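The qualitative effect above can be reproduced with a minimal numerical analogue. The one-pixel "model", threshold, and pixel values below are made up for illustration and stand in for a real classifier's decision boundary:

```python
# A sub-quantization-step perturbation flips the in-memory prediction, but
# the rounding/truncation of a save + reload removes it, reverting the label.

def classify(x, threshold=100.3):
    # hypothetical one-feature "model": class 1 if intensity > threshold
    return 1 if x > threshold else 0

clean = 100.0            # classified as class 0
adv = clean + 0.4        # crafted perturbation smaller than one pixel step

assert classify(clean) == 0
assert classify(adv) == 1          # attack succeeds on the float array
assert classify(round(adv)) == 0   # PNG/BMP-style reload: 100.4 -> 100
assert classify(int(adv)) == 0     # JPG-style truncation also reverts it
```

This is exactly the geometry of Figure 1: quantization projects the adversarial point back into the clean sample's region.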
To present this phenomenon with more concrete data, Tables 1 and 2 report two further experiments on Reloading and Transferring. Each entry gives the classification number and confidence (in brackets). We find that many of the adversarial examples generated by the six adversarial attacks cannot maintain their attack ability after being reloaded or transferred: afterwards, the classifier assigns them the original clean samples' labels (such as 90 and 129). To verify that adversarial examples with high confidence also suffer from the Reloading and Transferring problems, we conduct the following experiments, with results in Table 3.

The adversarial examples of Picture1~Picture4, generated with high confidence as shown in Figure 5, are largely classified as the original clean samples after being reloaded or transferred (Table 3), proving that high-confidence adversarial examples also suffer from the Reloading and Transferring problems.

All data in Tables 1 to 3 are derived from the ResNet50 model.

Cross Validation. Instead of using the same model to verify adversarial examples' effectiveness, we conduct two sets of cross-validation experiments: one uses the Reloaded images, and the other uses the Transferred images. The classification number of the initial clean image is 129. The classification numbers of its adversarial examples generated by the six adversarial algorithms are no longer 129, which means the adversarial attacks succeed (not shown in Table 4). We feed the two sets of adversarial examples generated by algorithm A into algorithm B after they are Reloaded or Transferred, to cross-verify their adversarial effectiveness. Table 4 shows their classification numbers and confidence under the other algorithms.
![01963eb2-d575-7633-8761-d17bdd727eb1_3_214_391_695_1414_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_3_214_391_695_1414_0.jpg)

Figure 5: Adversarial Examples generated from Picture1~Picture4

Clearly, whether Reloaded or Transferred, the adversarial examples lose their effectiveness against both their own and the other adversarial algorithms' models. After WeChat transmission, because the image is compressed in transit, four additional entries in the table are recovered as clean samples.

Multiple attacks. In this experiment, we use existing adversarial examples as input and conduct further adversarial attacks on them. The new adversarial examples are tested for effectiveness after being reloaded. The results are shown in Table 5.

As Table 5 shows, even if we feed a generated adversarial example into another generation algorithm, the loss of adversarial effectiveness caused by reloading and transferring persists and remains severe. Apart from one entry for which adversarial example generation across models failed and one entry misclassified as classification number 533, all other adversarial examples are classified as the initial clean sample's classification number 129.

The results above show that Reloading and Transferring significantly reduce the effectiveness of adversarial attacks. This holds for single attacks, cross attacks, and multiple attacks.

### 3.3 Spectrum Analysis

Next, spectrum analysis is performed on the adversarial examples used in Table 1 and Table 2.

The spectrum analysis results are shown in Figure 6. From left to right are the initial images and the adversarial examples generated by the BIM, FGSM, and Deepfool algorithms. We find that the Deepfool algorithm preserves the original clean sample's appearance to the greatest extent.
In contrast, the FGSM algorithm introduces more noise, which is reflected in the spectrum map: FGSM produces a more uniformly distributed spectrum with more low-frequency components. This is why the adversarial examples generated by the FGSM algorithm better resist the Reloading and Transferring losses in Table 1 and Table 2.

The wavelet transform spectrum diagrams of the original picture and the adversarial examples of BIM, FGSM, and Deepfool are shown in Figure 7, from left to right. In the wavelet domain, the original clean image is closest to the adversarial example generated by Deepfool in both the low- and high-frequency components, which means that Deepfool attacks with minimal perturbation but is also least able to maintain its adversarial effect. The FGSM algorithm applies a larger disturbance, so its high- and low-frequency components in the wavelet domain differ considerably from the original clean image, and it preserves its adversarial effect relatively well.

## 4 A ROBUST APPROACH TO GENERATING ADVERSARIAL EXAMPLES

As discussed in Section 3, adversarial examples generated by existing techniques become ineffective after being reloaded or transferred. In this section, we propose an efficient and robust approach, Confidence Iteration (CI), to produce adversarial examples that are resistant to Reloading and Transferring. CI is generic: it can be integrated with all existing adversarial example techniques to improve their robustness while maintaining their advantages.

The intuition behind our CI approach is that an adversarial example's confidence score for the attacker-specified classification number reflects its resistance to input reloading or transferring. We use an existing technique (e.g., FGSM, BIM) to generate an adversarial example, save it to the local disk, and measure its confidence score for the target class.
(This step necessarily reloads the image.) If the confidence score is higher than a threshold, we accept the image. Otherwise, we continue to iterate, save the result locally (or transfer it through WeChat transmission), and measure the reloaded confidence score for the target class until it meets the confidence requirement or exceeds the iteration limit. When the confidence value $c$ meets the required threshold $p$, the adversarial example image saved to the hard disk is resistant to these pixel-value variations. Moreover, the repeated gradient ascent across iterations pushes the pixel values consistently in the same direction; after many iterations, the fractional parts of some pixel values accumulate into the integer part and can no longer be discarded by quantization. To measure whether an adversarial example remains effective after image distortion, we adopt the wavelet reconstruction algorithm [21]. We first process the adversarial examples with a wavelet denoising algorithm and then feed the denoised images into ESRGAN, a super-resolution reconstruction network. Adversarial examples with weak attack ability are classified as the initial clean samples after this processing, which means their attack ability has been lost. By detecting the adversarial examples processed by the wavelet reconstruction algorithm, we can measure the robustness and effectiveness of the generated adversarial examples.

Table 1: Classification number and confidence of an adversarial example after being reloaded and transferred
| Classification number (confidence) | Decision | HopSkipJump | Deepfool | BIM | FGSM | PGD |
| --- | --- | --- | --- | --- | --- | --- |
| Original images | 90 (74.060%) | 90 (74.060%) | 90 (99.811%) | 90 (99.582%) | 90 (97.312%) | 90 (100.000%) |
| Adversarial images | 852 (15.062%) | 84 (48.441%) | 95 (49.315%) | 95 (61.163%) | 735 (44.672%) | 318 (100.000%) |
| JPG reloading | 90 (72.291%) | 90 (69.921%) | 90 (52.677%) | 90 (46.958%) | 84 (99.217%) | 90 (99.651%) |
| JPG transferring | 90 (63.671%) | 90 (93.686%) | 90 (52.985%) | 90 (47.276%) | 84 (99.402%) | 90 (96.650%) |
| PNG reloading | 84 (52.540%) | 84 (83.981%) | 90 (43.454%) | 90 (45.934%) | 84 (99.421%) | 90 (94.402%) |
| PNG transferring | 90 (82.835%) | 90 (50.656%) | 90 (80.671%) | 90 (36.895%) | 84 (89.627%) | 90 (99.985%) |
| BMP reloading | 84 (52.540%) | 84 (83.981%) | 90 (43.454%) | 90 (45.934%) | 84 (99.421%) | 90 (94.402%) |
| BMP transferring | 90 (82.835%) | 90 (50.656%) | 90 (80.671%) | 90 (36.895%) | 84 (89.627%) | 90 (99.985%) |
+ +Table 2: Classification number and confidence of another adversarial example after being reloaded and transferred + +
| Classification number (confidence) | Decision | HopSkipJump | Deepfool | BIM | FGSM | PGD |
| --- | --- | --- | --- | --- | --- | --- |
| Original images | 129 (89.531%) | 129 (89.531%) | 129 (86.374%) | 129 (71.917%) | 129 (91.494%) | 129 (98.182%) |
| Adversarial images | 852 (12.363%) | 132 (36.282%) | 128 (48.604%) | 128 (98.746%) | 915 (5.642%) | 128 (97.858%) |
| JPG reloading | 132 (35.742%) | 129 (65.183%) | 129 (60.726%) | 129 (87.825%) | 132 (51.324%) | 129 (81.000%) |
| JPG transferring | 132 (34.461%) | 129 (58.947%) | 129 (88.792%) | 129 (85.496%) | 132 (30.130%) | 129 (98.601%) |
| PNG reloading | 132 (53.513%) | 129 (64.022%) | 129 (53.670%) | 129 (85.081%) | 132 (53.185%) | 128 (38.533%) |
| PNG transferring | 129 (36.472%) | 129 (77.169%) | 129 (81.671%) | 129 (81.244%) | 129 (41.192%) | 129 (89.833%) |
| BMP reloading | 132 (53.513%) | 129 (64.022%) | 129 (53.670%) | 129 (85.081%) | 132 (53.185%) | 128 (38.533%) |
| BMP transferring | 129 (36.472%) | 129 (77.169%) | 129 (81.671%) | 129 (81.244%) | 129 (41.192%) | 129 (89.833%) |
![01963eb2-d575-7633-8761-d17bdd727eb1_4_168_891_696_375_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_4_168_891_696_375_0.jpg)

Figure 6: Spectrum analysis of pictures in Table 1 and Table 2

![01963eb2-d575-7633-8761-d17bdd727eb1_4_934_1011_709_844_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_4_934_1011_709_844_0.jpg)

Figure 7: Wavelet transform spectrum diagrams of the original picture and the adversarial examples of BIM, FGSM, and Deepfool, from left to right

Table 3: Classification number and confidence of adversarial examples generated from Picture1~Picture4 after being reloaded and transferred
| FGSM | Picture1 | Picture2 | Picture3 | Picture4 |
| --- | --- | --- | --- | --- |
| Original images | 106 (94.478%) | 288 (90.196%) | 173 (92.451%) | 376 (99.613%) |
| Adversarial images | 343 (84.336%) | 293 (95.005%) | 104 (86.118%) | 371 (69.347%) |
| JPG reloading | 106 (99.904%) | 288 (49.574%) | 104 (28.730%) | 371 (34.062%) |
| JPG transferring | 106 (99.953%) | 288 (54.895%) | 104 (31.623%) | 371 (33.070%) |
| PNG reloading | 106 (99.685%) | 608 (26.309%) | 173 (49.878%) | 376 (36.097%) |
| PNG transferring | 106 (99.807%) | 390 (47.548%) | 173 (47.880%) | 371 (66.135%) |
| BMP reloading | 106 (99.685%) | 608 (26.309%) | 173 (49.878%) | 376 (36.097%) |
| BMP transferring | 106 (99.807%) | 390 (47.548%) | 173 (47.880%) | 371 (66.135%) |
+ +Table 4: Classification number and confidence of adversarial examples after being reloaded and transferred using Cross-Validation + +
| Classification number (confidence) | Original clean image | Deepfool | BIM | FGSM |
| --- | --- | --- | --- | --- |
| Deepfool reloading | 129 (89.16%) | 129 (72.14%) | 129 (86.31%) | 128 (57.74%) |
| Deepfool transferring | 129 (91.25%) | 128 (77.49%) | 129 (89.12%) | 129 (67.91%) |
| BIM reloading | 128 (72.14%) | 128 (77.30%) | 129 (60.48%) | 129 (65.53%) |
| BIM transferring | 129 (91.81%) | 128 (78.98%) | 141 (47.95%) | 129 (85.49%) |
| FGSM reloading | 132 (82.26%) | 129 (40.96%) | 129 (14.19%) | 129 (58.89%) |
| FGSM transferring | 129 (65.64%) | 129 (42.91%) | 129 (15.98%) | 129 (88.35%) |
| PGD reloading | 129 (60.72%) | 129 (87.82%) | 129 (89.18%) | 129 (81.00%) |
| PGD transferring | 129 (66.12%) | 129 (68.06%) | 129 (89.11%) | 129 (98.60%) |
Algorithm 1 Confidence Iteration

---

Input: A classifier $f$ with loss function $J$; a real example $\mathbf{x}$ and its ground-truth label $y$; the perturbation size $\varepsilon$; the iteration limit ${T}_{\max}$ and the confidence threshold $p$.

Output: The iteration count $T$; an adversarial example ${\mathbf{x}}^{*}$ with $\left\| {\mathbf{x}}^{*} - \mathbf{x} \right\|_{\infty} \leq T\varepsilon$.

1. $T = 0$; ${\mathbf{x}}_{T}^{*} = \mathbf{x}$
2. Save ${\mathbf{x}}_{T}^{*}$ as a picture ${\mathbf{x}}_{T}^{\text{real}}$ on the local hard drive (or transfer it through WeChat transmission)
3. Input ${\mathbf{x}}_{T}^{\text{real}}$ to $f$ and obtain the confidence $c$ and the gradient ${\nabla}_{\mathbf{x}} J\left( {\mathbf{x}}_{T}^{\text{real}}, {y}_{\text{true}} \right)$
4. while $\left( T \leq {T}_{\max} \right)$ and $\left( c \leq p \right)$ do
5. ${\mathbf{x}}^{*} = {\mathbf{x}}_{T}^{\text{real}} + \varepsilon \cdot {\nabla}_{\mathbf{x}} J\left( {\mathbf{x}}_{T}^{\text{real}}, {y}_{\text{true}} \right)$
6. $T = T + 1$; ${\mathbf{x}}_{T}^{*} = {\mathbf{x}}^{*}$
7. Save ${\mathbf{x}}_{T}^{*}$ as a picture ${\mathbf{x}}_{T}^{\text{real}}$ on the local hard drive (or transfer it through WeChat transmission)
8. Reload ${\mathbf{x}}_{T}^{\text{real}}$ into $f$ and obtain the confidence $c$ and the gradient ${\nabla}_{\mathbf{x}} J\left( {\mathbf{x}}_{T}^{\text{real}}, {y}_{\text{true}} \right)$
9. end while

---

Algorithm 1 summarizes our CI approach. We first input the clean image, generate its adversarial example, and save the adversarial example locally. The locally saved adversarial example is then reloaded into the classification model, which judges whether the adversarial attack succeeds.
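As an illustration only, the loop of Algorithm 1 can be sketched in pure Python. The classifier, gradient, and "save to disk" below are hypothetical stand-ins: saving is modeled as 8-bit rounding, which is the quantization CI must survive, and the confidence surrogate simply grows as pixels move away from the clean image:

```python
# Sketch of Confidence Iteration with toy stand-ins for f, J, and imwrite.

def save_and_reload(x):
    # models save + reload: pixel values are quantized to 8-bit integers
    return [min(255, max(0, round(v))) for v in x]

def confidence_and_gradient(x, y_true):
    # toy surrogate for f: target-class confidence grows with the distance
    # from the clean image (all pixels 100); gradient is constant here
    c = min(1.0, sum(abs(v - 100) for v in x) / (5 * len(x)))
    grad = [1.0] * len(x)            # ascent direction of J(x, y_true)
    return c, grad

def confidence_iteration(x, y_true, eps=0.6, t_max=6, p=0.7):
    t = 0
    x_real = save_and_reload(x)
    c, grad = confidence_and_gradient(x_real, y_true)
    while t <= t_max and c <= p:
        x_adv = [v + eps * g for v, g in zip(x_real, grad)]
        t += 1
        x_real = save_and_reload(x_adv)   # re-quantize on every iteration
        c, grad = confidence_and_gradient(x_real, y_true)
    return x_real, c, t

x_star, c, t = confidence_iteration([100.0] * 8, y_true=0)
assert c > 0.7 and t <= 6   # stops once the *reloaded* confidence clears p
```

Note that with a sub-quantum step (eps below 0.5 here), every update would be erased by the rounding and the loop would stall at `t_max` with low confidence; checking the reloaded confidence rather than the in-memory one is precisely what lets CI detect and reject such fragile examples.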
On the premise that the adversarial attack succeeds, we obtain the confidence value $c$ of the attack by reloading the adversarial example from the hard disk into the classification model. We then compare the expected confidence threshold $p$ with the current confidence $c$. If the current confidence $c$ is less than the expected threshold $p$ and the current iteration number $T$ is less than the iteration limit ${T}_{\max}$, we run the Confidence Iteration algorithm again, save the generated adversarial example locally, and compare $c$ and $p$ once more. The whole process stops only when $c$ is greater than $p$ or $T$ reaches ${T}_{\max}$.

Precisely because the CI algorithm contains this saving, reloading, and confidence-judgment process, we can append it to the back end of any adversarial example generation algorithm to enhance the adversarial example's robustness against reloading and transferring.

## 5 EVALUATION

In this section, we conduct experiments to validate the effectiveness of our proposed approach.

### 5.1 Configurations

Dataset. To better reflect the real-world setting, we implement a crawler to collect images from websites instead of using existing image datasets. We consider the Inception v3 model [23] and restrict the scope of crawled images to the categories recognized by this model. We filter out the crawled images that the Inception v3 model cannot correctly recognize and finally establish a dataset of around 1300 clean images with correct labels.

Implementations. We consider two adversarial example techniques: FGSM and BIM. Our CI approach is generic and can be applied to other techniques as well. We select VGG11 [24] as the target model. We implement these techniques with the CI algorithm using the PyTorch library. We set the iteration upper limit ${T}_{\max}$ to 6 and the confidence threshold $p$ to 70%.

Metrics. 
We adopt two metrics to evaluate the effectiveness of adversarial examples. (1) The success rate is defined in Equation 1a, where ${N}_{s}$ is the number of adversarial examples that are misclassified by the target model $f$ while their clean counterparts are classified correctly, and ${N}_{f}$ is the number of adversarial examples that are predicted as the corresponding clean image's label. (2) The average confidence score is defined in Equation 1b, where ${p}_{i}$ is the target model's highest false-classification confidence score. (Here we do not count adversarial examples that the model classifies under their clean sample's label.)

$$
{P}_{adv} = \frac{{N}_{s}}{{N}_{s} + {N}_{f}} \tag{1a}
$$

$$
{C}_{\text{ave}} = \frac{1}{{N}_{s}}\mathop{\sum }\limits_{{i = 1}}^{{N}_{s}}{p}_{i} \tag{1b}
$$

Table 5: Classification number and confidence of adversarial examples after multiple attacks
| An image with classification number 129 | +0 | +Deepfool | +BIM | +FGSM | +PGD |
| --- | --- | --- | --- | --- | --- |
| Deepfool | 129 (60.72%) | 129 (71.88%) | 129 (95.91%) | Unsuccessful generation | 129 (92.27%) |
| BIM | 129 (89.82%) | 129 (92.32%) | 129 (99.37%) | 129 (98.65%) | 129 (90.22%) |
| FGSM | 129 (89.18%) | 129 (72.24%) | 533 (31.53%) | 129 (71.53%) | 129 (55.72%) |
| PGD | 129 (81.00%) | 129 (82.52%) | 129 (88.24%) | 129 (99.71%) | 129 (55.72%) |
### 5.2 Results and Analysis

Adversarial example generation. We first show adversarial examples generated by FGSM, CI-FGSM, BIM, and CI-BIM in Figure 8. Similar to FGSM and BIM, our CI-FGSM and CI-BIM produce adversarial examples with imperceptible perturbations that fool the target deep learning model.

![01963eb2-d575-7633-8761-d17bdd727eb1_6_196_1065_636_936_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_6_196_1065_636_936_0.jpg)

Figure 8: Adversarial examples generated by FGSM, CI-FGSM, BIM, and CI-BIM

Attack effects after image reloading. Using the different approaches, we generate a large number of adversarial images and save them to local disk. We then reload them and feed them into the target model for prediction. Table 6 reports the success rates and average confidence scores of the four algorithms. FGSM has a lower success rate and confidence score because it applies a single large-step perturbation, whereas BIM achieves higher attack performance. For our CI-BIM, although the confidence score is slightly lower than BIM's, the success rate is much higher. Our CI approach is more effective when $\varepsilon$ is smaller.

Different parameters lead to different effects of the CI approach. Figure 9 shows the success rate of adversarial examples generated by the CI algorithm under different thresholds $p$. Increasing $p$ significantly improves the attack performance. To boost adversarial attacks, conventional approaches require a large disturbance hyper-parameter $\varepsilon$ (e.g., 16) and a large number of iterations $T$ (e.g., 10). To achieve similar attack effects, our CI approach only needs to increase the threshold while keeping smaller values of $\varepsilon$ (0.05~0.2) and $T$ (e.g., 6).

Resistance against Detection of Adversarial Examples. 
In addition to defeating input transformation, our CI approach is better at evading adversarial example detection. We use the wavelet reconstruction algorithm [21] as the defense method to measure the performance of the different adversarial algorithms. After being processed by the wavelet reconstruction algorithm, adversarial examples with weak attack capability are identified under the initial clean image's label by the classification model. Specifically, we first process the adversarial examples with a wavelet denoising algorithm and then feed the denoised images into ESRGAN, a super-resolution reconstruction network. By detecting the adversarial examples processed in this way, we can measure the robustness of the generated adversarial examples. We set the parameter $\varepsilon$ to 0.1 and vary the parameter $\delta$ of the wavelet denoising algorithm from 0.01 to 0.1. Figure 10 shows the comparison results. Although the attack performance of BIM is better than FGSM's, the adversarial examples generated by the BIM algorithm are more easily detected under the same parameters. In contrast, our CI method achieves high attack performance while remaining hard to detect, especially when the detection parameter $\delta$ is small.

Application to other adversarial example techniques. In addition to BIM, our CI approach can also be applied to other adversarial attack algorithms to boost their adversarial examples. Figure 11 shows the attack performance of FGSM with CI compared with plain FGSM. The CI approach improves the attack performance of FGSM, and the improvement is more pronounced when the parameter $\varepsilon$ is smaller. At the same time, CI-FGSM performs much better than the ordinary BIM algorithm, and CI-BIM attains the best adversarial success rate among the four algorithms, which is easy to understand.
When the parameter $\varepsilon$ is small, the FGSM algorithm superimposes a perturbation with a small step size, the BIM algorithm iterates these small-step perturbations, and the CI-BIM algorithm iterates again over those iterations until the confidence meets the requirement $p$. This is iteration at different scales; in a sense, our method implements an adaptive-step-size attack. When $\varepsilon$ is relatively small, the adjustment range of the dynamic step size is larger, which means that CI-BIM can search for adversarial examples with high attack capability over a wider range. Essentially, the CI-BIM algorithm achieves higher adversarial attack performance because of its wider search domain for generating more robust adversarial examples.

Table 6: The success rates and average confidence scores of adversarial examples
| | Success rate ($\varepsilon = 0.1$) | Success rate ($\varepsilon = 0.2$) | Success rate ($\varepsilon = 0.3$) | Confidence score ($\varepsilon = 0.1$) | Confidence score ($\varepsilon = 0.2$) | Confidence score ($\varepsilon = 0.3$) |
| --- | --- | --- | --- | --- | --- | --- |
| FGSM | 81.4% | 95.8% | 99.2% | 23.3% | 20.3% | 27.0% |
| CI-FGSM | 87.0% | 96.5% | 98.8% | 22.9% | 21.5% | 27.7% |
| BIM | 87.5% | 94.4% | 94.0% | 74.7% | 73.2% | 68.3% |
| CI-BIM | 95.5% | 98.9% | 99.3% | 57.8% | 62.9% | 63.7% |
## 6 CONCLUSION

In this paper, we evaluate the effectiveness of adversarial examples after being reloaded or transferred. We discover that most mainstream adversarial attacks fail under such input transformation. We then propose a new solution, Confidence Iteration, to generate high-quality and robust adversarial examples. This solution can significantly strengthen other existing attacks, increasing the attack success rate and reducing the detection rate. Future work includes more evaluation of the integration with other attack techniques and leveraging Confidence Iteration to enhance DNN models via testing and adversarial training.

## REFERENCES

[1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

[2] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

[3] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-9, 2015.

[4] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372-387. IEEE, 2016.

[5] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE, 2017.

[6] Moustafa Alzantot, Bharathan Balaji, and Mani Srivastava. Did you hear that? Adversarial examples against automatic speech recognition. arXiv preprint arXiv:1801.00554, 2018.
[7] Xuejing Yuan, Yuxuan Chen, Yue Zhao, Yunhui Long, Xiaokang Liu, Kai Chen, Shengzhi Zhang, Heqing Huang, Xiaofeng Wang, and Carl A Gunter. CommanderSong: A systematic approach for practical adversarial voice recognition. In 27th USENIX Security Symposium (USENIX Security 18), pages 49-64, 2018.

[8] Nicholas Carlini and David Wagner. Audio adversarial examples: Targeted attacks on speech-to-text. In 2018 IEEE Security and Privacy Workshops (SPW), pages 1-7. IEEE, 2018.

[9] Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. Generating natural language adversarial examples. arXiv preprint arXiv:1804.07998, 2018.

[10] Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. TextBugger: Generating adversarial text against real-world applications. arXiv preprint arXiv:1812.05271, 2018.

[11] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.

[12] N. Papernot, P. McDaniel, and I. Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.

[13] Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens Van Der Maaten. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117, 2017.

[14] Gintare Karolina Dziugaite, Zoubin Ghahramani, and Daniel M Roy. A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853, 2016.

[15] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.

[16] S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2574-2582, 2016.

[17] W. Brendel, J. Rauber, and M. Bethge. 
Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248, 2017. + +[18] J. Chen, M. I. Jordan, and M. J. Wainwright. Hopskipjumpattack: A query-efficient decision-based attack. arXiv preprint arXiv:1904.02144, 2019. + +[19] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. + +[20] S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1765-1773, 2017. + +[21] A. Mustafa, S. H. Khan, M. Hayat, J. Shen, and L. Shao. Image super-resolution as a defense against adversarial attacks. arXiv preprint arXiv:1901.01677, 2019. + +[22] N. Frosst, S. Sabour, and G. Hinton. Darccc: Detecting adversaries by reconstruction from class conditional capsules. arXiv preprint arXiv:1811.06969, 2018. + +[23] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818-2826, 2016. + +[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 
+ +![01963eb2-d575-7633-8761-d17bdd727eb1_8_304_293_543_444_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_8_304_293_543_444_0.jpg) + +Figure 9: Success rate of the adversarial examples generated by CI approach with different generation parameter $p$ + +![01963eb2-d575-7633-8761-d17bdd727eb1_8_303_871_551_440_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_8_303_871_551_440_0.jpg) + +Figure 10: Detection rate of the adversarial examples with wavelet reconstruction algorithm + +![01963eb2-d575-7633-8761-d17bdd727eb1_8_302_1443_550_443_0.jpg](images/01963eb2-d575-7633-8761-d17bdd727eb1_8_302_1443_550_443_0.jpg) + +Figure 11: Comparison of the adversarial success rate of FGSM, CI-FGSM, BIM and CI-BIM + diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/_VoSnnpUDkC/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/_VoSnnpUDkC/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..476b8d42dad3195b092fb7c8c1b7887bad53d6af --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/_VoSnnpUDkC/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,412 @@ +§ GENERATING ADVERSARIAL EXAMPLES FOR ROBUST DECEPTION AGAINST IMAGE TRANSFER AND RELOADING + +Category: Research + +§ ABSTRACT + +Adversarial examples play an irreplaceable role in evaluating deep learning models' security and robustness. It is necessary and important to understand the effectiveness of adversarial examples to utilize them for model improvement. In this paper, we explore the impact of input transformation on adversarial examples. First, we discover a new phenomenon. Reloading an adversarial example from the disk or transferring it to another platform can deactivate its malicious functionality. 
The reason is that reloading or transferring images reduces the pixel precision, which counters the perturbation added by the adversary. We validate this finding on different mainstream adversarial attacks. Second, we propose a novel Confidence Iteration method, which can generate more robust adversarial examples. The key idea is to set a confidence threshold and incorporate the pixel loss caused by image reloading or transferring into the calculation. We integrate our solution with different existing adversarial approaches. Experiments indicate that such integration can significantly increase the success rate of adversarial attacks.

Keywords: Adversarial Examples, Robustness, Reloading

Index Terms: Computing methodologies-Computer vision problems; Neural networks-Security and privacy-Software and application security;

§ 1 INTRODUCTION

DNNs are well known to be vulnerable to adversarial attacks [1]. An adversarial algorithm can add small but carefully crafted perturbations to an input, which can mislead the DNN into giving incorrect output with high confidence. Extensive work has been done on attacking supervised DNN applications across various domains such as image [2-5], audio [6-8], and natural language processing [9, 10]. Since DNNs are widely adopted in different AI tasks, adversarial attacks can bring significant security threats and damage to our everyday lives. Moreover, researchers have demonstrated the possibility of adversarial attacks in the physical world [11, 12], proving that the attacks are realistic and severe.

In addition to attacking DNN models, generating powerful and robust adversarial examples also has very positive uses. First, adversarial examples can be used to test and evaluate the robustness and security of DNN models. The more sophisticated and stealthy the adversarial examples are, the more convincing their evaluation results will be.
Second, generating adversarial examples can also help defeat such adversarial attacks. One promising defense is adversarial training [2], where adversarial examples are included in the training dataset to train a model that is resistant to them. Obviously, the more powerful the adversarial examples injected into the training set, the more robust the resulting model.

In this paper, we explore and evaluate the effectiveness of adversarial examples under transformation. Guo et al. [13] studied image transformation (cropping-scaling, bit-depth reduction, compression) as a defense against adversarial attacks; Dziugaite et al. [14] conducted comprehensive evaluations of the effectiveness of adversarial examples under JPG compression. Unlike the above work, which actively transforms the images, we consider cases where images are passively transformed due to reloading or transferring. We discover that an image loses some precision when it is reloaded from disk or transferred to a different platform. Such precision reduction in an adversarial example can counter the adversarial perturbation, making the attack ineffective. We evaluate adversarial examples' effectiveness under different mainstream methods and find that most of them fail after the image is reloaded or transferred.

To generate robust adversarial examples against image reloading or transferring, we propose a novel approach, Confidence Iteration (CI). Our CI approach dynamically checks a generated example's confidence score to evaluate its effectiveness after being reloaded or transferred, and thereby filters out less qualified adversarial examples.

Our approach has several advantages. First, it is generic and can be integrated with existing adversarial attacks for enhancement, because it can be called outside of the adversarial algorithm.
Second, the adversarial examples generated by our approach have higher success rates both before and after they are reloaded or transferred. Third, the adversarial examples generated by our approach have a lower detection rate against state-of-the-art defense solutions. We expect our solution to help researchers better understand, evaluate, and improve DNN models' resistance against various adversarial examples.

In summary, we make the following contributions:

 * We are the first to find that adversarial examples can become ineffective after being reloaded or transferred, and we confirm this finding through comprehensive evaluations;

 * We propose an effective method, Confidence Iteration, to generate more robust adversarial examples that maintain high attack performance under image transformation.

The rest of the paper is organized as follows: Section 2 gives the background and related work on adversarial attacks and defenses. Section 3 describes and evaluates adversarial examples' effectiveness after image reloading and transferring. We introduce our approach in Section 4 and evaluate it in Section 5. Section 6 concludes the paper.

§ 2 RELATED WORK

In this section, we give a brief background on attack and defense techniques for adversarial examples. We also introduce the resistance of adversarial examples to input transformation.

§ 2.1 ADVERSARIAL ATTACK TECHNIQUES

An adversary carefully crafts adversarial examples by adding imperceptible, human-unnoticeable modifications to the original clean input. The target model then predicts this adversarial example as an attacker-specified label (targeted attack) or as arbitrary incorrect labels (untargeted attack). Most adversarial attacks require that the $L_p$ norm of the added modifications not exceed a threshold parameter $\varepsilon$. Different adversarial attack techniques have been proposed; we describe six common attack methods below.
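Before the individual descriptions, the two gradient-based methods used most heavily in this paper, FGSM and BIM, can be sketched on a toy model. This is a minimal illustration only: the logistic "classifier", its fixed weights `W`, `B`, and the input values are hypothetical stand-ins for a trained DNN, not the paper's experimental setup.

```python
import math

# Toy differentiable "model": logistic regression on 3 features.
# W and B are hypothetical fixed weights standing in for a trained DNN.
W = [1.5, -2.0, 0.5]
B = 0.1

def predict(x):
    # Model's confidence that the label is 1.
    z = sum(wi * xi for wi, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def loss_grad(x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x.
    return [(predict(x) - y) * wi for wi in W]

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(x, y, eps):
    # FGSM: one step of size eps along the sign of the input gradient.
    g = loss_grad(x, y)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

def bim(x, y, eps, alpha, iters):
    # BIM: repeated small FGSM steps, clipped to the eps-ball around x.
    adv = list(x)
    for _ in range(iters):
        adv = fgsm(adv, y, alpha)
        adv = [min(max(ai, xi - eps), xi + eps) for ai, xi in zip(adv, x)]
    return adv

x0, y0 = [0.2, 0.4, -0.1], 1          # clean input, true label 1
x_fgsm = fgsm(x0, y0, 0.3)
x_bim = bim(x0, y0, 0.3, 0.1, 5)      # confidence in the true label drops
```

Both attacks push the input in the direction that increases the loss; BIM's clipping keeps the total perturbation inside the $\varepsilon$-ball, matching the $L_\infty$ constraint described above.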
Fast Gradient Sign Method (FGSM) [2]. The intuition of FGSM is that the adversary can modify the input so that the change direction is completely consistent with the direction of the gradient, making the loss function increase at the fastest rate. Such changes have the greatest impact on the classification results, causing the neural network to misclassify the modified input.

Basic Iterative Method (BIM) [15]. This is a simple extension of FGSM. The basic idea of BIM is to apply FGSM for several iterations with a small step size per iteration. The number of iterations is determined by $\min(\varepsilon + 4, 1.25\varepsilon)$.

DeepFool [16]. DeepFool is based on the assumption that models are locally linear, so that a polyhedron can separate the individual classes. The DeepFool attack searches for adversarial examples with minimal perturbation within a specific region using the $L_2$ distance. One big advantage of DeepFool is therefore that it can automatically determine the optimal perturbation threshold $\varepsilon$.

Decision-Based Attack [17]. The decision-based attack starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. It relies only on the model's final decision. At each step, a perturbation is sampled from a proposal distribution, which reduces the distance of the perturbed image towards the original input. The attack finds progressively smaller adversarial perturbations according to a given adversarial criterion, finally generating an adversarial example with little disturbance near the classification boundary.

HopSkipJump Attack [18]. HopSkipJump Attack is an algorithm based on a novel estimate of the gradient direction using binary information at the decision boundary.
Different from the Decision-Based Attack, which needs a large number of model queries, HopSkipJump Attack requires significantly fewer model queries and less generation time. Moreover, in HopSkipJump Attack, the perturbations are used to estimate a gradient direction, addressing the inefficiency of the Boundary Attack.

Projected Gradient Descent (PGD) [19]. The PGD attack initializes the search for an adversarial example at a random point within the allowed norm ball and then runs several iterations of the basic iterative method [15] to find an adversarial example.

§ 2.2 ADVERSARIAL EXAMPLE DEFENSE TECHNIQUES

Existing approaches for defeating adversarial examples fall mainly into two categories, as described below.

Adversarial Training. Szegedy et al. [2] proposed training the neural network on a mixed dataset of adversarial examples and original clean samples, so that the new model is resistant to those adversarial examples. However, Moosavi-Dezfooli et al. [20] showed that an adversary can still generate new examples to fool the defended model.

Adversarial Example Detection. Instead of enhancing the models, these approaches aim to detect adversarial examples. One typical solution is de-noising. Mustafa et al. [21] proposed a wavelet reconstruction algorithm that maps adversarial examples lying outside the manifold region back onto the natural images' manifold region through a deep image reconstruction network; it effectively restores the classifier's normal discriminability. Hinton et al. [22] adopted the capsule network's reconstruction process to detect adversarial examples automatically.

§ 2.3 TRANSFORMATION AND DISTORTION OF ADVERSARIAL EXAMPLES

Most neural networks for image classification are trained on images that have undergone JPG compression, so the JPG subspace contains the original data subspace.

Dziugaite et al.
[14] find that perturbations of natural images (adding scaled white noise or randomly corrupting a small number of pixels) are almost certain to move an image out of the JPG subspace and, therefore, out of the data subspace. Adversarial examples can therefore induce the classification network to give wrong classification results. However, when the degree of disturbance is small, the pixel disturbance superimposed on the original image is also small, which means these disturbance values are not robust to image compression, storage, and transmission. This pixel loss is why image transformation or distortion can defeat adversarial examples; preserving the pixel perturbation is therefore the key to the problem.

§ 3 TRANSFERRING AND RELOADING OF ADVERSARIAL EXAMPLES

We study popular image formats and approaches to adversarial attacks and conclude that image transferring and reloading can significantly reduce the success rate of adversarial attacks.

§ 3.1 ROOT CAUSE

There are two reasons why image transferring and reloading can deactivate adversarial examples. First, in an adversarial image generated with existing approaches, each pixel is usually represented as a float value. When we store the image on disk, the pixels are converted to an int type to save space. This accuracy loss can make the adversarial example ineffective when we reload it from the disk. We find that the mainstream image formats (BMP, JPEG, and PNG) all perform such pixel conversion. Second, when we transfer an image to a different platform over a network, the image is usually compressed to save network traffic. For instance, when we use the WeChat application to send pictures from a smartphone to a laptop, we find that the application compresses the pictures at an 80% compression rate by default.
Although this conversion and compression have a human-unnoticeable impact on the images, they can significantly affect the success rate of adversarial attacks. The adversary's goal is to find the smallest perturbation that causes the model to classify the image into an attacker-specified category. Common techniques usually move the original clean samples towards the classification boundary and stop as soon as the samples cross it, to keep the added perturbation small. Adversarial examples therefore place very high precision requirements on their pixel values. The small changes caused by image reloading or transferring can move the adversarial images into classes different from the one the adversary desires, rendering the adversarial examples ineffective. Figure 1 illustrates the adverse effects of image reloading and image format conversion on an adversarial example. Below we conduct a set of experiments to validate those effects empirically.

Figure 1: Red dots represent data, and the gray line represents the hyperplane that can separate individual classes. The gray dots represent the inner boundary of the adversarial examples. The green dot represents a specific adversarial example. The yellow dot shows that reloading can project this adversarial example back into the original sample space.

§ 3.2 EXPERIMENTS

§ 3.2.1 IMPACT OF IMAGE RELOADING

We first empirically check the effectiveness of adversarial examples after being saved and reloaded.

Figure 2: Pixel values before and after saving/reloading

Precision loss. We generate a $3 \times 3$ image and add to each pixel value a random perturbation $q$ between 0 and 1. We then save the image in three different formats (JPG, BMP, PNG) and reload it into memory. All operations are performed under Windows 10.
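The float-to-int precision loss this experiment probes can be simulated in a few lines. The 3×3 pixel values below are made up for illustration, and real JPG saving additionally applies lossy DCT compression beyond the decimal truncation modeled here.

```python
import random

random.seed(0)

# A hypothetical 3x3 grayscale "image": integer base pixels plus a
# fractional adversarial perturbation q in (0, 1), as in the experiment.
base = [[50, 120, 200], [10, 80, 160], [30, 90, 240]]
adv = [[p + random.random() for p in row] for row in base]

def save_truncate(img):
    # Models the observed JPG behavior: decimals are discarded.
    return [[int(p) for p in row] for row in img]

def save_round(img):
    # Models the observed PNG/BMP behavior: decimals are rounded off.
    return [[int(round(p)) for p in row] for row in img]

trunc = save_truncate(adv)
rounded = save_round(adv)

# Truncation throws away the entire sub-integer perturbation:
assert trunc == base
```

Rounding keeps at most one grey level of each sub-integer perturbation, which is why perturbations smaller than half a pixel step cannot survive a save/reload cycle in any of the three formats.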
Figure 2 shows the pixel values of the original image (2a) and the reloaded JPG (2b), PNG (2c), and BMP (2d) images, respectively. Each image format loses precision due to the type conversion from float to int: the JPG format directly discards the decimals, while the PNG and BMP formats round them off. Although this quantization causes no visually perceptible change to the image, it can affect the results of adversarial attacks, as these attacks require precise pixel-level perturbations. We demonstrate the effects below.

Effectiveness of adversarial examples. We measure the performance of adversarial examples after being reloaded or transferred. We select six commonly used adversarial attack approaches: Decision-Based Attack [17], HopSkipJump Attack [18], Deepfool [16], BIM [15], FGSM [2], and PGD [19], and generate adversarial examples with each. Decision-Based Attack, HopSkipJump Attack, and PGD use the ResNet50 classifier; Deepfool uses the ResNet34 classifier; BIM and FGSM use the VGG11 classifier. All adversarial examples are tested with the classifier used at generation time.

We find that all six adversarial attack methods judge whether an attack succeeds by measuring the classification number and confidence of the adversarial example at generation time. However, the classification number and confidence at this point are not authoritative, because the model is not classifying a real image: all six methods feed the generated NumPy array into the model (for example, ResNet50) rather than the picture itself. That is, at this point no image file of the adversarial example has yet been produced. To test the effectiveness of the adversarial examples, we use cv2.imwrite and plt.savefig to save the adversarial examples to local disk. Next, we use the same model (for example, ResNet50) to load the adversarial examples saved locally.
In this paper, we refer to the above behavior as "Reloading."

We also find that when images are transmitted through instant-messaging software, the provider compresses them to save bandwidth. This results in pixel loss, which is detrimental to adversarial examples generated with subtle perturbations. For example, when we use WeChat to send an image to a friend, the friend sees a compressed image that costs only a small amount of traffic. Instead of clicking the "download the original" button, we save the compressed image locally and use the above model to classify it. We refer to this process as "Transferring."

Figures 3 and 4 illustrate the adversarial examples' confidence values after being reloaded and transferred. Different colors represent different classification numbers, the height of each column represents confidence, and each block shows the six algorithms from left to right: Decision-Based Attack [17], HopSkipJump Attack [18], DeepFool [16], BIM [15], FGSM [2], and PGD [19]. The initial image is correctly classified with high confidence under all six algorithms. In addition, every algorithm's adversarial examples are classified into other categories, meaning all six algorithms' adversarial examples can deceive the classification models into giving false results.

Figure 3: Classification number and confidence of adversarial examples after being reloaded and transferred.

Figure 4: Classification number and confidence of adversarial examples after being reloaded and transferred, for another picture.

Surprisingly, we find that regardless of whether the adversarial examples are saved as JPG, PNG, or BMP, most of them are classified as the original clean image once reloaded or transferred, some even with high confidence.
As reflected in the figures, the images after Reloading or Transferring are classified as the original clean image (shown in the same color).

To show this phenomenon more directly, Tables 1 and 2 report two further groups of Reloading and Transferring experiments. Each entry gives the classification number and, in brackets, the confidence. Many of the adversarial examples generated by the six adversarial attacks cannot maintain their attack ability after being reloaded or transferred: they are classified under the original clean samples' labels (such as 90 and 129). To verify that adversarial examples with high confidence also suffer from the Reloading and Transferring problems, we conduct additional experiments, with results in Table 3.

The adversarial examples of Picture1 ~ Picture4 have high confidence, as shown in Figure 5; yet after being reloaded or transferred, a large fraction of them are classified as the original clean samples (Table 3), proving that high-confidence adversarial examples are affected as well.

All data in Tables 1 to 3 are derived from the ResNet50 model.

Cross Validation. Instead of using the same model to verify adversarial examples' effectiveness, we conduct two sets of cross-validation experiments: one uses the Reloaded images, the other the Transferred images. The classification number of the initial clean image is 129. The classification numbers of its adversarial examples generated by the six adversarial algorithms are no longer 129, meaning the adversarial attacks succeed (not shown in Table 4).
We feed the adversarial examples generated by algorithm A into algorithm B after they are Reloaded or Transferred, to cross-verify their adversarial effectiveness. Table 4 shows the resulting classification numbers and confidence under the other algorithms.

Figure 5: Adversarial examples generated from Picture1 ~ Picture4

Clearly, whether Reloaded or Transferred, the adversarial examples lose their effectiveness under both their own and the other adversarial algorithms. After WeChat transmission, because images are compressed in transit, four additional entries in the table are recovered as clean samples.

Multiple attacks. In this section, we use the existing adversarial examples as input and conduct further adversarial attacks on them. The new adversarial examples are then tested for effectiveness after being reloaded. The results are shown in Table 5.

As Table 5 shows, even if we feed the generated adversarial examples into another generation algorithm, reloading and transferring still severely reduce their adversarial effectiveness. Apart from one case where generation across models failed and one example misclassified as classification number 533, all other adversarial examples are classified under the initial clean sample's classification number, 129.

Together, these results show that Reloading and Transferring significantly reduce the effectiveness of adversarial attacks, for single attacks, cross attacks, and multiple attacks alike.

§ 3.3 SPECTRUM ANALYSIS

Next, we perform spectrum analysis on the adversarial examples used in Table 1 and Table 2.

The spectrum analysis results are shown in Figure 6. From left to right are the initial images and the adversarial examples generated by the BIM, FGSM, and Deepfool algorithms.
We find that the Deepfool algorithm preserves the original clean sample's appearance to the greatest extent. In contrast, the FGSM algorithm introduces more noise points, which is reflected in the spectrum map: FGSM produces a more uniform spectrum with more low-frequency components. This is why the adversarial examples generated by FGSM better resist the Reloading and Transferring losses in Table 1 and Table 2.

The wavelet-transform spectrum diagrams of the original picture and of the BIM, FGSM, and Deepfool adversarial examples are shown in Figure 7, from left to right. In the wavelet domain, the original clean image is closest to the Deepfool adversarial example in both low- and high-frequency components: Deepfool carries out the attack with minimal perturbation and is correspondingly least able to maintain its adversarial effect. FGSM exerts a larger disturbance, so its high- and low-frequency components differ considerably from the original clean image, preserving the adversarial effect relatively well.

§ 4 A ROBUST APPROACH TO GENERATING ADVERSARIAL EXAMPLES

As discussed in Section 3, adversarial examples generated by existing techniques become ineffective after being reloaded or transferred. In this section, we propose an efficient and robust approach, Confidence Iteration (CI), to produce adversarial examples that are resistant to Reloading and Transferring. CI is generic: it can be integrated with all existing adversarial example techniques to improve their robustness while maintaining their advantages.

Our CI approach's intuition is that an adversarial example's confidence score for the attacker-specified classification number reflects the example's resistance to input reloading or transferring.
We use an existing technique (e.g., FGSM, BIM) to generate an adversarial example, save it to local disk, and measure its confidence score for the target class (which in itself involves reloading the image). If the confidence score is higher than a threshold, we accept this image. Otherwise, we continue to iterate, save the result locally (or transfer it through WeChat transmission), and measure the target class's confidence score after reloading, until it meets the confidence requirement or the iteration count exceeds its threshold. When the confidence value $c$ meets the expected requirement $p$, the adversarial image saved to the hard disk has acquired some resistance to pixel-value variation. Moreover, the repeated gradient ascent over multiple iterations keeps the pixel-value changes in a consistent direction; after many iterations, the fractional parts of some pixel values are promoted into the integer part and can no longer be discarded. To measure whether an adversarial example remains effective after image distortion, we adopt the wavelet reconstruction algorithm [21]: we first process adversarial examples with a wavelet denoising algorithm and then feed the denoised image into ESRGAN, a super-resolution reconstruction network. Adversarial examples with weak attack ability are classified as the initial clean samples after this processing, meaning their attack ability has been lost. By testing the adversarial examples processed by the wavelet reconstruction algorithm, we can measure the generated adversarial examples' robustness and effectiveness.
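The save-reload-check loop just described can be sketched in a few lines. Everything below is a toy stand-in: the logistic "classifier", its weights, and a rounding-based `reload_from_disk` simulating the lossy save/reload (or WeChat transfer) step are all hypothetical, and the update uses the gradient sign, FGSM-style, for simplicity.

```python
import math

# Toy stand-ins for the pieces CI assumes: a differentiable classifier f,
# its input gradient, and a lossy "save to disk then reload" step.
W, B = [1.5, -2.0, 0.5], 0.1

def confidence_true(x):
    # f's confidence in the true label (y = 1).
    z = sum(w * v for w, v in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def grad_loss(x):
    # d(cross-entropy)/dx for y = 1.
    return [(confidence_true(x) - 1.0) * w for w in W]

def reload_from_disk(x):
    # Models the precision loss of saving + reloading (here: rounding to
    # 2 decimals; a real pipeline would round to 8-bit pixel values).
    return [round(v, 2) for v in x]

def confidence_iteration(x, eps, t_max, p):
    """Iterate until the *reloaded* example is adversarial with confidence >= p."""
    t, x_real = 0, reload_from_disk(x)
    c = 1.0 - confidence_true(x_real)       # confidence of the wrong label
    while t <= t_max and c <= p:
        g = grad_loss(x_real)
        x_adv = [v + eps * (1 if gi > 0 else -1) for v, gi in zip(x_real, g)]
        t += 1
        x_real = reload_from_disk(x_adv)    # re-save, re-load, re-check
        c = 1.0 - confidence_true(x_real)
    return x_real, c, t

x_ci, c, t = confidence_iteration([0.2, 0.4, -0.1], eps=0.2, t_max=10, p=0.9)
```

The key design point is that the confidence check is always made on the reloaded copy, never on the in-memory array, so a returned example has already survived at least one quantization pass.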
Table 1: Classification number and confidence of an adversarial example after being reloaded and transferred. Entries give the classification number and, in brackets, the confidence.

| | Decision | HopSkipJump | Deepfool | BIM | FGSM | PGD |
|---|---|---|---|---|---|---|
| Original images | 90 (74.060%) | 90 (74.060%) | 90 (99.811%) | 90 (99.582%) | 90 (97.312%) | 90 (100.000%) |
| Adversarial images | 852 (15.062%) | 84 (48.441%) | 95 (49.315%) | 95 (61.163%) | 735 (44.672%) | 318 (100.000%) |
| JPG reloading | 90 (72.291%) | 90 (69.921%) | 90 (52.677%) | 90 (46.958%) | 84 (99.217%) | 90 (99.651%) |
| JPG transferring | 90 (63.671%) | 90 (93.686%) | 90 (52.985%) | 90 (47.276%) | 84 (99.402%) | 90 (96.650%) |
| PNG reloading | 84 (52.540%) | 84 (83.981%) | 90 (43.454%) | 90 (45.934%) | 84 (99.421%) | 90 (94.402%) |
| PNG transferring | 90 (82.835%) | 90 (50.656%) | 90 (80.671%) | 90 (36.895%) | 84 (89.627%) | 90 (99.985%) |
| BMP reloading | 84 (52.540%) | 84 (83.981%) | 90 (43.454%) | 90 (45.934%) | 84 (99.421%) | 90 (94.402%) |
| BMP transferring | 90 (82.835%) | 90 (50.656%) | 90 (80.671%) | 90 (36.895%) | 84 (89.627%) | 90 (99.985%) |

Table 2: Classification number and confidence of another adversarial example after being reloaded and transferred

| | Decision | HopSkipJump | Deepfool | BIM | FGSM | PGD |
|---|---|---|---|---|---|---|
| Original images | 129 (89.531%) | 129 (89.531%) | 129 (86.374%) | 129 (71.917%) | 129 (91.494%) | 129 (98.182%) |
| Adversarial images | 852 (12.363%) | 132 (36.282%) | 128 (48.604%) | 128 (98.746%) | 915 (5.642%) | 128 (97.858%) |
| JPG reloading | 132 (35.742%) | 129 (65.183%) | 129 (60.726%) | 129 (87.825%) | 132 (51.324%) | 129 (81.000%) |
| JPG transferring | 132 (34.461%) | 129 (58.947%) | 129 (88.792%) | 129 (85.496%) | 132 (30.130%) | 129 (98.601%) |
| PNG reloading | 132 (53.513%) | 129 (64.022%) | 129 (53.670%) | 129 (85.081%) | 132 (53.185%) | 128 (38.533%) |
| PNG transferring | 129 (36.472%) | 129 (77.169%) | 129 (81.671%) | 129 (81.244%) | 129 (41.192%) | 129 (89.833%) |
| BMP reloading | 132 (53.513%) | 129 (64.022%) | 129 (53.670%) | 129 (85.081%) | 132 (53.185%) | 128 (38.533%) |
| BMP transferring | 129 (36.472%) | 129 (77.169%) | 129 (81.671%) | 129 (81.244%) | 129 (41.192%) | 129 (89.833%) |

Figure 6: Spectrum analysis of pictures in Table 1 and Table 2

Figure 7: Wavelet transform spectrum diagram of the original picture and adversarial examples of BIM, FGSM, and Deepfool from left to right

Table 3: Classification number and confidence of adversarial examples generated from Picture1 $\sim$ Picture4 (FGSM) after being reloaded and transferred

| | Picture1 | Picture2 | Picture3 | Picture4 |
|---|---|---|---|---|
| Original images | 106 (94.478%) | 288 (90.196%) | 173 (92.451%) | 376 (99.613%) |
| Adversarial images | 343 (84.336%) | 293 (95.005%) | 104 (86.118%) | 371 (69.347%) |
| JPG reloading | 106 (99.904%) | 288 (49.574%) | 104 (28.730%) | 371 (34.062%) |
| JPG transferring | 106 (99.953%) | 288 (54.895%) | 104 (31.623%) | 371 (33.070%) |
| PNG reloading | 106 (99.685%) | 608 (26.309%) | 173 (49.878%) | 376 (36.097%) |
| PNG transferring | 106 (99.807%) | 390 (47.548%) | 173 (47.880%) | 371 (66.135%) |
| BMP reloading | 106 (99.685%) | 608 (26.309%) | 173 (49.878%) | 376 (36.097%) |
| BMP transferring | 106 (99.807%) | 390 (47.548%) | 173 (47.880%) | 371 (66.135%) |

Table 4: Classification number and confidence of adversarial examples after being reloaded and transferred, using cross-validation

| | Original clean image | Deepfool | BIM | FGSM |
|---|---|---|---|---|
| Deepfool reloading | 129 (89.16%) | 129 (72.14%) | 129 (86.31%) | 128 (57.74%) |
| Deepfool transferring | 129 (91.25%) | 128 (77.49%) | 129 (89.12%) | 129 (67.91%) |
| BIM reloading | 128 (72.14%) | 128 (77.30%) | 129 (60.48%) | 129 (65.53%) |
| BIM transferring | 129 (91.81%) | 128 (78.98%) | 141 (47.95%) | 129 (85.49%) |
| FGSM reloading | 132 (82.26%) | 129 (40.96%) | 129 (14.19%) | 129 (58.89%) |
| FGSM transferring | 129 (65.64%) | 129 (42.91%) | 129 (15.98%) | 129 (88.35%) |
| PGD reloading | 129 (60.72%) | 129 (87.82%) | 129 (89.18%) | 129 (81.00%) |
| PGD transferring | 129 (66.12%) | 129 (68.06%) | 129 (89.11%) | 129 (98.60%) |

Algorithm 1: Confidence Iteration

Input: a classifier $f$ with loss function $J$; a real example $\mathbf{x}$ with ground-truth label $y$; the perturbation size $\varepsilon$; the iteration limit $T_{\max}$; the confidence threshold $p$.
Output: the iteration count $T$ and an adversarial example $\mathbf{x}^*$ with $\|\mathbf{x}^* - \mathbf{x}\|_{\infty} \leq T\varepsilon$.

1. $T = 0$; $\mathbf{x}_T^* = \mathbf{x}$
2. Save $\mathbf{x}_T^*$ as a picture $\mathbf{x}_T^{\text{real}}$ on the local hard drive (or transform it through WeChat transmission)
3. Input $\mathbf{x}_T^{\text{real}}$ to $f$ and obtain the confidence $c$ and the gradient $\nabla_{\mathbf{x}} J(\mathbf{x}_T^{\text{real}}, y_{\text{true}})$
4. While $(T \leq T_{\max})$ and $(c \leq p)$:
5.     $\mathbf{x}^* = \mathbf{x}_T^{\text{real}} + \varepsilon \cdot \nabla_{\mathbf{x}} J(\mathbf{x}_T^{\text{real}}, y_{\text{true}})$
6.     $T = T + 1$; $\mathbf{x}_T^* = \mathbf{x}^*$
7.     Save $\mathbf{x}_T^*$ as a picture $\mathbf{x}_T^{\text{real}}$ on the local hard drive (or transform it through WeChat transmission)
8.     Reload $\mathbf{x}_T^{\text{real}}$ into $f$ and obtain the confidence $c$ and the gradient $\nabla_{\mathbf{x}} J(\mathbf{x}_T^{\text{real}}, y_{\text{true}})$

Algorithm 1 summarizes our CI approach. We first input the clean image, generate its adversarial example, and save the adversarial example locally. The local adversarial example is then reloaded into the classification model, and we judge whether the adversarial attack succeeds.
If the adversarial attack succeeds, we obtain the confidence value $c$ of the attack by reloading the adversarial example from the hard disk into the classification model. We then compare the expected confidence threshold $p$ with the current confidence $c$. If the current confidence $c$ is less than the expected threshold $p$ and the current iteration number $T$ is less than the iteration limit $T_{\max}$, we run another round of the Confidence Iteration algorithm, save the generated adversarial example locally, and compare $c$ and $p$ again. The whole process stops only when $c$ exceeds $p$ or $T$ reaches $T_{\max}$.

Precisely because the CI algorithm includes a save, reload, and confidence-judgment process, it can be attached to the back end of any adversarial example generation algorithm to enhance the adversarial example's robustness against reloading and transferring.

§ 5 EVALUATION

In this section, we conduct experiments to validate the effectiveness of our proposed approach.

§ 5.1 CONFIGURATIONS

Dataset. To better reflect the real-world setting, we implement a crawler to collect images from websites instead of using existing image datasets. We consider the Inception v3 model [23] and restrict the scope of crawled images to the categories recognized by this model. We filter out crawled images that the Inception v3 model cannot correctly recognize and finally establish a dataset of around 1300 clean images with correct labels.

Implementations. We consider two adversarial example techniques, FGSM and BIM; our CI approach is generic and can be applied to other techniques as well. We select VGG11 [24] as the target model and implement these techniques with the CI algorithm using the PyTorch library. We set the iteration upper limit $T_{\max}$ to 6 and the confidence threshold $p$ to 70%.

Metrics.
We adopt two metrics to evaluate the effectiveness of adversarial examples: (1) the success rate, defined in Equation 1a, where $N_s$ is the number of adversarial examples misclassified by the target model $f$ whose clean images are classified correctly, and $N_f$ is the number of adversarial examples predicted as the corresponding clean image's label; (2) the average confidence score, defined in Equation 1b, where $p_i$ is the highest false-classification confidence given by the target model. (Here we do not count adversarial examples that the model classifies under their clean sample's label.)

$$
P_{adv} = \frac{N_s}{N_s + N_f} \tag{1a}
$$

$$
C_{\text{ave}} = \frac{1}{N_s}\sum_{i=1}^{N_s} p_i \tag{1b}
$$

Table 5: Classification number and confidence of adversarial examples after multiple attacks, for an image with classification number 129

| | 0 | Deepfool | BIM | FGSM | PGD |
|---|---|---|---|---|---|
| Deepfool | 129 (60.72%) | 129 (71.88%) | 129 (95.91%) | Unsuccessful generation | 129 (92.27%) |
| BIM | 129 (89.82%) | 129 (92.32%) | 129 (99.37%) | 129 (98.65%) | 129 (90.22%) |
| FGSM | 129 (89.18%) | 129 (72.24%) | 533 (31.53%) | 129 (71.53%) | 129 (55.72%) |
| PGD | 129 (81.00%) | 129 (82.52%) | 129 (88.24%) | 129 (99.71%) | 129 (55.72%) |

§ 5.2 RESULTS AND ANALYSIS

Adversarial example generation. We first show the adversarial examples generated by FGSM, CI-FGSM, BIM, and CI-BIM in Figure 8. Similar to FGSM and BIM, our CI-FGSM and CI-BIM produce adversarial examples with imperceptible perturbations that fool the target deep learning model.

Figure 8: Adversarial examples generated by FGSM, CI-FGSM, BIM, and CI-BIM

Attack effects after image reloading. Using the different approaches, we generate a large number of adversarial images and save them to the local disk.
Then we reload them and feed them into the target model for prediction. We measure the success rates and average confidence scores of the four algorithms in Table 6. FGSM has a lower success rate and confidence score, as it applies its perturbation in a single stride. In contrast, BIM has higher attack performance. For our CI-BIM, although the confidence score is slightly lower than BIM's, the success rate is much higher. Our CI approach is more efficient when $\varepsilon$ is smaller.

Different parameters lead to different effects of the CI approach. Figure 9 shows the success rate of adversarial examples from the CI algorithm with different thresholds $p$ . We can see that increasing $p$ significantly improves the attack performance. To boost adversarial attacks, conventional approaches require a large disturbance hyper-parameter $\varepsilon$ (e.g., 16) and a large number of iterations $T$ (e.g., 10). Our CI approach only needs to increase the threshold while keeping smaller values of $\varepsilon$ (${0.05} \sim {0.2}$) and $T$ (e.g., 6) to achieve similar attack effects.

Resistance against Detection of Adversarial Examples. In addition to defeating input transformation, our CI approach is better at evading adversarial example detection. We use the wavelet reconstruction algorithm [21] as the defense method to measure the performance of the different adversarial algorithms. After being processed by the wavelet reconstruction algorithm, adversarial examples with weak attack capabilities are identified by the classification model as the initial clean image's label. As the name implies, we first process adversarial examples with a wavelet denoising algorithm. Then, we feed the denoised image into ESRGAN, a super-resolution reconstruction network.
By detecting the adversarial examples processed by the wavelet denoising algorithm, we measure the generated adversarial examples' robustness. We set the parameter $\varepsilon$ to 0.1 and vary $\delta$ of the wavelet denoising algorithm from 0.01 to 0.1. Figure 10 shows the comparison results. We can clearly see that although the attack performance of BIM is better than FGSM's, the adversarial examples generated by the BIM algorithm are easier to detect as adversarial under the same parameters. In contrast, our CI method has high attack performance and is not easily detected, especially when the detection parameter $\delta$ is small.

Application to other adversarial example techniques. In addition to BIM, our CI approach can also be applied to other adversarial attack algorithms to boost adversarial examples. Figure 11 shows the attack performance of FGSM with CI and compares it with plain FGSM. We can see that the CI approach improves the attack performance of FGSM, and the improvement is more obvious when the parameter $\varepsilon$ is smaller. At the same time, CI-FGSM performs much better than the ordinary BIM algorithm. The CI-BIM algorithm has the best adversarial success rate among the four algorithms, which is easy to understand: when the parameter $\varepsilon$ is small, the FGSM algorithm superimposes a small-step perturbation, the BIM algorithm iterates these small-step perturbations, and the CI-BIM algorithm iterates the whole BIM procedure again until the confidence satisfies the requirement $p$ . This is iteration at different scales. In a sense, our method implements an adaptive-step-size attack. When the parameter $\varepsilon$ is relatively small, the adjustment range of the dynamic step length is larger, which means that our CI-BIM algorithm can find adversarial examples with high attack capabilities over a larger range.
Essentially, the CI-BIM algorithm has higher adversarial attack performance because of its wider search domain for generating more robust adversarial examples.

Table 6: Success rates and average confidence scores of adversarial examples

| Method | success rate, $\varepsilon = {0.1}$ | success rate, $\varepsilon = {0.2}$ | success rate, $\varepsilon = {0.3}$ | confidence score, $\varepsilon = {0.1}$ | confidence score, $\varepsilon = {0.2}$ | confidence score, $\varepsilon = {0.3}$ |
| --- | --- | --- | --- | --- | --- | --- |
| FGSM | 81.4% | 95.8% | 99.2% | 23.3% | 20.3% | 27.0% |
| CI-FGSM | 87.0% | 96.5% | 98.8% | 22.9% | 21.5% | 27.7% |
| BIM | 87.5% | 94.4% | 94.0% | 74.7% | 73.2% | 68.3% |
| CI-BIM | 95.5% | 98.9% | 99.3% | 57.8% | 62.9% | 63.7% |

§ 6 CONCLUSION

In this paper, we evaluate the effectiveness of adversarial examples after being reloaded or transferred. We discover that most mainstream adversarial attacks fail under such input transformation. We then propose a new solution, Confidence Iteration, to generate high-quality and robust adversarial examples. This solution can significantly facilitate other existing attacks, increasing the attack success rate and reducing the detection rate. Future work includes more evaluation of the integration with other attack techniques and leveraging Confidence Iteration to enhance DNN models via testing and adversarial training.
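As a recap of the method evaluated above, the Confidence Iteration loop can be sketched in a few lines. This is a minimal toy illustration, not our PyTorch implementation: `attack_step` and `classify_reloaded` are hypothetical callables standing in for an underlying attack (e.g., BIM) and for the save-to-disk / reload / re-classify step.

```python
def confidence_iteration(attack_step, classify_reloaded, p=0.70, t_max=6):
    """Sketch of the Confidence Iteration (CI) loop: keep refining an
    adversarial example until its reloaded confidence c exceeds the
    threshold p, or the iteration budget t_max is exhausted."""
    adv = attack_step(None)            # generate an initial adversarial example
    for t in range(1, t_max + 1):
        c = classify_reloaded(adv)     # save to disk, reload, re-classify
        if c > p:
            return adv, c, t           # robust enough against reloading: stop
        adv = attack_step(adv)         # otherwise run another attack iteration
    return adv, classify_reloaded(adv), t_max

# Toy stand-ins (illustrative only): each attack step nudges the reloaded
# confidence up by 0.15, starting from 0.45.
state = {"c": 0.30}

def attack_step(adv):
    state["c"] = min(1.0, state["c"] + 0.15)
    return state["c"]

def classify_reloaded(adv):
    return adv  # here the "example" is just its own stored confidence

adv, c, t = confidence_iteration(attack_step, classify_reloaded)
```

The early return on `c > p` is what distinguishes CI from a fixed-iteration attack such as plain BIM: the loop spends iterations only on examples whose reloaded confidence is still below the threshold.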
# Accountability-Aware Design of Voice User Interfaces for Home Appliances

Anonymous Author(s)

## Abstract

The availability of voice-user interfaces (VUIs) has grown dramatically in recent years. As more capable systems invite higher expectations, the conversational interactions that VUIs support introduce ambiguity in accountability: who (user or system) is understood to be responsible for the outcome of user-delegated tasks. When misconstrued, the impact ranges from inconvenience to deadly harm. This project explores how users' accountability perceptions and expectations can be managed in voice interaction with smart home appliances. To explore links between degree of automation, system accountability and user satisfaction, we identified key design factors for VUI design through an exploratory study, articulated them in video prototypes of four new VUI mechanisms showing a user commanding an advanced appliance and encountering a problem, and deployed them in a second study. We found that participants perceived automated systems as more accountable, and were also more satisfied with them.

Index Terms: Human-centered computing-Interface design prototyping; Auditory feedback; User interface design

## 1 INTRODUCTION

Advances in artificial intelligence (AI) are changing how users interact with software agents.
AI-infused systems vary in the level of automation they present to users: they can recommend options, make decisions, communicate with other agents, and adapt to their environments [6, 62]. A Voice User Interface (VUI) is a type of user interface that relies on speech recognition to communicate with users, usually in a conversational style [15] that resembles natural verbal exchange rather than manual clicking or typing.

Assistant-type VUIs are growing in popularity on personal devices. The majority of Americans own a smartphone (81%) or tablet (52%), which today come equipped with Siri, Google Assistant, or equivalent VUIs [61]. One billion devices worldwide running Windows 10 [43] provide access to Cortana, Microsoft's voice assistant. Access is different from use, but devices that exclusively accept voice input, e.g., Amazon Echo and Google Home, are on the rise: over 100 million Alexa-enabled units had been sold as of January 2019 [9]. It is clear that many consumers are newly choosing and trying voice interaction in their everyday lives [2]. With this prevalence, we can go beyond first-order traits of this modality (hands-free, natural language) to examine factors such as social consequences.

Often, VUI technologies are used to request information or to trigger applications [7]: failure is annoying but not dire. However, as voice recognition technology improves and systems become capable of greater automation, users hold VUI systems to human-like standards of behavior. During VUI conversations with smart devices, users seem to anticipate accountability along similar lines as they would with humans; e.g., Porcheron et al. describe users expecting appropriate responses, both verbal and actionable, from Alexa, and expressing dismay when these are not provided. If the system makes an utterance, they react as they would to a human utterance [52].
However, AI systems that enable automation typically work under uncertainty, balancing false-negative and false-positive errors with potentially confusing and disruptive results [6]. The impact widens as standalone systems become platforms that control many home technologies through the Internet of Things (IoT) [7, 36, 45]. With Google Home, users can adjust lighting and set the thermostat [33], but also interact with systems invoking larger consequences. Smart washing machines can ruin clothes; personal assistant devices can spend money online. A semi-autonomous car can crash and kill in a moment of ambiguity over who is in charge.

So, what happens in the case of a bad outcome? Does the user hold the system responsible, or themselves?

Accountability can be defined as who (e.g., user or system) is obligated or willing to be responsible or accountable for the satisfactory execution of a task, including one that has been delegated [3]. It is fundamental to how people conceptualize their actions and react to outcomes in a social context, by considering risk, uncertainty, and trust when taking or delegating ownership of outcomes [13]. Both a societal and an individual concept, accountability differs subtly from responsibility: it is possible to be responsible (in charge) without being accountable, if you take action but not ownership of the results.

Can interaction design mediate this balance when it is important that the user retain accountability for a delegated task? We focus on perceived accountability (hereafter accountability), which varies with user experience and expectations as well as situation (e.g., the degree of automation actually available), and therefore can vary by instance [62]. These factors impact a user's perception of system capability [18]. Because of these interlinked perceptions, we posit that through design we can manage a user's perception of a system's automation and influence their notion of accountability.
Research Questions: We consider two questions in the context of VUI interaction with advanced home appliances:

RQ1: What design factors impact user perception of system accountability?

RQ2: How does automation influence perceived accountability and user satisfaction? Can interface design mediate this influence?

Approach: An invitation from industry colleagues to investigate user experience with voice-controlled smart appliances led us to consider VUIs in terms of user types, social roles, privacy and added value. In a first exploratory study, accountability perception emerged as an important and understudied factor. We used insights from this exploration to propose primary design factors that can influence accountability during the user's interaction with the system (RQ1).

To go deeper, we studied more carefully how varying the level of system automation could influence user perception of system accountability. Our work is in the tradition of HCI-community-proposed guidelines and recommendations for interaction with AI-infused systems. Early work by Norman [48] and Höök [30] sets out guidelines for avoiding unfavourable actions during interaction with intelligent systems, aiming to safeguard the final outcome by managing autonomy and requiring verification. Horvitz proposed a mixed-initiative method to balance automation with direct manipulation by users [29]. While these works discuss cautionary actions to avoid potential problems, their impact on users' perception of accountability in case of failure is unexplored.

We constructed video prototypes [68] featuring different VUI interaction scenarios as design probes to provoke open dialogues with participants about accountability for each interaction scenario. These prototypes present four types of VUI mechanisms that vary the level of automation, all set in a home environment, enabling participants to compare accountability delegation.
Through in-person interviews and online questionnaires, we obtained participants' rankings of accountability and their satisfaction with each mechanism in light of the task failure illustrated. We analyzed this data in relation to the represented system automation.

## In contribution, we:

- Propose primary design factors of accountability in voice user interactions with complex technology: task complexity, command ambiguity and user classification.

- Demonstrate the ability to direct accountability perception through VUI design.

- Provide insights on the relationship between user satisfaction and system automation.

## 2 RELATED WORK

### 2.1 Automation, Interaction & Accountability

Internet of Things and Smart Home Appliances: IoT technology connects users and environmentally-embedded "smart" objects, from individual gadgets (like smartphones and smartwatches) [31] and communal appliances (smart speakers, thermostats, vacuum robots) to semi-autonomous systems and sensor networks [42, 67]. The exploding number of IoT devices and the complexity of controlling them can negatively impact user attitudes [49]. We note that while considerable smart technology is available today, our study scenarios imbue VUIs with slightly futuristic decision-making ability.

Intelligent user interfaces (IUI): In addition to sensor capacity and the IoT, some smart home appliances benefit from embedded IUIs. IUIs simplify interactions through AI capabilities such as adaptation [32] and the ability to respond to natural commands and queries with apparent social intelligence. Suchman's discussion of situated action highlights the need for context-dependent responses in HCI [63]. Although situated-action models result in versatile and conversational systems, this approach is based on probabilistic behaviour, which is prone to unexpected errors.
Task delegation to AI: Studies of AI-human interaction have focused primarily on system capabilities such as reliability and cost of decisions (e.g., [51]). Considering human preferences and perceptions, Lubras et al. summarize the literature on shared control between humans and AI and propose four factors for AI-task delegation: risk, trust, motivation, and difficulty. Emphasizing human perception, their research supports human-in-the-loop design and a low preference for automation [41]. However, the user's accountability perception generally does not appear in these studies.

Explainable-accountable AI: For both usability and ethical reasons, algorithmically derived decisions should be explainable [19, 34, 58]. Systems utilizing them should provide accounts of their behaviour and inform users about sensor information, resulting decisions, and likely consequences [8, 20]. Some argue for policies on automated decision making [22], and a few governments have established regulations that require AI systems to provide users with explanations of any algorithmic decisions [24]. Explainable AI enables humans to make sense of machine learning models and understand the rationales behind AI decisions. Abdul et al. reviewed over 12,000 papers from diverse communities on trends in explainable-accountable AI [4]. They highlight a lack of HCI research on practical, usable, and effective AI solutions. Other groups have found that classic UX design principles may be insufficient for AI-infused products, and that we need to develop guidelines specifically for human-AI interaction [6] - a motive of the present work.

Automation and accountability: Previous work on accountability has mainly focused on social accountability among humans, and how people justify or explain their judgments [10, 65].
Some, however, investigate accountability during collaborative decision making between a human and an intelligent agent [18, 46, 57]. Skitka et al. show that holding users accountable for their performance reduces automation bias (too much trust in automated decision makers), improves performance and reduces errors [60]. Suchman shows how the agency attributed to a human or a machine is constructed during an interaction [63]. Others have investigated how users negotiate and interpret their agency while interacting with VUIs [38, 59].

However, no study has yet shown how to direct the perception of accountability through design. Accountability research in HCI goes beyond usability and deserves significantly more attention from intelligent user interface designers, including VUI designers.

Control Capabilities: Building on work by Dourish, Button and Suchman [12, 20, 63], Boos et al. propose that users feel they have control over a system based on its "control capabilities" - specifically, when it is transparent, predictable and can be influenced [10]. The authors further suggest that users who feel in control of an interaction are more likely to consider themselves accountable for the outcome. We aim to determine whether users can identify subtle differences in control capabilities, and whether that affects the accountability of the system.

### 2.2 Voice-User Interfaces

We use Porcheron's definition of a VUI, which specifies interfaces that rely primarily on voice, such as Amazon's Alexa or the Google Assistant [52]. They are always on and can be accessed from room-level distances, which makes them highly "embedded in the life of the home" compared to other technologies. The quality of their human-centered design is imperative.
The union of VUIs and smart home appliances is largely unexplored despite its promise; e.g., IoT is one of the most frequent VUI command categories that users employ in their daily interactions with home assistance devices [7]. While we see this as a great opportunity to incorporate VUIs into home appliance technology, we heed Dourish's advice to "take sociological insights into the heart of the process and the fabric of design" [21].

Our work is distinguished from past efforts on VUI use in everyday life [52] by moving beyond understanding users' perception of accountability to trying to direct it through design.

## 3 EXPLORATORY STUDY

While some design factors have been identified at the boundary of automation and human interaction (trust, state learning, workload, machine accuracy, etc. [50]), we needed specific insights for VUI semi-automated systems. To answer RQ1, we investigated user experience with VUI products relative to non-VUI-controlled but "smart" home products. We did this through interviews (n = 10) and questionnaires (n = 43), recruiting through social media (Facebook, Twitter). The results, briefly summarized here, motivated using VUIs and suggested where accountability matters most.

### 3.1 Methods

Participants: We targeted past purchasers of smart home appliances. Of 43 questionnaire respondents (20/21/2 F/M/unreported), the age range was 25-55 years, from Canada, the USA, Colombia, the UK, China and Australia. All owned or had owned smart home appliances.

Questions: Participants reflected on their experiences with smart home appliances and voice-command technology, compared voice with other input modalities, and considered VUI integration for two hypothetical smart systems: lighting, and a washing machine. They were asked to imagine the functions these systems might fulfill through VUI commands, and to explain any concerns.
### 3.2 Results

Accountability figured strongly in responses, emerging as an under-explored design lever. Results further exposed three factors framing the situational impact of accountability: User Classification, Task Complexity and Command Ambiguity.

Motivations and De-motivations for VUIs: Our participants appreciated VUI speed, convenience (particularly hands-free use), multitasking, shallow learning curve, and natural language. In contrast to human conversations, they wished to minimize interactions. When describing envisioned VUI smart home appliances, we heard that they needed a "machine that can decide for [itself]" [P8]. However, they were concerned about unreliability, hesitating to use VUIs for complex tasks with irreversible outcomes, and concerned about misinterpretation, likely from prior experience.

Factor I - User Classification: We observed primary users, in charge of choice and maintenance, and secondary users reliant on the primary. Consistently, [56] notes that home technology management is not evenly distributed by gender or across the household.

Factor II - Task Complexity: Participants categorized home appliances mainly by interface complexity, not underlying technology. We thus subsequently focused on home appliances with more complicated UIs and non-trivial consequences of failure.

Factor III - Command Ambiguity: While positive overall, participants cited examples of concern, which we categorize as naive access, hidden functionality and open-ended requests. Natural language is inherently ambiguous, requiring the system to make assumptions and decisions, as in human-human interaction. In so doing, accountability can be delegated - important to recognize should something go wrong. We seek design factors that influence this delegation.
## 4 FRAMEWORK

### 4.1 Accountability via "Control Capabilities"

We explored how VUIs can affect accountability using Boos et al.'s theoretical framework of Control Capabilities (Section 2.1), which is based on the premise that "in order to answer accountability demands [...], certain requirements of control need to be fulfilled" [10]. We framed our experimental study around an extension of this proposition and framework, seeking to verify or disprove it. To the Transparency and Predictability dimensions proposed by Boos et al. [12] we added Reliability because of its prominence in our exploratory study.

Transparency: Transparency can be achieved through executing clear and understandable actions. Several studies recommend improving transparency by providing explanations of the behaviour of AI-empowered systems [28, 35, 40, 53].

Predictability: Predictability can be obtained by producing desired and anticipated outcomes. Human-AI guidelines suggest two points where interactions with an AI should be shaped: over time and when wrong. They advise that during an interaction, a system should convey updates to users regarding future consequences of the system's behaviour, and support invocation of requests as needed [6].

Reliability (added): Reliability can be achieved through delivery of desired outcomes based on given explanations. Trust, a well-studied construct in automated systems, is crucial for long-term adoption [27] and key for voice interaction [11]. To invoke trust, we chose reliability: the quality of performing the correct actions.

### 4.2 User Satisfaction as a Metric

User satisfaction with automation generally improves with reduced cognitive effort. However, a system can avoid accountability by requesting detail, e.g., by providing choices or asking for confirmation. This increases user involvement, at the potential cost of satisfaction.
Measuring user satisfaction as well as accountability perception indicates how well that balance is achieved.

We defined this metric based on principles of measuring customer satisfaction [39, 55], then designed a questionnaire to assess emotional satisfaction by asking about: (a) overall quality (Attitudinal), (b) the extent to which the user's needs are fulfilled (Affective and Cognitive), and (c) the user's feelings (Affective and Cognitive) [1].

## 5 PRIMARY STUDY: METHODS

### 5.1 Overview and Hypotheses

Since humans can manage accountability in their conversations, we surmise that designers should be able to enable this in human-machine interaction. We hypothesize a correlation between automation level (from fully machine-controlled to fully user-controlled) and system accountability. Our goal was to focus on how the level of automation influences both accountability and user satisfaction, which can eventually inform the design of interactive systems.

We chose laundry as our focus task because modern washing machines require more engagement than other appliances (like refrigerators or toasters) and have a plethora of complex settings that can seem impenetrable and can cause confusion and errors. This complexity is what opens possibilities for guiding accountability perception in a dialogue-type interaction. This also aligned with current industry activity: Samsung and LG have both developed washing machines with voice assistants.

We conducted a controlled experiment (15 survey respondents, of whom 8 were also interviewed). Participants watched a series of video sketches [68] showing four levels of automation, in which an individual uses a VUI with a smart washing machine for both simple and complex tasks. In every video, the washing machine fails to fulfill the user's expectations, since accountability is relevant primarily when the system fails.

Participants were instructed to imagine themselves as the user.
We surveyed their perceptions of the washing machine's accountability for each VUI mechanism to obtain quantitative data, followed by open dialogues on accountability, satisfaction and general thoughts about each scenario.

Our hypotheses address the joint effect of system automation and task complexity on system accountability, with the future goal of employing them in balance. We anticipated that:

H1: Increasing users' involvement in decision-making (thereby decreasing system automation) will reduce their perception of system accountability, particularly for high-complexity tasks.

H2: As we increasingly automate task decision-making, user satisfaction will increase.

If these hypotheses are correct, then system automation creates a trade-off between system accountability and user satisfaction. This work explores user perceptions surrounding this trade-off in the context of a VUI interaction. We also investigate the effects of task complexity on accountability and user satisfaction.

### 5.2 VUI Accountability-Directing Mechanisms

To direct users' accountability perception, we conceptualized four VUI mechanisms representing levels of system accountability, based on guidelines for a progression of automation in AI systems [29, 30, 48]: automation, recommendation, instruction, and command. We created video dialogues by following highly cited guidelines [5, 17, 25, 26]. The levels differ primarily in the degree of direct manipulation, automation and information conveyed, and the method of information delivery. We captured the mechanisms in walkthrough-style video prototypes for use in the study task.

Automation presents a straightforward workflow: the user requests an outcome and the VUI notifies them of the action to be taken: e.g., after the user states they would like to wash their clothes as quickly as possible, the machine chooses to execute a quick wash cycle.
Because of the system's take-charge approach, we anticipate that users will regard this largely as a delegation of accountability to the system. This accountability delegation comes into play in failure cases: for example, in the above quick-wash scenario, if the clothes are not cleaned as effectively as in a normal wash.

Recommendation provides options based on the user's description of the clothes. For example, after the user describes his clothes as colored, made of no special material, and a medium load, the mechanism provides two suggestions with different temperatures and spin speeds. The user selects one. We posit that here, the system is accountable for the quality of recommendations, but the user who makes the choice is ultimately responsible for the outcome.

Instruction provides the most information. It guides users in examining their clothes and, based on their description and requirements, explains multiple washing suggestions. For example, after suggesting an extra rinse, the machine gives a detailed justification. If users feel they have enough information, they can stop by saying 'Stop, I choose the first suggestion'. Here, we expect the user to hold the system accountable only for instruction accuracy.

Although users have equivalent choices in Instruction and Recommendation, the two differ in how the choices are presented. We expect this to be reflected in the Control Capabilities measures of Transparency, Reliability and Predictability.

Command enables the user to set the washing cycle without any information from the machine. Users simply state their requirements instead of pushing buttons. This implies that the user knows what she wants. With this mechanism, we do not expect the user to hold the system accountable for the outcome.
### 5.3 Design and Variables

As we investigate the influence of control capabilities (2.1), we treat these not as measures of the system's actual controllability by the user, but as guidelines whereby a designer can increase a user's sense of control. They appear in two ways: informing the design of the mechanisms (5.2), and as outcome measures (5.5), confirming whether this design manipulation was impactful.

The study itself used a within-subject $2 \times 4$ design, with independent variables of complexity level (low/high, described below) and VUI mechanism (4 mechanisms). For each complexity level, we counterbalanced the order of VUI mechanisms.

Task Complexity: With our exploratory study revealing the importance of task complexity and failure consequences for accountability perception, we varied task complexity for insight into H1 (whether design, via increased user involvement in decisions, can mediate perception of system accountability in high-complexity tasks). Low complexity: routine laundry, common in a household. High complexity: a job involving special material (wool), extra requirements (stained fabric), and non-standard functions.

In our videos, for the low-complexity condition the system attempted to remove mud from clothing; for high complexity, to remove wine stains from a valuable sweater. In all conditions, the washing machine failed to completely clean the clothes.

### 5.4 Procedure

We recruited homeowners with purchasing power for home appliances, using social media advertising and referral of participants (similar to [37, 44]), necessary due to the inclusion criteria and lack of participant compensation. We recruited a subset of survey participants to be interviewed in person, immediately post-survey. The survey took an average of 30 minutes, and the follow-up interviews 20-45 minutes, average 27 minutes.
Survey participants answered demographic questions and watched eight video prototypes: four distinct mechanisms, each performing a high- and a low-complexity task. Videos were labelled by numerical order of appearance, counterbalanced by participant. Participants were then asked to rank the mechanisms by "how accountable each one was for the failed laundry task". Then, they scored each mechanism on the Control Capabilities of Transparency, Predictability and Reliability, and on User Satisfaction. We asked the interviewee participant subset to verbally explain their responses.

### 5.5 Data Collected

The pre-video questionnaire (28 questions) collected participants' demographic information and past experience with non-smart washing machines, including whether they tended to hold non-smart washing machines "responsible for failed laundry tasks". The post-video questionnaire collected participants' ratings of satisfaction and Control Capabilities for each video (i.e., VUI mechanism), while the interviews collected qualitative justifications of participants' survey responses.

Accountability Ranking: After watching each mechanism fail to complete the washing task, participants ranked them (1 (most) to 4 (least) accountable, ties not allowed). Ranking facilitated direct comparisons between short lists of items [47].

Control Capabilities: We sought participant opinions on the Control Capabilities (Transparency, Predictability and Reliability) of each VUI mechanism as presented in the video prototypes. As in [14, 23], they scored each mechanism for Control Capabilities using a slider on a [0-100] point scale.

User Satisfaction: Again with a [0-100] point scale and a slider, we asked participants to respond to three questions:

- How easy would it be to use the voice-assisted system?

- How confusing was the voice-assisted system?

- How satisfied would you be with this interaction?
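Ranked data of this kind (each participant assigning ranks 1-4 across the four mechanisms) is commonly analyzed with a Friedman test. A minimal sketch of the test statistic is shown below; the rankings in the example are hypothetical, not our study data.

```python
def friedman_chi2(ranks):
    """Friedman chi-square statistic for within-subject ranked data.
    `ranks` holds one row per participant; each row gives the rank
    (1 = most ... k = least) that participant assigned to each condition."""
    n, k = len(ranks), len(ranks[0])
    # Sum of ranks received by each condition across all participants.
    col_sums = [sum(row[j] for row in ranks) for j in range(k)]
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in col_sums) - 3.0 * n * (k + 1)

# Hypothetical rankings from five participants over four VUI mechanisms
# (Automation, Recommendation, Instruction, Command) -- illustrative only:
ranks = [
    [1, 2, 3, 4],
    [1, 2, 4, 3],
    [2, 1, 3, 4],
    [1, 3, 2, 4],
    [1, 2, 3, 4],
]
stat = friedman_chi2(ranks)  # compare against a chi-square with k - 1 = 3 df
```

With perfect agreement among n participants over k items, the statistic reaches n(k - 1); for this toy data it is about 11.64, above the chi-square critical value of 7.81 for 3 degrees of freedom at alpha = 0.05.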
### 5.6 Analysis

Perceived Accountability Rankings: We performed Friedman tests (widely used for ranked data [47, 54]) on the mechanisms' accountability rankings to identify any relationship between accountability perception and system automation (which varied with the VUI mechanism in each video), at each level of task complexity (high or low). In post-hoc analysis, we used Bonferroni correction of confidence intervals to compare accountability rankings by VUI mechanism. For all statistical results, we report significance at $\alpha = 0.05$.

Control Capabilities & User Satisfaction: We analyzed each set of [0-100] scores with a repeated-measures ANOVA. Due to a violation of sphericity, we report Greenhouse-Geisser results. Post-hoc analysis included a Bonferroni alpha adjustment. User Satisfaction scores were computed by averaging participant responses to the three questions listed in 5.5, which provided a broad depiction of ease of use, clarity and interaction experience.

Interviews: We used Braun and Clarke's approach for thematic analysis [16]. In repeated passes, two investigators conducted open coding. Afterwards, two other team members checked the coding and brought disagreements to the full team for resolution. This division provided a broader perspective, deepened our understanding and generated multiple discussions around each theme.

## 6 RESULTS

We recruited 15 survey participants (10 male, 5 female; age M = 34.97, SD = 7.86). Of these, we interviewed 8 (3 male, 5 female). Participants were from various ethnic backgrounds but all lived in North America at the time of recruitment.

### 6.1 Quantitative Results: Questionnaires

Pre-Questionnaire Data: We surveyed participants on their past experiences with household technology, including their technology roles within their households. In our exploratory study, we identified two role classifications.
Primary users are enthusiastic about initial setup and ongoing maintenance of home technology; we designated other users as secondary. All respondents indicated they had purchasing power within their households.

Approximately 73% of participants reported enthusiasm about exploring new features on their smart home appliances, and took responsibility for configuring home technology. This suggests that the majority of the participants were primary technology users by our definition.

To assess how participants related the notion of "accountability" to washing machines, we asked where they placed the blame when a non-smart washing machine damaged their clothes. 60% had had that experience and "mostly" or "completely" blamed their washing machine. This seems to dispel the notion that perceived accountability skews towards self in such situations.

![01963e86-ac9a-7ebe-9689-039a43bbcb95_4_213_153_595_315_0.jpg](images/01963e86-ac9a-7ebe-9689-039a43bbcb95_4_213_153_595_315_0.jpg)

Figure 1: Average responsibility rankings of the experiment's VUI mechanisms by task complexity, for the question "How accountable (responsible) is the system if something goes wrong?" Rank 1 (greatest) to 4. Error bars are standard error of the mean.

Table 1: Relative VUI Mechanism Accountability (Bonferroni-adjusted).
| Mechanisms Compared | Low-Complexity | High-Complexity |
| --- | --- | --- |
| Command-Automation | $z = -4.10$, $p < 0.001$* | $z = -3.25$, $p = 0.007$* |
| Command-Recommendation | $z = -2.97$, $p = 0.018$* | $z = -2.97$, $p = 0.018$* |
| Command-Instruction | $z = -2.83$, $p = 0.02$* | $z = -2.546$, $p = 0.065$ |
| Automation-Recommendation | $z = -1.131$, $p = 1.0$ | $z = -0.283$, $p = 1.0$ |
| Automation-Instruction | $z = -1.273$, $p = 1.0$ | $z = -0.707$, $p = 1.0$ |
| Recommendation-Instruction | $z = -0.141$, $p = 1.0$ | $z = -0.424$, $p = 1.0$ |
Perceived Accountability Rankings: Figure 1 shows participants' rankings of mechanism accountability for the portrayed outcome.

Friedman tests at each task complexity found automation level (varied through mechanism type) statistically significant for accountability ranking: for low-complexity tasks, $\chi^2(3, N = 15) = 18.28$, $p < 0.001$*; for high-complexity tasks, $\chi^2(3, N = 15) = 13.32$, $p = 0.004$*. Post-hoc analysis (Table 1) with Bonferroni correction of confidence intervals found that for both high- and low-complexity tasks, Command had significantly lower accountability than Full Automation and Recommendation. For low-complexity tasks, Command also had significantly lower accountability than Instruction.

Control Capability scores: We analyzed participant scores for each mechanism in the Control Capability (CC) dimensions of Transparency, Predictability and Reliability. Differences between CC scores for the Recommendation and Instruction mechanisms (Figure 2) suggest that option delivery impacts the experience of control over the interaction. For example, though Recommendation and Instruction offer similar choices to users, Recommendation was consistently seen as less transparent, predictable and accountable.

Figure 2 reports average ratings for CC dimensions by VUI mechanism; it shows a trend suggesting that increased automation is linked to reduced perceived transparency and predictability. We found statistical significance only for predictability in the high-complexity task. However, in our post-hoc test with Bonferroni alpha adjustment, we found no statistically significant differences between specific mechanisms for predictability.
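As a concrete sketch of the ranking analysis in Section 5.6, the Friedman omnibus test and Bonferroni-corrected pairwise follow-ups can be run with SciPy. The ranks below are fabricated for illustration, and the use of Wilcoxon signed-rank tests for the pairwise step is our assumption, not the paper's stated procedure:

```python
from scipy.stats import friedmanchisquare, wilcoxon

# Illustrative accountability ranks (1 = most accountable) from 8
# participants, one list per VUI mechanism -- not the study data.
command        = [4, 4, 3, 4, 4, 3, 4, 4]
automation     = [1, 1, 1, 2, 1, 1, 1, 2]
recommendation = [2, 3, 2, 1, 2, 2, 3, 1]
instruction    = [3, 2, 4, 3, 3, 4, 2, 3]

# Omnibus test across the four related (within-subject) samples.
stat, p = friedmanchisquare(command, automation, recommendation, instruction)

# Pairwise signed-rank tests, Bonferroni-corrected for 6 comparisons.
pairs = {"Command-Automation": (command, automation),
         "Command-Recommendation": (command, recommendation),
         "Command-Instruction": (command, instruction)}
corrected = {name: min(1.0, wilcoxon(a, b).pvalue * 6)
             for name, (a, b) in pairs.items()}
```

A significant omnibus `p` licenses the pairwise comparisons; the multiplier 6 covers all mechanism pairs.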
Accountability and user satisfaction: The trend of the average satisfaction scores in Figure 3 suggests that participants preferred the Automation mechanism to those requiring more user involvement, for both low- and high-complexity tasks. Participants also reported higher satisfaction with Instruction and Command for high task complexity. However, we did not find statistical significance for either task (High Complexity: F(2.075, 29.05), p = 0.576; Low Complexity: F(1.885, 26.391), p = 0.258).

![01963e86-ac9a-7ebe-9689-039a43bbcb95_4_933_159_740_368_0.jpg](images/01963e86-ac9a-7ebe-9689-039a43bbcb95_4_933_159_740_368_0.jpg)

Figure 2: Participant scores (1-100) on control capabilities of VUI mechanisms (15 samples/bar). Error bars are standard error of the mean.

Table 2: ANOVA results for Control Capabilities and User Satisfaction
| Ctrl Capability | Low Complexity | High Complexity |
| --- | --- | --- |
| Transparency | $F(1.848, 25.877)$, $p = 1$ | $F(1.887, 26.423)$, $p = 0.314$ |
| Predictability | $F(1.514, 21.193)$, $p = 0.079$ | $F(1.94, 27.164)$, $p = 0.028$* |
| Reliability | $F(1.917, 26.834)$, $p = 0.305$ | $F(2.044, 28.616)$, $p = 0.275$ |
| Satisfaction | $F(1.885, 26.391)$, $p = 0.258$ | $F(2.075, 29.05)$, $p = 0.576$ |
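The repeated-measures ANOVA reported in Table 2 can be sketched from first principles. This minimal one-way version (hypothetical helper, made-up scores) omits the sphericity test and Greenhouse-Geisser df correction that produce the fractional degrees of freedom in the table:

```python
import numpy as np
from scipy.stats import f as f_dist

def rm_anova(scores):
    """One-way repeated-measures ANOVA.
    scores: (n_subjects, k_conditions) array of ratings."""
    X = np.asarray(scores, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Partition total variability into condition, subject and error parts.
    ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((X - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    return F, df_cond, df_err, f_dist.sf(F, df_cond, df_err)

# Illustrative [0-100] scores: 4 subjects x 3 conditions (not study data).
F, df1, df2, p = rm_anova([[10, 20, 31],
                           [11, 22, 31],
                           [ 9, 19, 29],
                           [12, 22, 33]])
```

When sphericity is violated, the Greenhouse-Geisser epsilon shrinks both df values before computing the p-value; libraries such as pingouin provide this correction directly.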
Greenhouse-Geisser results are presented due to the violation of sphericity. A post-hoc test with Bonferroni alpha adjustment was not significant for predictability.

### 6.2 Qualitative Results: Interviews

Eight questionnaire participants (4 female), selected through snowball sampling [66], were interviewed (Section 5.5). All were adults living with others, self-identified as primary or secondary users of home appliances, and had purchasing power for home appliances in their households. No compensation was provided.

In the following we organize our analysis of the interview transcripts as laid out in Section 4. Mechanism names here replace the numerical labels that participants used to refer to the videos.

When asked to revisit their ranking of accountability across the VUI mechanisms, the majority of participants identified Full Automation as the most accountable. However, this was not unanimous: P3 suggested all mechanisms were "completely accountable", and P8 found the Recommendation mechanism most accountable. These individual variations further justify the use of Control Capabilities to identify design factors that contribute to accountability.

Automation is seen as most accountable: Automation was deemed by the majority of participants (both primary and secondary) the most accountable, because the machine gives minimal information and selects the washing cycle by itself. "Not given a choice" [P4] and "machine do[es] whatever... it feels the best" [P7] are reasons participants ranked the Automation mechanism as most accountable.

'I think in [Automation], the machine should take the most responsibility since it makes all the decisions...' [P2]

![01963e86-ac9a-7ebe-9689-039a43bbcb95_4_988_1797_594_292_0.jpg](images/01963e86-ac9a-7ebe-9689-039a43bbcb95_4_988_1797_594_292_0.jpg)

Figure 3: Average user satisfaction score (1-100) for low complexity and high complexity tasks, by VUI mechanism.
Error bars: standard error of the mean.

Shared accountability: Recommendation and Instruction were viewed as generating a sense of shared accountability by giving suggestions for participants to choose from. Participants remain accountable for the final decision: if "something goes wrong, it should be your fault instead of the machine" [P5]. However, "maybe the user will blame the machine for giving [...] the wrong recommendation" [P6] in the case of an error. Some suggested that since the system "understands the situation [...] it's more accountable" [P7].

Control Capability Dimensions:

A. Transparency - Participants generally agreed that all mechanisms were transparent enough for them to understand the interactions. As in Figure 2, most participants rated and mentioned Command and Instruction as "more transparent" [P3, P4, P5, P7]. Some viewed Command as "the most transparent" [P4], since the user has complete control in this traditional method of doing laundry.

'...To me transparency relates to the extent to which they use or understand what's going on within the machine, so in video number one that I mentioned a machine is pretty much making the selections on behalf of the user [...] so the user doesn't really know what's going on. Whereas [Full Automation] for instructions, the machine just told me what the user is saying so the user one hundred percent still all the time what's going on... ' [P7]

Instruction was also viewed as transparent since it provides detailed information and participants understand the procedure.

'... the [instruction-based] with lots of details. Also, it's transparent. Although it's a bit annoying but it is transparent... ' [P4]

Some responses seem to indicate that the participants found the amount of information excessive. Additionally, some participants found Automation clearer since the interaction process was less complex and the "machine takes care of everything" [P7].

B.
Predictability - Participants tended to consider Instruction predictable since it is "the most specific" [P6] and it "give[s] explanations" [P1]. Participants claimed that they "trust it most" [P1] and described it as an expert guide:

'It's so smart, the machine acts as a teacher to teach you like, uh, what should you do? I don't have to worry about anything. He just tells you everything... ' [P5]

The Command mechanism also received high predictability ratings. Some participants suggested that the user in the video must have been familiar with the system already:

'[Command based mechanism]... cause the user knows what he or she wants and maybe that's because he/she has already tried it before. Then for the [instruction based mechanism] because you have all the descriptions of the options the results would also be predictable.' [P7]

C. Reliability - Participants also tended to view Instruction as the most reliable for both simple and complicated tasks, since it gathers the most information and "seems to know a lot" [P1, P2].

'[...] if something goes wrong, it should be your fault instead of the machine. Because the machine let you know all the consequences before.' [P5]

Interviewer: 'Why do you think that the instruction-based mechanism is more reliable?' '... I believe that the machine knows what it is doing because it has all the information about the laundry process that I don't.' [P4]

Satisfaction: Participants' satisfaction with the interaction depended on both the VUI mechanism and task complexity. The "concise [..] and very quick" [P3] Automation mechanism was the most satisfying for some, especially for routine tasks, because "you don't have to think about what you're doing" [P6].

For complex, critical or high-stakes washing tasks such as "really expensive clothes" [P6], the "detailed instructions" [P3] of instruction-based mechanisms were considered more satisfying.
Participants appreciated additional information when the perceived cost of error was high (e.g., damaging expensive clothes). + +'... it depends on what you're trying to wash like if I'm gonna wash really expensive clothes that I cannot mess up. Uh, then I would have done the third one because whatever I don't know what to do, it tells me exactly what's the stuff to take separate the clothes and all that stuff.'[P6] + +## 7 DISCUSSION + +This study demonstrates a difference in accountability between our designed mechanisms. The results of both qualitative and quantitative analysis supports our first hypothesis of a positive relationship between system automation and accountability. This relationship is well represented by P2's interview response that "...the machine should take the most responsibility since it makes all the decisions.". A similar trend has been reported by Sheridan et al.: "... individuals using the [automated] system may feel that the machine is in complete control, disclaiming personal accountability for any error or performance degradation" [57]. + +Though our results showed that Automation takes the highest accountability, two participants provided insights on other potential design factors that affect accountability. P5 argued that Instruction should be accountable when the system fails because users "wasted" their time listening to its verbose instructions. P6 indicated the importance of claims about the system: "Actually that really depends on what the machine says it can do, you know if it says like 'I'm gonna be able to distinguish colour clothes from regular clothes and I'm not gonna mess up.' and it messes up then it's the machine's fault." Setting realistic expectations about a system's abilities may help manage its accountability. + +Our results echoed Suchman's prediction that automation can lead to shared accountability between humans and machines [64], and further that level of automation impact perceived accountability. 
Users did not experience the washing machines firsthand, a limitation imposed by the state of technology. However, participants empathized with the common experience of doing laundry sufficiently to report a projected level of satisfaction with the interaction. They showed no difficulty in bridging the gap between experienced and imagined scenarios, making comments such as "there is common ground between me and the machine" [P3].

Our results suggest that task complexity does influence user satisfaction. Our qualitative analysis made it clear that for the high-complexity laundry task, participants were more willing to ask for guidance and more likely to include the VUI agent in the decision-making process than for the low-complexity task. However, they might prefer Command or Automation once they became comfortable with the system. Multiple users expressed the desire to transition to command-based systems once they had learned about the washing machine's hidden functionality through the instruction-based or recommendation-based mechanisms. This key finding motivates guided VUI mechanisms during first-time, naive access, which could be invaluable for secondary users of home appliances.

The qualitative results also support our quantitative analysis outcomes. Some participants stated that only experienced users who found the washing machine predictable would use the command-based mechanism. This may have contributed to an inflated predictability rating for the command-based mechanism. Though results suggest Automation is perceived as the most accountable, and Command the least, it is difficult to make a conclusive judgment on shared accountability for Recommendation and Instruction. Each of these mechanisms was scored differently in one Control Capability; however, the difference was not significant.

### 7.1 Implications for design

We encapsulate these findings in VUI design recommendations:

1.
Accountability-aware design must consider context, specifically task complexity, type of user(s) and ambiguity of the interactions.

2. Automation has opposing effects on accountability and user satisfaction. A highly automated system may be satisfying to use, but in case of failure, users are more likely to find it blameworthy. Designers should consider this duality in the unique context of their product and its anticipated use.

3. User perceptions of Reliability, Transparency and Predictability depend on both the available choices and how those choices are presented, particularly for high-complexity tasks. Designers should consider providing justification for system-presented choices, especially for high-stakes tasks.

4. Results suggest that detailed instruction- and recommendation-based mechanisms improve learnability, but could eventually become too repetitive. Designers should consider transitional mechanisms, in which system operation gradually provides less explanation and becomes more automated.

## 8 CONCLUSIONS AND FUTURE WORK

We investigated the concept of accountability in home appliance VUIs. We examined automation level as a parameter that could impact accountability delegation, by designing and studying four mechanisms which varied automation and user involvement in decision-making, in simple and complex tasks.

Our primary study sought to characterize differences between these mechanisms. We found our use of video prototypes a successful basis for initial discussions on design concepts, providing non-obvious insights.

Qualitative and quantitative results support our first hypothesis of a positive relationship between automation and accountability, which held for both high- and low-complexity tasks.
Concerning our second hypothesis (that system automation increases user satisfaction), the quantitative result (N = 15) was not statistically significant, but trended towards users preferring the most automated system. Interviews consistently supported H2 in that increased user involvement reduced satisfaction. This creates a dilemma for designers of automated systems, who must minimize users' cognitive load without saddling the system with complete accountability for errors.

We found participants more receptive to instructions and recommendations when they were concerned about the outcome of a process, or when they first used a system. We recommend that VUI designers implement guided interfaces as well as command-based ones. This gives users the freedom to transition from guided use to command-based use without leaving the system (and its designers) accountable for mistakes.

Future Work: From this foundation, we recommend next steps. Sample Size - Our study size was appropriate for this early stage of investigation, revealing clear trends that perceptions of accountability can be directed by design, and supporting greater investment in more realistic study approaches. However, increasing the size and diversity of even this exploratory approach would provide greater statistical power and stronger quantitative insights.

Mechanism design - We examined four distinct mechanisms in isolation. As suggested in Section 7.1, we propose a mechanism that adjusts its automation as the user becomes more familiar with the device. The benefits of such a mechanism would need to be confirmed in a longitudinal study.

Metrics - User satisfaction is a volatile metric. In this study it is especially so because participants did not interact with a physical prototype.
To minimize this limitation on user empathy, we assessed common moderate failure outcomes instead of complete failures (i.e., a stained rather than a destroyed shirt). We will have more realistic results when users can reflect on their satisfaction level by observing the laundry process outcome on their own clothes. + +Real results - A functional VUI system and, separately, a machine that truly enacts its instructions would advance the reality of the participants' experience and make their responses more reliable. A real system would succeed more often than fail, as opposed to our scenarios which aimed to make use of a short study session. When studied longitudinally within real homes in actual use, we can follow the development of trust and familiarity over time. + +## REFERENCES + +[1] 4 key measurements for customer satisfaction, Jan 2017. + +[2] Demographics of mobile device ownership and adoption in the united states, Jun 2019. + +[3] "accountability". In Merriam-Webster.com. 2020. Date Accessed: May 25, 2020. + +[4] A. Abdul, J. Vermeulen, D. Wang, B. Y. Lim, and M. Kankanhalli. Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10. 1145/3173574.3174156 + +[5] Amazon. Design process: The process of thinking through the design of a voice experience, 2020. + +[6] S. Amershi, D. Weld, M. Vorvoreanu, A. Fourney, B. Nushi, P. Collis-son, J. Suh, S. Iqbal, P. N. Bennett, K. Inkpen, and et al. Guidelines for human-ai interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605. 3300233 + +[7] T. Ammari, J. Kaye, J. Y. Tsai, and F. Bentley. Music, search, and iot: How people (really) use voice assistants. ACM Trans. Comput.-Hum. 
Interact., 26(3), Apr. 2019. doi: 10.1145/3311956

[8] V. Bellotti and K. Edwards. Intelligibility and accountability: Human considerations in context-aware systems. Hum.-Comput. Interact., 16(2):193-212, Dec. 2001. doi: 10.1207/S15327051HCI16234_05

[9] D. Bohn. Amazon says 100 million alexa devices have been sold - what's next?, Jan 2019.

[10] D. Boos, H. Guenter, G. Grote, and K. Kinder. Controllable accountabilities: the internet of things and its challenges for organisations. Behaviour & Information Technology, 32(5):449-467, 2013. doi: 10.1080/0144929X.2012.674157

[11] M. Braun, A. Mainz, R. Chadowitz, B. Pfleging, and F. Alt. At your service: Designing voice assistant personalities to improve automotive user interfaces. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300270

[12] G. Button and W. Sharrock. The organizational accountability of technological work. Social Studies of Science, 28(1):73-102, 1998. doi: 10.1177/030631298028001003

[13] C. Castelfranchi and R. Falcone. Towards a theory of delegation for agent-based systems. Robotics and Autonomous Systems, 24(3):141-157, 1998. Multi-Agent Rationality. doi: 10.1016/S0921-8890(98)00028-1

[14] C. M. Cheung and M. K. Lee. Understanding the sustainability of a virtual community: model development and empirical test. Journal of Information Science, 35(3):279-298, 2009.

[15] L. Clark, N. Pantidi, O. Cooney, P. Doyle, D. Garaialde, J. Edwards, B. Spillane, E. Gilmartin, C. Murad, C. Munteanu, and et al. What makes a good conversation? challenges in designing truly conversational agents. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300705

[16] V. Clarke and V. Braun. Thematic analysis.
The Journal of Positive Psychology, 12(3):297-298, 2017. doi: 10.1080/17439760.2016. 1262613 + +[17] M. H. Cohen, M. H. Cohen, J. P. Giangola, and J. Balogh. Voice user interface design. Addison-Wesley Professional, 2004. + +[18] M. L. Cummings. Automation and accountability in decision support system interface design. 2006. + +[19] A. Datta, S. Sen, and Y. Zick. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 598-617, May 2016. doi: 10.1109/SP.2016.42 + +[20] P. Dourish. Accounting for System Behavior: Representation, Reflection, and Resourceful Action, p. 145-170. MIT Press, Cambridge, MA, USA, 1997. + +[21] P. Dourish. Where the action is: the foundations of embodied interaction. MIT press, 2004. + +[22] C. Dwork and D. K. Mulligan. It's not privacy, and it's not fair. Stan. L. Rev. Online, 66:35, 2013. + +[23] M. Giuliani, M. E. Foster, A. Isard, C. Matheson, J. Oberlander, and A. Knoll. Situated reference in a hybrid human-robot interaction system. In Proceedings of the 6th International Natural Language Generation Conference (INLG 2010), 2010. + +[24] B. Goodman and S. Flaxman. European union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 38(3):50-57, Oct. 2017. doi: 10.1609/aimag.v38i3.2741 + +[25] Google. Conversation design guideline, 2020. + +[26] R. A. Harris. Voice interaction design: crafting the new conversational speech systems. Elsevier, 2004. + +[27] S. Hergeth, L. Lorenz, R. Vilimek, and J. F. Krems. Keep your scanners peeled: Gaze behavior as a measure of automation trust during highly automated driving. Human Factors, 58(3):509-519, 2016. PMID: 26843570. doi: 10.1177/0018720815625744 + +[28] J. L. Herlocker, J. A. Konstan, and J. Riedl. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, CSCW '00, p. 241-250. 
Association for Computing Machinery, New York, NY, USA, 2000. doi: 10.1145/358916.358995

[29] E. Horvitz. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '99, p. 159-166. Association for Computing Machinery, New York, NY, USA, 1999. doi: 10.1145/302979.303030

[30] K. Höök. Steps to take before intelligent user interfaces become real. Interacting with Computers, 12(4):409-426, 2000. doi: 10.1016/S0953-5438(99)00006-5

[31] R. Kang, A. Guo, G. Laput, Y. Li, and X. Chen. Minuet: Multimodal interaction with an internet of things. In Symposium on Spatial User Interaction, SUI '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3357251.3357581

[32] B. Knijnenburg, P. Bahirat, Y. He, M. Willemsen, Q. Sun, and A. Kobsa. Iuiot: Intelligent user interfaces for iot. In Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion, IUI '19, p. 139-140. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3308557.3313121

[33] J. Kowalski, A. Jaskulska, K. Skorupska, K. Abramczuk, C. Biele, W. Kopeć, and K. Marasek. Older adults and voice interaction: A pilot study with google home. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290607.3312973

[34] J. A. Kroll, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu. Accountable algorithms. U. Pa. L. Rev., 165:633, 2016.

[35] T. Kulesza, M. Burnett, W.-K. Wong, and S. Stumpf. Principles of explanatory debugging to personalize interactive machine learning. In O. Brdiczka and P. Chau, eds., Proceedings of the 20th International Conference on Intelligent User Interfaces, pp. 126-137. ACM, New York, USA, January 2015. doi: 10.1145/2678025.2701399

[36] D. Kumar, R. Paccagnella, P. Murley, E.
Hennenfent, J. Mason, A. Bates, and M. Bailey. Emerging threats in internet of things voice services. IEEE Security Privacy, 17(4):18-24, July 2019. doi: 10.1109/MSEC.2019.2910013

[37] J. La Delfa, M. A. Baytaş, H. Ngari, R. Patibanda, R. Khot, and F. Mueller. Drone chi: Somaesthetic human-drone interaction. Apr. 2020. doi: 10.1145/3313831.3376786

[38] L. Leahu, M. Cohn, and W. March. How categories come to matter. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 3331-3334. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2466455

[39] J. R. Lewis. Ibm computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. Int. J. Hum.-Comput. Interact., 7(1):57-78, Jan. 1995. doi: 10.1080/10447319509526110

[40] B. Y. Lim and A. K. Dey. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th International Conference on Ubiquitous Computing, UbiComp '09, p. 195-204. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1620545.1620576

[41] B. Lubars and C. Tan. Ask not what ai can do, but what ai should do: Towards a framework of task delegability. In Advances in Neural Information Processing Systems, pp. 57-67, 2019.

[42] T. Malche and P. Maheshwary. Internet of things (iot) for building smart home system. In 2017 International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), pp. 65-70, Feb 2017. doi: 10.1109/I-SMAC.2017.8058258

[43] Y. Mehdi. Windows 10: Powering the world with 1 billion monthly active devices, Mar 2020.

[44] D. R. Millen. Rapid ethnography: Time deepening strategies for hci field research. In Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, DIS '00, p. 280-286. Association for Computing Machinery, New York, NY, USA, 2000.
doi: 10.1145/347642.347763

[45] Y. Mittal, P. Toshniwal, S. Sharma, D. Singhal, R. Gupta, and V. K. Mittal. A voice-controlled multi-functional smart home automation system. In 2015 Annual IEEE India Conference (INDICON), pp. 1-6, Dec 2015. doi: 10.1109/INDICON.2015.7443538

[46] K. L. Mosier, L. J. Skitka, M. D. Burdick, and S. T. Heers. Automation bias, accountability, and verification behaviors. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 40(4):204-208, 1996. doi: 10.1177/154193129604000413

[47] S. Nobarany, L. Oram, V. K. Rajendran, C.-H. Chen, J. McGrenere, and T. Munzner. The design space of opinion measurement interfaces: exploring recall support for rating and ranking. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 2035-2044, 2012.

[48] D. A. Norman. How might people interact with agents. Commun. ACM, 37(7):68-71, July 1994. doi: 10.1145/176789.176796

[49] X. Page, P. Bahirat, M. Safi, B. Knijnenburg, and P. Wisniewski. The internet of what?: Understanding differences in perceptions and adoption for the internet of things. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2:1-22, Dec 2018. doi: 10.1145/3287061

[50] R. Parasuraman and V. Riley. Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2):230, Jun 1997.

[51] R. Parasuraman, T. B. Sheridan, and C. D. Wickens. A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 30(3):286-297, May 2000. doi: 10.1109/3468.844354

[52] M. Porcheron, J. E. Fischer, S. Reeves, and S. Sharples. Voice interfaces in everyday life. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18. Association for Computing Machinery, New York, NY, USA, 2018.
doi: 10.1145/3173574. 3174214 + +[53] E. Rader, K. Cotter, and J. Cho. Explanations as mechanisms for supporting algorithmic transparency. In Proceedings of the 2018 CHI + +Conference on Human Factors in Computing Systems, CHI '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10. 1145/3173574.3173677 + +[54] R. Riffenburgh. Statistics in Medicine (3rd ed.). Academic Press, 2012. + +[55] I. Robertson and P. Kortum. An investigation of different methodologies for rating product satisfaction. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63(1):1259-1263, 2019. doi: 10.1177/1071181319631071 + +[56] J. A. Rode and E. S. Poole. Putting the gender back in digital housekeeping. In Proceedings of the 4th Conference on Gender & IT, pp. 79-90, 2018. + +[57] T. Sheridan, T. Vámos, and S. Aida. Adapting automation to man, culture and society. Automatica, 19(6):605 - 612, 1983. doi: 10.1016/ 0005-1098(83)90024-9 + +[58] B. Shneiderman. Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight. Proceedings of the National Academy of Sciences, 113(48):13538-13540, 2016. doi: 10.1073/pnas. 1618211113 + +[59] J. SILVA. Increasing perceived agency in human-ai interactions: Learnings from piloting a voice user interface with drivers on uber. Ethnographic Praxis in Industry Conference Proceedings, 2019(1):441-456, 2019. doi: 10.1111/1559-8918.2019.01299 + +[60] L. Skitka, K. Mosier, and M. BURDICK. Accountability and automation bias. International Journal of Human-Computer Studies, 52:701-717, 04 2000. doi: 10.1006/ijhc. 1999.0349 + +[61] B. Stigall, J. Waycott, S. Baker, and K. Caine. Older adults' perception and use of voice user interfaces: A preliminary review of the computing literature. In Proceedings of the 31st Australian Conference on Human-Computer-Interaction, OZCHI'19, p. 423-427. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10. 1145/3369457.3369506 + +[62] N. 
Stout, A. Dennis, and T. Wells. The buck stops there: The impact of perceived accountability and control on the intention to delegate to software agents. AIS Transactions on Human-Computer Interaction, 6, 032014. doi: 10.17705/1thci.00058 + +[63] L. Suchman. Human-machine reconfigurations: Plans and situated actions. Cambridge university press, 2007. + +[64] L. Suchman and J. Weber. Human-machine autonomies. Autonomous weapons systems: law, ethics, policy. Cambridge University Press, Cambridge, pp. 75-102, 2016. + +[65] P. E. Tetlock and R. Boettger. Accountability: A social magnifier of the dilution effect. Journal of personality and social psychology, 57(3):388, 1989. + +[66] S. J. Tracy. Qualitative quality: Eight "big-tent" criteria for excellent qualitative research. Qualitative Inquiry, 16(10):837-851, 2010. doi: 10.1177/1077800410383121 + +[67] D. Wang, D. Lo, J. Bhimani, and K. Sugiura. Anycontrol - iot based home appliances monitoring and controlling. In Proceedings of the 2015 IEEE 39th Annual Computer Software and Applications Conference - Volume 03, COMPSAC '15, p. 487-492. IEEE Computer Society, USA, 2015. doi: 10.1109/COMPSAC.2015.259 + +[68] J. Zimmerman. Video sketches: Exploring pervasive computing interaction designs. IEEE pervasive computing, 4(4):91-94, 2005. 
# SwipeRing: Gesture Typing on Smartwatches Using a Circular Segmented QWERTY

Category: Research

![01963eaf-a807-7e3b-884c-312e448feea0_0_220_384_1353_420_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_0_220_384_1353_420_0.jpg)

Figure 1: SwipeRing arranges the standard QWERTY layout around the edge of a smartwatch in seven zones. To enter a word, the user connects the zones containing the target letters by drawing gestures on the screen, like gesture typing on a virtual QWERTY. A statistical decoder interprets the input and enters the most probable word. A suggestion bar displays other possible words. The user can swipe left or right on the suggestion bar to see additional suggestions. Tapping on a suggestion replaces the last entered word. One-letter and out-of-vocabulary words are entered by repeated strokes from/to the zones containing the target letters, in which case the keyboard first enters the two one-letter words in the English language (see the second-to-last image from the left), then the other letters in the sequence in which they appear in the zones (like multi-tap). Users can also repeatedly tap (instead of stroke) on the zones to enter the letters. The keyboard highlights the zones when the finger enters them and traces all finger movements.
This figure illustrates the process of entering the phrase "the world is a stage" on the SwipeRing keyboard (upper sequence) and on a smartphone keyboard (bottom sequence). The resemblance between the gestures is clearly visible.

## Abstract

Most text entry techniques for smartwatches require repeated taps to enter one word, occupy most of the screen, or use layouts that are difficult to learn. Users are usually reluctant to adopt these techniques since the skills acquired in learning them cannot be transferred to other devices. SwipeRing is a novel keyboard that arranges the QWERTY layout around the bezel of a smartwatch, divided into seven zones, to enable gesture typing. These zones are optimized for usability and to maintain similarities between the gestures drawn on a smartwatch and on a virtual QWERTY to facilitate skill transfer. Its circular layout keeps most of the screen available. We compared SwipeRing with C-QWERTY, which uses a similar layout but does not divide the keys into zones or optimize for skill transfer and target selection. In a study, SwipeRing yielded a 33% faster entry speed (16.67 WPM) and a ${56}\%$ lower error rate than C-QWERTY.

Index Terms: Human-centered computing—Text input; Human-centered computing—Gestural input

## 1 INTRODUCTION

Smartwatches are becoming increasingly popular among mobile users [31]. However, the absence of an efficient text entry technique for these devices limits smartwatch interaction to mostly controlling applications running on smartphones (e.g., pausing a song on a media player or rejecting a phone call), checking notifications on incoming text messages and social media posts, and using them as fitness trackers to record daily physical activity. Text entry on smartwatches is difficult for several reasons.
First, the smaller key sizes of miniature keyboards make it difficult to tap on the target keys (the "fat-finger problem" [52]), resulting in frequent input errors even when augmented with a predictive system. Correcting these errors is also difficult, and often results in additional errors. To address this, many existing keyboards use a multi-action approach to text entry, where the user performs multiple actions to enter one letter (e.g., multiple taps). This increases not only learning time but also physical and mental demands. Besides, most existing keyboards cover much of the smartwatch touchscreen (50-85%), reducing the real estate available to view or interact with the elements in the background. Many keyboards for smartwatches that use novel layouts [1] do not facilitate skill transfer; that is, the skills acquired in learning a new keyboard are usually not usable on other devices. This discourages users from learning a new technique. Further, most of these keyboards were designed for square watch-faces and thus do not always work on round screens. Finally, some techniques rely on external hardware, which is impractical for wearable devices.

To address these issues, we present SwipeRing, a circular keyboard that sits around the smartwatch bezel to enable gesture typing with the support of a statistical decoder. Its circular layout keeps most of the touchscreen available to view additional information and perform other tasks. It uses a QWERTY-like layout divided into seven zones that are optimized to provide comfortable areas to initiate and release gestures, and to maintain similarities between the gestures drawn on a virtual QWERTY and on SwipeRing to facilitate skill transfer.

The remainder of the paper is organized as follows. First, we discuss the motivation for the work, followed by a review of the literature in the area. We then introduce the new keyboard and discuss its rigorous optimization process.
We present the results of a user study that compared the performance of the proposed keyboard with the C-QWERTY keyboard, which uses a similar layout but does not divide the keys into zones or optimize for skill transfer and target selection. Finally, we conclude the paper with potential future extensions of the work.

## 2 MOTIVATION

The design of SwipeRing is motivated by the following considerations.

### 2.1 Free-up Touchscreen Real-Estate

On a ${30.5}\mathrm{\;{mm}}$ circular watch, a standard QWERTY layout without and with the suggestion bar occupies about ${66}\% \left( {{480}{\mathrm{\;{mm}}}^{2}}\right)$ and ${85}\%$ $\left( {{621}{\mathrm{\;{mm}}}^{2}}\right)$ of the screen, respectively (Fig. 2). On the same device, our technique, SwipeRing, occupies only about ${36}\% \left( {{254.34}{\mathrm{\;{mm}}}^{2}}\right)$ of the screen, close to half that of the QWERTY layout. Saving screen space is important since the extra space could be used to display additional information and to make sure that the interface is not cluttered, which affects performance [23]. For example, the extra space could be used to display the email users are responding to or more of what they have composed. The former can keep users aware of the context. The latter improves writing quality [55, 57] and performance [17]. Numerous studies have verified this in various settings, contexts, and devices $\left\lbrack {{40} - {42},{44} - {46}}\right\rbrack$ .

![01963eaf-a807-7e3b-884c-312e448feea0_1_202_731_619_303_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_1_202_731_619_303_0.jpg)

Figure 2: When entering the phrase "the work is done take a coffee break" with a smartwatch QWERTY, only the last 10 characters are visible (left), while the whole phrase (37 characters) is visible with SwipeRing (right). Besides, there is extra space available below the floating suggestion bar to display additional information.
### 2.2 Facilitate Skill Transfer

The skill acquired in using a new smartwatch keyboard is usually not transferable to other devices. This discourages users from learning a new technique. The keyboards that attempt to facilitate skill transfer are miniature versions of QWERTY that are difficult to use due to the small key sizes. To mitigate this, most of these keyboards rely heavily on statistical decoders, making the entry of out-of-vocabulary words very difficult, if not impossible. SwipeRing uses a different approach. Although gesture typing is much faster than tapping [29], it is not yet a dominant text entry method on mobile devices. SwipeRing strategically arranges the letters in the zones so that the gestures drawn on a virtual QWERTY and on SwipeRing remain similar, enabling the same (or a very similar) gesture to enter the same word on different devices. The idea is that this will encourage gesture typing by facilitating skill transfer from smartphones to smartwatches and vice versa.

### 2.3 Increase Usability

As a result of an optimization process, the layout is strategically divided into seven zones, accounting for the mean contact area of index finger touch (between 28.5 and ${33.5}{\mathrm{\;{mm}}}^{2}$ [53]), to facilitate comfortable and precise target selection during text entry [37]. The zones range between 29.34 and ${58.68}{\mathrm{\;{mm}}}^{2}$ (lengths between 9.0 and 18.0 $\mathrm{{mm}}$ ), which are within the recommended range for target selection on both smartphones $\left\lbrack {{25},{30},{38}}\right\rbrack$ and smartwatches $\left( {{7.0}\mathrm{\;{mm}}}\right) \left\lbrack {10}\right\rbrack$ . SwipeRing employs the whole screen for drawing gestures, which is more comfortable than drawing gestures on a miniature QWERTY. Unlike most virtual keyboards, SwipeRing requires users to slide their fingers from/to the zones instead of tapping, which also makes target selection easier.
Existing work on eyes-free bezel-initiated swipe on circular layouts revealed that the most accurate layouts have 6-8 segments [58]. SwipeRing enables the entry of out-of-vocabulary words through a multi-tap-like approach [9], where users repeatedly slide their fingers from/to the zone that contains the target letter until the letter is entered (see Section 5.2 for further details). Besides, research showed that radial interfaces on circular devices visually appear to take less space even when they occupy the same area as rectangular interfaces, which not only increases clarity but also makes the interface more pleasant and attractive [43].

### 2.4 Face Agnostic

Since SwipeRing arranges the keys around the edge of a smartwatch, it works on both round and square/rectangular smartwatches. To validate this, we investigated whether the gestures drawn on a square smartwatch and a circular smartwatch are comparable to the ones drawn on a virtual QWERTY. In a Procrustes analysis on the 10,000 most frequent words drawn on these devices, SwipeRing yielded a score of 114.81 with the square smartwatch and 118.48 with the circular smartwatch. This suggests that the gestures drawn on these devices are very similar. In fact, the square smartwatch yielded a slightly better score than the circular smartwatch (the smaller the score, the better the similarity; Section 4.2), most likely because the square watch-face and the smartphone keyboard are similar in shape (Fig. 3).

![01963eaf-a807-7e3b-884c-312e448feea0_1_947_901_694_240_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_1_947_901_694_240_0.jpg)

Figure 3: Gestures for the most common word "the" on a circular SwipeRing, a square SwipeRing, and a virtual QWERTY.

## 3 RELATED WORK

This section covers the most common text entry techniques for smartwatches. Tables 1 and 2 summarize the performance of some of these techniques. For a comprehensive review of existing text entry techniques for smartwatches, we refer to Arif et al. [1].
### 3.1 QWERTY Layout

Most text entry techniques for smartwatches are miniature versions of the standard QWERTY that use multi-step approaches to increase the target area. ZoomBoard [36] displays a miniature QWERTY that enables iterative zooming to enlarge regions of the keyboard for comfortable tapping. SplitBoard [18] displays half of a QWERTY keyboard so that the keys are large enough for tapping. Users flick left and right to see the other half of the keyboard. SwipeBoard [5] requires two swipes to enter one letter, the first to select the region of the target letter and the second towards the letter to enter it. DriftBoard [47] is a movable miniature QWERTY with a fixed cursor point. To enter text, users drag the keyboard to position the intended key within the cursor point. Some miniature QWERTY keyboards use powerful statistical decoders to account for the "fat-finger problem" [52]. WatchWriter [13] appropriates a smartphone QWERTY for smartwatches. It supports both predictive tap and gesture typing. Yi et al. [59] use a similar approach with even smaller keyboards (30 and ${35}\mathrm{\;{mm}}$ ). VelociWatch [51] also uses a statistical decoder, but enables users to lock in particular letters of their input to disable potential auto-corrections. Some techniques use variants of the standard QWERTY layout. ForceBoard [19] maps QWERTY to a $3 \times 5$ grid by assigning two letters to each key. Applying different levels of force on the keys enters the respective letters. DualKey [15] uses a similar layout, but requires users to tap with different fingers to disambiguate the input. It uses external hardware to differentiate between the fingers. DiaQWERTY [27] uses diamond-shaped keys to fit QWERTY in a round smartwatch at a 10:7 aspect ratio. Optimal-T9 [39] maps QWERTY to a $3 \times 3$ grid, then disambiguates input using a statistical decoder.
These techniques, however, occupy a large area of the screen real-estate, require multiple actions to enter one letter, or use aggressive prediction models that make the entry of out-of-vocabulary words very difficult, often impossible.

Table 1: Average entry speed (WPM) of popular keyboards for smartwatches from the literature (only the highest reported speed in the last block or session is presented, when applicable), along with the estimated percentage of touchscreen area they occupy.
| Method | Used Device | Screen Occupancy | Entry Speed (WPM) |
| --- | --- | --- | --- |
| Yi et al. [59] | Watch 1.56" | 50% | 33.6 |
| WatchWriter [13] | Watch 1.30" | 85% | 24.0 |
| DualKey [15] | Watch 1.65" | 80% | 21.6 |
| SwipeBoard [5] | Tablet | 45% | 19.6 |
| SplitBoard [18] | Watch 1.63" | 75% | 15.9 |
| ForceBoard [19] | Phone | 67% | 12.5 |
| ZoomBoard [36] | Tablet | 50% | 9.3 |
### 3.2 Other Layouts

There are a few techniques that use different layouts. Dunlop et al. [10] and Komninos and Dunlop [28] map an alphabetical layout to six ambiguous keys, then use a statistical decoder to disambiguate input. The technique enables contextual word suggestions and word completion. QLKP [18] (initially designed for smartphones [20]) maps a QWERTY-like layout to a $3 \times 3$ grid. Similar to multi-tap [9], users tap on a key repeatedly until they get the intended letter. These techniques also occupy a large area of the touchscreen real-estate and require multiple actions to enter one letter.

### 3.3 Circular Layouts

There are some circular keyboards available for smartwatches. InclineType [16] places an alphabetical layout around the edge of the device. To enter a letter, users first select the letter by moving the wrist, then tap on the screen. COMPASS [60] also uses an alphabetical layout, but does not use touch interaction. To enter text, users rotate the bezel to place one of the three available cursors on the desired letter, then press a button on the side of the watch. WrisText [12] is a one-handed technique, with which users enter text by whirling the wrist of the hand towards six directions, each representing a key in a circular keyboard with the letters in alphabetical order. BubbleFlick [49] is a circular keyboard for Japanese text entry. It enables text entry through two actions. Users first touch a key, which partitions the circular area inside the layout into four radial areas for the four kana letters on the key. Users then stroke towards the intended letter to enter it. A commercial product, TouchOne Keyboard Wear [50], divides an alphabetical layout into eight zones to let users enter text using a T9-like [14] approach. It enables the entry of out-of-vocabulary words using an approach similar to BubbleFlick. Go et al.
[11] designed an eyes-free text entry technique that enables users to write EdgeWrite [56] strokes on a smartwatch with the support of auditory feedback. Most of these techniques use a sequence of actions or a statistical decoder to disambiguate the input. Besides, most of them are standalone, hence the skills acquired in using these keyboards are usually not usable on other devices.

Table 2: Average entry speed (WPM) of popular circular keyboards for smartwatches from the literature (only the highest reported speed in the last block or session is presented, when applicable), along with the estimated percentage of touchscreen area they occupy.
| Method | Used Device | Screen Occupancy | Entry Speed (WPM) |
| --- | --- | --- | --- |
| WrisText [12] | Watch 1.4" | 42.99% | 15.2 |
| HiPad [21] | VR | 39.70% | 13.6 |
| COMPASS [60] | Watch 1.2" | 36.07% | 12.5 |
| BubbleFlick [49] | Watch 1.37" | 52.06% | 8.0 |
| C-QWERTY (gesture) [7] | Watch 1.39" | 43.16% | 7.7 |
| Cirrin [24] | PC | 60.57% | 6.4 |
| InclineType [16] | Watch 1.6" | 38.64% | 5.9 |
Cirrin [34] is a pen-based word-level text entry technique for PDAs. It uses a novel circular layout with dedicated keys for each letter. To enter a word, users pass their pen through the intended letters. This approach has also been used on other devices [24]. C-QWERTY is a similar technique [7], but differs in letter arrangement, which is based on the QWERTY layout. To enter a word with C-QWERTY, users either tap on the letters in the word individually or drag the finger over them in sequence. While there are some similarities between Cirrin, C-QWERTY, and SwipeRing, the approach employed in the latter technique is fundamentally different. First, SwipeRing divides the layout into seven zones, thus it does not require precise selection of individual letters, only of much larger zones. Both Cirrin and C-QWERTY, in contrast, use individual keys for each letter, thus require precise selection of the keys. Second, like gesture typing on a smartphone, SwipeRing does not require users to go over the same letter (or the letters in the same zone) repeatedly if they appear in a word multiple times in sequence (such as "oo" in "book"). But both Cirrin and C-QWERTY require users to go over the same letter repeatedly in such cases by sliding the finger out of the keyboard, then sliding back to the key. We found that users also use this strategy to enter consecutive letters whose keys are placed side by side, as such keys are difficult to select in sequence due to their small size. Finally, SwipeRing is optimized to maintain gesture similarities between SwipeRing and a virtual QWERTY to facilitate skill transfer between devices. Cirrin and C-QWERTY do not account for this.

## 4 LAYOUT OPTIMIZATION

SwipeRing maps the standard QWERTY layout to a ring around the edge of a smartwatch (Fig. 1). It places the left- and right-hand keys of QWERTY [35] to the left and right sides of the layout, respectively.
Likewise, the top, home, and bottom row keys of QWERTY are placed at the top, middle, and bottom parts of the layout, respectively. Each letter is positioned at a multiple of ${360}^{ \circ }/{26}$ , an angular step of about ${13.85}^{ \circ }$ , starting with the letter 'q' at ${180}^{ \circ }$ . This design was adopted to maintain a likeness to QWERTY and exploit the widespread familiarity with that keyboard to facilitate learning $\left\lbrack {{20},{39}}\right\rbrack$ . We then grouped the letters into zones to improve the usability of the keyboard by facilitating precise target selection, further discussed in Section 4.3. In practice, the letters can be grouped in numerous different ways, resulting in a set of possible layouts $L$ . However, the purpose here is to identify a particular layout $l \in L$ that ensures that the gestures drawn on the layout $l$ are similar to the ones drawn on a virtual QWERTY. This requires searching for an optimal letter grouping that maximizes gesture similarity. We introduce the following notation to formally define the optimization procedure. Let ${g}_{Q}\left( w\right)$ be the gesture used to enter a word $w$ on the virtual QWERTY and ${g}_{\text{SwipeRing }}\left( {w;l}\right)$ be the gesture used to enter the word $w$ on the layout $l$ of SwipeRing. Instead of maximizing the similarity between the gestures, we can equivalently minimize the discrepancy between them, which we measure using a function $\psi$ . Then, our problem is to find the layout $l$ that minimizes the following loss function $\mathcal{L}$ :

$$
\mathop{\min }\limits_{{\text{layout }l \in L}}\mathcal{L}\left( l\right) = \mathop{\sum }\limits_{{w \in W}}p\left( w\right) \psi \left( {{g}_{Q}\left( w\right) ,{g}_{\text{SwipeRing }}\left( {w;l}\right) }\right) .
\tag{1}
$$

Here, the dissimilarity between the gestures is weighted by the probability of occurrence of the word, to ensure that the gestures for the most frequent words are the most similar. To efficiently optimize this problem, we made several modeling assumptions and simplifications, which are discussed in the following sections.

### 4.1 Gesture Modelling

We model each gesture as a piece-wise linear curve connecting the letters on a virtual QWERTY or the zones on SwipeRing. Therefore, the gesture for a word composed of $n$ letters can be seen as a $2 \times n$ dimensional matrix (Fig. 4), where each column contains the coordinates (x, y) of the corresponding letter. To simulate the gesture drawn on a virtual QWERTY for the word $w$ , denoted ${g}_{Q}\left( w\right)$ , we connect the centers of the corresponding keys of the default Android QWERTY on a Motorola G5 smartphone ${\left( {24}{\mathrm{\;{cm}}}^{2}\text{ keyboard area }\right) }^{1}$ , producing a unique gesture for each word. With SwipeRing, however, we account for the fact that a word can have multiple gestures, forming a set ${G}_{\text{SwipeRing }}\left( {w;l}\right)$ . The zones containing 4-6 letters are wide enough to enable initiating a gesture at the center, left, or right side of the zone (Fig. 5), resulting in multiple possibilities. Therefore, we set the gesture ${g}_{\text{SwipeRing }}\left( {w;l}\right)$ to be the one with the minimal difference from the gesture drawn on the virtual QWERTY, as measured by the discrepancy function $\psi$ :

$$
{g}_{\text{SwipeRing }}\left( {w;l}\right) \mathrel{\text{:=}} \underset{g \in {G}_{\text{SwipeRing }}\left( {w;l}\right) }{\arg \min }\psi \left( {{g}_{Q}\left( w\right) , g}\right) .
\tag{2}
$$

### 4.2 Discrepancy Function

For the discrepancy function $\psi \left( {{g}_{1},{g}_{2}}\right)$ between gestures ${g}_{1}$ and ${g}_{2}$ , our requirement is a rotation- and scale-agnostic measure that attains a value of 0 if and only if ${g}_{2}$ is a rotated and re-scaled version of ${g}_{1}$ . One possible form of such a $\psi$ function can be defined as yet another optimization problem:

$$
\psi \left( {{g}_{1},{g}_{2}}\right) = \mathop{\min }\limits_{{R,\alpha }}{\begin{Vmatrix}{g}_{2} - \alpha R{g}_{1}\end{Vmatrix}}_{F}^{2}, \tag{3}
$$

where $\alpha$ and $R$ are the rescaling factor and the rotation matrix applied to a gesture, respectively, and $\parallel \cdot {\parallel }_{F}$ is the Frobenius norm ${}^{2}$ . We recognize Equation 3 as an Ordinary Procrustes analysis problem, the solution of which is given in closed form by Singular Value Decomposition [8].

${}^{1}$ Since most virtual QWERTY layouts maintain comparable aspect ratios and the gestures only loosely connect the keys, the gestures on different-sized phones, keyboards, and keys are comparable when the recognizer is size agnostic. A Procrustes analysis of the gestures drawn on five different-sized phones with different keyboard and key sizes yielded results between 6 and 19, suggesting they are almost identical.

![01963eaf-a807-7e3b-884c-312e448feea0_3_156_1744_713_261_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_3_156_1744_713_261_0.jpg)

Figure 4: The gesture for the most common word "the" on a virtual QWERTY and the respective $2 \times 3$ dimensional matrix.

![01963eaf-a807-7e3b-884c-312e448feea0_3_1012_150_549_222_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_3_1012_150_549_222_0.jpg)

Figure 5: Gestures on the three-letter zones are likely to be initiated from the center, while gestures on the wider zones (such as a six-letter zone) can be initiated from either the center or the two sides.
Note that the value of $\psi \left( {{g}_{1},{g}_{2}}\right)$ lies in the range $\left\lbrack {0,\infty }\right)$ . Additionally, we restrict the rotation to the range $\pm {45}^{ \circ }$ since SwipeRing gestures that are rotated more than ${45}^{ \circ }$ in either direction are unlikely to look similar to their virtual QWERTY counterparts (Fig. 6).
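For concreteness, Equation 3 can be solved in closed form with a few lines of NumPy. The sketch below is our own illustration, not the authors' implementation (the function name, the clamping strategy for the ±45° restriction, and the tolerance are our assumptions): it recovers the optimal rotation via the SVD of the cross-covariance matrix, clamps the rotation angle, and then computes the optimal scale.

```python
import numpy as np

def procrustes_discrepancy(g1, g2, max_rot_deg=45.0):
    """psi(g1, g2) = min over scale alpha and rotation R of ||g2 - alpha R g1||_F^2,
    with the rotation clamped to +/- max_rot_deg. Gestures are 2 x n matrices."""
    # Optimal rotation (orthogonal Procrustes): SVD of M = g2 @ g1^T.
    U, _, Vt = np.linalg.svd(g2 @ g1.T)
    if np.linalg.det(U @ Vt) < 0:        # force a proper rotation, not a reflection
        U[:, -1] *= -1.0
    R0 = U @ Vt
    # Clamp the recovered rotation angle to the allowed range.
    theta = np.arctan2(R0[1, 0], R0[0, 0])
    limit = np.radians(max_rot_deg)
    theta = np.clip(theta, -limit, limit)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Optimal scale for a fixed rotation: alpha = <g2, R g1> / ||g1||_F^2.
    Rg1 = R @ g1
    alpha = np.sum(g2 * Rg1) / np.sum(g1 * g1)
    return float(np.sum((g2 - alpha * Rg1) ** 2))
```

Under this formulation, a rotated and rescaled copy of a gesture scores (numerically) zero, while genuinely different gestures score strictly above it.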
Best match: $\psi \left( {{g}_{1},{g}_{2}}\right) = {0.86}$ . Average match: $\psi \left( {{g}_{1},{g}_{2}}\right) = {57.49}$ . Worst match: $\psi \left( {{g}_{1},{g}_{2}}\right) = {133.53}$ .
Figure 6: Gesture dissimilarity measures using the Procrustes loss function. The red curve represents the gesture for the word "the" on a virtual QWERTY $\left( {g}_{1}\right)$ , the blue curves represent gestures for the same word on a SwipeRing layout $\left( {g}_{2}\right)$ , and the gray dots show the optimal rotation and rescaling of the gesture $\left( {g}_{1}\right)$ , represented as $\left( {{\alpha R}{g}_{1}}\right)$ , to match $\left( {g}_{2}\right)$ .

### 4.3 Enumeration of All Possible Letter Groupings

The total number of possible letter groupings, and thus layouts, depends on how large we allow the groups to be. To determine this, we conducted a literature review of ambiguous keyboards that use linguistic models for decoding the input, to find out whether the number of letters assigned per key or zone (the level of ambiguity) affects the performance of a keyboard. Table 3 displays an excerpt of our review, where one can see that there is a "somewhat" inverse relationship between the level of ambiguity and entry speed. Keyboards that assign fewer letters per key or zone yield a relatively better entry speed than the ones that have more letters per key or zone. Based on this, we decided to assign 3-6 letters per zone. Although this alone cannot determine the appropriate number of letters per key, since the performance of a keyboard depends on other factors, such as the layout and the reliability of the decoder, it gives a rough estimate.

Next, we enumerated all possible shatterings of the circular string \{qwertyuiophjklmnbvcxzgfdsa\} into substrings of 3-6 letters each, resulting in 4,355 different layouts in total, which constitute our search set $L$ . Each possible shattering, such as \{qwer\}\{tyu\}\{iophjk\}\{lmnbv\}\{cxzg\}\{fdsa\}, represents one possible layout.
We tested several of these layouts on a small smartwatch ( ${9.3}{\mathrm{\;{cm}}}^{2}$ circular display) to investigate whether the zones containing three letters are wide enough for precise target selection. Results showed that the zones range between 29.0 and ${57.5}{\mathrm{\;{mm}}}^{2}$ (lengths between 9.0 and ${18.0}\mathrm{\;{mm}}$ ), which are within the range recommended for target selection on both smartphones $\left\lbrack {{25},{30},{38}}\right\rbrack$ and smartwatches $\left( {{7.0}\mathrm{\;{mm}}}\right) \left\lbrack {10}\right\rbrack$ . Fig. 7 illustrates some of these layouts.

---

${}^{2}$ The Frobenius norm is a generalization of the Euclidean norm to matrices: if $A$ is a matrix, then $\parallel A{\parallel }_{F}^{2} = \mathop{\sum }\limits_{{i, j}}{a}_{ij}^{2}$ .

---

Table 3: Average entry speed of several ambiguous keyboards that map multiple letters to each key or zone.
| Method | Letters per Key | Entry Speed (WPM) |
| --- | --- | --- |
| COMPASS [60] | 3 | 9-13 |
| HiPad [21] | 4-5 | 9.6-11 |
| WrisText [12] | 4-5 | 10 |
| Komninos and Dunlop [10, 28] | 3-6 | 8 |
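The figure of 4,355 candidate layouts can be reproduced by direct enumeration. The sketch below is our own reconstruction (the function name is an assumption): it cuts the circular letter string into consecutive zones of 3-6 letters, identifying each layout by its set of cut positions so that traversals starting at different cuts of the same layout are counted once.

```python
def circular_shatterings(ring, sizes=(3, 4, 5, 6)):
    """Enumerate every split of the circular string `ring` into consecutive
    zones whose lengths are drawn from `sizes`. A layout is identified by its
    set of cut positions, so each layout is reported exactly once."""
    n = len(ring)
    doubled = ring + ring                   # simplifies circular slicing
    seen, layouts = set(), []

    def grow(start, parts):
        total = sum(parts)
        if total == n:                      # the zones close the circle
            cuts = frozenset((start + sum(parts[:i])) % n
                             for i in range(len(parts)))
            if cuts not in seen:
                seen.add(cuts)
                pos, zones = start, []
                for size in parts:
                    zones.append(doubled[pos:pos + size])
                    pos += size
                layouts.append(tuple(zones))
            return
        for size in sizes:
            if total + size <= n:
                grow(start, parts + [size])

    for start in range(n):                  # any cut may serve as the start
        grow(start, [])
    return layouts
```

Running this on the 26-letter circular string of Section 4.3 yields the 4,355 layouts that constitute the search set $L$.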
![01963eaf-a807-7e3b-884c-312e448feea0_4_149_759_736_236_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_4_149_759_736_236_0.jpg)

Figure 7: Gesture typing the word "the" on a virtual QWERTY and three possible SwipeRing layouts. For the virtual QWERTY, the figure shows ${g}_{Q}$ ("the"). For the SwipeRing layouts, the figure shows all possible gestures for "the": ${G}_{\text{SwipeRing }}$ ("the", $l$ ). Notice how the gestures for the same word differ across SwipeRing layouts.

### 4.4 Algorithm

To find the optimal layout, we simulated billions of gestures for the 10,000 most frequent words in the English language [54] on the 4,355 possible segmented SwipeRing layouts ${}^{3}$ . We then matched the gestures produced for each word on each layout with the gestures produced on a virtual QWERTY using Procrustes analysis to pick the layout that yielded the best match score (118.48). The final layout (Fig. 1) scored, on average, a 1.27 times better Procrustes value than the other possible layouts.

Algorithm 1: Search for an optimal layout $l$ .

---

Input: Possible grouping layouts $L = \left\{ {{l}_{1},{l}_{2},\ldots }\right\}$ , word corpus $W = \left\{ {{w}_{1},{w}_{2},\ldots }\right\}$

Function OptLayout(L, W):

  ${\mathcal{L}}_{\min }\leftarrow \infty ,{l}_{\min }\leftarrow \varnothing$

  for layout $l \in L$ do

    $\mathcal{L} \leftarrow 0$

    for word $w \in W$ do

      $\mathcal{L} \leftarrow \mathcal{L} + p\left( w\right) \psi \left( {{g}_{Q}\left( w\right) ,{g}_{\text{SwipeRing }}\left( {w;l}\right) }\right)$

    end

    if $\mathcal{L} \leq {\mathcal{L}}_{\min }$ then

      ${\mathcal{L}}_{\min } \leftarrow \mathcal{L},{l}_{\min } \leftarrow l$

    end

  end

  return ${l}_{\min }$

---

## 5 KEYBOARD FEATURES

This section describes some key features of the proposed keyboard.

### 5.1 Decoder

We developed a simple decoder to suggest corrections and to display the most probable words in a suggestion bar.
For the decoder, we used a combination of a character-level language model and a word-level bigram model for next-word prediction. To this end, we calculate the conditional probability of the user typing the word $w$ given that the previous word was ${w}_{n-1}$ and the current zone sequence is $s$:

$$
P\left( {w}_{n} = w \mid s, {w}_{n-1}\right) = \frac{P\left( {w}_{n} = w, s, {w}_{n-1}\right)}{P\left( s, {w}_{n-1}\right)} \tag{4}
$$

$$
= \frac{\operatorname{count}\left( {w}_{n} = w, {w}_{n-1}\right) \operatorname{match}\left( M\left( w\right), s\right)}{\sum_{{w}^{\prime}} \operatorname{count}\left( {w}_{n} = {w}^{\prime}, {w}_{n-1}\right) \operatorname{match}\left( M\left( {w}^{\prime}\right), s\right)}.
$$

Here, $M\left( w\right)$ is the sequence of zones that the user must gesture over to enter the word $w$ with SwipeRing, $\operatorname{match}\left( {s}_{1}, {s}_{2}\right)$ is the indicator function that returns 1 if ${s}_{2}$ is a prefix of ${s}_{1}$ and 0 otherwise, and $\operatorname{count}\left( {w}_{n}, {w}_{n-1}\right)$ is the number of occurrences of the bigram $\left( {w}_{n}, {w}_{n-1}\right)$ in the training corpus.

To predict the most probable word for a given zone sequence $s$ and previous word ${w}_{n-1}$, we compute $\arg\max_{w} P\left( {w}_{n} = w \mid s, {w}_{n-1}\right)$ using a prefix tree (Trie) data structure. This implementation can output the $k$ most probable words, which we display in the suggestion bar. When no word has been typed yet, we use a unigram reduction of the model; otherwise, we use the bigram model trained on the COCA corpus [6]. Due to the limited memory capacity of the smartwatch, the Trie uses only the 1,300 most probable bigrams: bigram models scale as the square of the number of words and thus quickly outgrow the available memory on the device.
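Since the denominator of Eq. 4 is the same for every candidate $w$, ranking only needs the numerator. A small sketch of this scoring, with plain dictionaries standing in for the Trie (the bigram table in the usage below is illustrative, not the COCA counts):

```python
def match(m_w, s):
    """Indicator from Eq. 4: 1 if zone sequence s is a prefix of M(w),
    0 otherwise."""
    return 1 if m_w[:len(s)] == s else 0

def rank_words(s, w_prev, bigram_counts, zone_seq, k=10):
    """Score each candidate w by count(w, w_prev) * match(M(w), s),
    the numerator of Eq. 4, and return up to k best matches."""
    scores = {}
    for (w, prev), c in bigram_counts.items():
        if prev == w_prev:
            scores[w] = c * match(zone_seq[w], s)
    ranked = sorted((w for w in scores if scores[w] > 0),
                    key=scores.get, reverse=True)
    return ranked[:k]
```

A production Trie walks only the candidates whose zone sequence extends $s$, rather than scanning the whole table as this sketch does.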
If the Trie does not have a bigram containing the user's previous word ${w}_{n-1}$, we revert to the unigram predictions. Our language model is fairly simple, and more advanced models (involving neural nets, for instance) could be used. However, devising efficient language models for new keyboards is a research problem on its own and beyond the scope of this paper.

After obtaining the list of the most probable words, SwipeRing places up to 10 most probable words in the suggestion bar, automatically positioned in close proximity to the input area (Fig. 1). The suggestion bar automatically updates as the user continues gesturing. Once the gesture is complete, the most probable word from the list is entered. The user can select a different word from the suggestion bar by tapping on it, which replaces the last entered word. Although the user can only see 2-4 words in the suggestion bar due to the smaller screen, she can swipe left and right on the bar to see the remaining words.

### 5.2 One-Letter and Out-of-Vocabulary (OOV) Words

SwipeRing enables the entry of one-letter and out-of-vocabulary words through repeated taps or strokes from/to the zones containing the target letters. The keyboard first enters the two one-letter words in the English language, "a" and "I", then the other letters in the sequence in which they appear in the zones, like multi-tap [9]. For instance, to enter the letter 'e', which is the third letter in the top-right zone containing 'q', 'w', 'e', and 'r' (Fig. 1), the user taps or slides the finger three times from the middle area to the zone or from the edge to the middle area (Fig. 8).

### 5.3 Error Correction and Special Characters

SwipeRing automatically enters a space when a word is predicted or manually selected from the suggestion bar. During character-level text entry (to enter out-of-vocabulary words), users enter a space by performing a right stroke inside the empty area of the keyboard.
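The repeated-tap scheme of Section 5.2 amounts to a small lookup: the one-letter words come first in their zone, then the remaining letters in zone order. A sketch under that assumption (the zone strings in the usage are illustrative, not the actual Fig. 1 layout):

```python
def tap_count(letter, zones):
    """Number of repeated taps/strokes needed to enter `letter` in
    character mode (Section 5.2): the one-letter words 'a' and 'i'
    come first in their zone, then the remaining letters in the
    order they appear in the zone."""
    for zone in zones:
        if letter in zone:
            order = [c for c in "ai" if c in zone]        # one-letter words first
            order += [c for c in zone if c not in "ai"]   # then zone order
            return order.index(letter) + 1
    raise ValueError("letter %r not on the layout" % letter)
```

For example, 'e' as the third letter of a "qwer" zone needs three taps, matching the example in Section 5.2, while 'a' always needs a single tap in whichever zone contains it.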
Tapping on the transcribed text deletes the last entry, which could be either a word or a letter. The keyboard performs a carriage return or an enter action when the user double-taps on the screen. Currently, SwipeRing does not support the entry of uppercase letters, special symbols, numbers, and languages other than English. However, support for these could easily be added by enabling the user to long-press or dwell on the screen or the zones to switch back and forth between the cases and to change the layout for digits and symbols. Note that evaluating novel text entry techniques without support for numeric and special characters is common practice since it eliminates a potential confound [33].

---

${}^{3}$ We only used words that had more than one letter; there were 9,828 such words in the corpus.

---

![01963eaf-a807-7e3b-884c-312e448feea0_5_256_149_507_240_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_5_256_149_507_240_0.jpg)

Figure 8: SwipeRing enables users to enter one-letter and out-of-vocabulary words by repeated strokes from/to the zones containing the target letters, like multi-tap (right). Users could also repeatedly tap on the zones (instead of strokes) to enter the letters (left).

## 6 User Study

We conducted a user study to compare SwipeRing with C-QWERTY. C-QWERTY uses almost the same layout as SwipeRing but places 'g' at the NE corner, while SwipeRing places it at the SW corner (the left side of the layout, since 'g' on QWERTY is usually pressed with the left hand). Both layouts share the design goal of maintaining a similarity to QWERTY by using the touch-typing metaphor of physical keyboards (keys assigned to different hands). This likely resulted in similar (but nonidentical) layouts. Studies showed that using a physical analogy/metaphor like this enables novices to learn a method faster through skill transfer [32, pp. 255-263].
Besides, C-QWERTY does not divide the keys into zones or optimize them for gesture typing and skill transfer, and its gesture drawing mechanism also differs from SwipeRing's (Section 3.3). Hence, a comparison between the two highlights the performance difference due to the contributions of this work.

### 6.1 Apparatus

We used an LG Watch Style smartwatch (42.3 × 45.7 × 10.8 mm, 9.3 cm² circular display, 46 grams) running Wear OS at 360 × 360 pixels in the study (Fig. 9). We decided to use a circular watch since it is the most popular shape for (smart)watches [22, 26]. We developed SwipeRing with Android Studio 3.4.2, SDK 28. We obtained the original source code of C-QWERTY from Costagliola et al. [7], which was also developed for Wear OS. Both applications calculated all performance metrics directly and logged all interactions with timestamps.

### 6.2 Design

We used a between-subjects design to avoid interference between the conditions. Since both techniques use similar layouts, the skill acquired while learning one technique would have affected performance with the other technique [32]. There were separate groups of twelve participants for C-QWERTY and SwipeRing. Each group used the respective technique to enter short English phrases in eight blocks. Each block contained 10 random phrases from a set [33]. Hence, the design was as follows:

2 groups (C-QWERTY and SwipeRing) ×
12 participants ×
8 blocks ×
10 random phrases = 1,920 phrases in total.

Table 4: Demographics of the C-QWERTY study. YoE = years of experience.
| | C-QWERTY group |
|---|---|
| Age | 21-34 years (M = 25.8, SD = 3.92) |
| Gender | 3 female, 9 male |
| Handedness | 11 right, 1 left |
| Owner of smartwatches | 5 (M = 0.8 YoE, SD = 1.4) |
| Experienced gesture typists | 3 (M = 4.7 YoE, SD = 2.5) |
Table 5: Demographics of the SwipeRing study. YoE = years of experience.
| | SwipeRing group |
|---|---|
| Age | 21-28 years (M = 24.8, SD = 2.33) |
| Gender | 4 female, 8 male |
| Handedness | 10 right, 1 ambidextrous, 1 left |
| Owner of smartwatches | 6 (M = 1.2 YoE, SD = 0.9) |
| Experienced gesture typists | 3 (M = 2.7 YoE, SD = 0.9) |
### 6.3 Participants

Twenty-four participants took part in the user study, divided into two groups. Tables 4 and 5 present the demographics of these groups. Almost all participants chose to wear the smartwatch on their left hand and perform the gestures using the index finger of the right hand (Fig. 9). All participants were proficient in the English language. In both groups, three participants identified themselves as experienced gesture typists. However, none of them used the method predominantly; instead, they frequently switched between tap typing and gesture typing for text entry. The remaining participants never or very rarely used gesture typing on their devices. Initially, we wanted to recruit more experienced gesture typists to compare the performance of inexperienced and experienced users and to investigate whether the gesture typing skill acquired on mobile devices transferred to SwipeRing. However, we were unable to recruit additional experienced gesture typists after months of trying. This strengthens our argument that gesture typing is still not a dominant method of text entry, despite being much faster than tap typing [29], and that using SwipeRing may encourage some users to apply the acquired skill on mobile devices. All participants received a small compensation for participating in the study.

![01963eaf-a807-7e3b-884c-312e448feea0_5_926_1393_720_179_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_5_926_1393_720_179_0.jpg)

Figure 9: The device with C-QWERTY and a participant volunteering in the study over Zoom (left). The device with SwipeRing and a volunteer participating in the study (right).

### 6.4 Performance Metrics

We calculated the conventional words per minute (WPM) [2] and total error rate (TER) performance metrics to measure the speed and accuracy of the keyboard, respectively.
TER [48] is a commonly used error metric in text entry research that measures the ratio of the total number of incorrect and corrected characters to the total number of correct, incorrect, and corrected characters in the transcribed text. We also calculated the actions per word metric, which signifies the average number of actions performed to enter one word. An action could be a gesture performed to enter a word, a tap on the suggestion bar, or a gesture to delete an unwanted word or letter.

### 6.5 Procedure

The study was conducted in a quiet room, one participant at a time. First, we introduced the keyboards to all participants, explained the study procedure, and collected their consent. We then asked them to complete a short demographics and mobile usage questionnaire. We instructed participants to sit on a chair, wear the smartwatch on their preferred hand, and practice with the keyboard they were assigned to by transcribing two short phrases. These practice phrases were not included in the main study. Interestingly, all participants decided to wear the smartwatch on their left hand and perform the gestures using the index finger of the other hand. The actual study started after that. There were eight blocks in each condition, with a 5-10 minute gap between the blocks. In each block, participants transcribed ten random short English phrases from a set [33] using either C-QWERTY Gesture or SwipeRing. Both applications presented one phrase at a time at the bottom of the smartwatch (Fig. 9). Participants were instructed to read, understand, and memorize the phrase, transcribe it "as fast and accurate as possible", then double-tap on the touchscreen to see the next phrase. The transcribed text was displayed at the top of the smartwatch. Error correction was recommended but not forced.
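The WPM and TER metrics of Section 6.4 follow the standard text entry definitions [2, 48]; a minimal sketch (the function and parameter names are ours):

```python
def wpm(transcribed, seconds):
    """Words per minute [2]: (|T| - 1) characters over the trial time,
    scaled to a minute, at the conventional 5 characters per word."""
    return (len(transcribed) - 1) / seconds * (60.0 / 5.0)

def ter(correct, incorrect, corrected):
    """Total error rate [48], in percent: incorrect (uncorrected) plus
    corrected characters over all correct, incorrect, and corrected
    characters in the transcribed text."""
    return 100.0 * (incorrect + corrected) / (correct + incorrect + corrected)
```

Unlike the character error rate (CER), TER also charges for errors the participant fixed, which is why it reads higher than CER on the same logs.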
After the study, all participants completed a short post-study questionnaire that asked them to rate various aspects of the keyboard on a 7-point Likert scale. It also enabled participants to comment and give feedback on the examined keyboards.

Due to the spread of COVID-19, the C-QWERTY group participated in the study via Zoom, a teleconference application. We personally delivered the smartwatch to each participant's mailbox and scheduled individual online sessions with them. They were instructed to join the session from a quiet room. All forms were completed and signed electronically. Apart from that, an online session followed the same structure as a physical session. A researcher observed and recorded each complete study session. We picked up the devices after the study. The device, the charger, and the container were disinfected before delivery and after pickup.

### 6.6 Results

A Shapiro-Wilk test revealed that the response variable residuals were normally distributed. A Mauchly's test indicated that the variances of the populations were equal. Hence, we used a mixed-design ANOVA with one between-subjects factor (technique) and one within-subjects factor (block). We used a Mann-Whitney U test to compare user ratings of various aspects of the two techniques.

#### 6.6.1 Entry Speed

An ANOVA identified a significant effect of technique on entry speed $\left( {F}_{1,22} = 25.05, p < .0001\right)$. There was also a significant effect of block $\left( {F}_{7,22} = 63.65, p < .0001\right)$. The technique $\times$ block interaction effect was also statistically significant $\left( {F}_{7,154} = 4.02, p < .0005\right)$. Fig. 10 (top) illustrates the average entry speed for both techniques in each block, fitted to a function to model the power law of practice [4]. In the last block, the average entry speeds with C-QWERTY and SwipeRing were 11.20 WPM (SD = 3.0) and 16.67 WPM (SD = 5.36), respectively.
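The per-block trendlines fit the power law of practice, $y = a \cdot x^{b}$ [4], which reduces to linear least squares after a log-log transform; a self-contained sketch (the synthetic data in the test is ours, not the study's):

```python
import math

def fit_power_law(blocks, values):
    """Fit y = a * x**b by least squares on (log x, log y) and return
    (a, b, r_squared), as used for the per-block trendlines."""
    xs = [math.log(x) for x in blocks]
    ys = [math.log(y) for y in values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx            # slope in log-log space = exponent b
    la = my - b * mx         # intercept = log(a)
    # R^2 of the log-log regression
    ss_res = sum((y - (la + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return math.exp(la), b, 1.0 - ss_res / ss_tot
```

The reported $R^{2}$ values in Sections 6.6 and 7 are of exactly this kind: goodness of fit of the block-wise averages to the fitted power trendline.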
Nine users of SwipeRing yielded a much higher entry speed than the maximum entry speed reached with C-QWERTY, as illustrated in Fig. 10 (bottom). The highest average entry speed in the last block was 21.53 WPM (P23, an inexperienced gesture typist).

#### 6.6.2 Error Rate

An ANOVA identified a significant effect of technique on error rate $\left( {F}_{1,22} = 24.61, p < .0001\right)$. There was also a significant effect of block $\left( {F}_{7,22} = 2.89, p < .01\right)$. However, the technique $\times$ block interaction effect was not significant $\left( {F}_{7,154} = 1.01, p > .05\right)$. Fig. 11 (top) illustrates the average error rate for both techniques in each block, fitted to a function to model the power law of practice [4]. In the last block, the average error rates with C-QWERTY and SwipeRing were 12.52% (SD = 13.91) and 5.56% (SD = 8.53), respectively.

![01963eaf-a807-7e3b-884c-312e448feea0_6_939_156_695_784_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_6_939_156_695_784_0.jpg)

Figure 10: Average entry speed (WPM) per block fitted to a power trendline (top). The SwipeRing group surpassed the C-QWERTY group's maximum entry speed by the third block. Note the scale on the vertical axis. Average entry speed (WPM) with the two techniques for each participant in the final block (bottom).

#### 6.6.3 Actions per Word

An ANOVA identified a significant effect of technique on actions per word $\left( {F}_{1,22} = 10.31, p < .005\right)$. There was also a significant effect of block $\left( {F}_{7,22} = 3.14, p < .005\right)$. However, the technique $\times$ block interaction effect was not significant $\left( {F}_{7,154} = 0.61, p > .05\right)$. Fig. 11 (bottom) illustrates the average actions per word for both techniques in each block, fitted to a function to model the power law of practice [4].
In the last block, the average actions per word with C-QWERTY and SwipeRing were 2.45 (SD = 1.64) and 1.59 (SD = 0.72), respectively.

### 6.7 User Feedback

A Mann-Whitney U test identified a significant effect of technique on willingness to use $\left( U = 21.0, Z = -3.1, p < .005\right)$, perceived speed $\left( U = 22.5, Z = -3.07, p < .005\right)$, and perceived accuracy $\left( U = 27.0, Z = -2.72, p < .01\right)$. However, there was no significant effect on ease of use $\left( U = 48.0, Z = -1.5, p > .05\right)$ or learnability $\left( U = 66.0, Z = -0.37, p > .05\right)$. Fig. 12 illustrates median user ratings of all investigated aspects of the two keyboards on a 7-point Likert scale.

![01963eaf-a807-7e3b-884c-312e448feea0_7_162_150_694_870_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_7_162_150_694_870_0.jpg)

Figure 11: Average total error rate (TER) (top) and actions per word (APW) (bottom) in each block fitted to a power trendline. Note the scale on the vertical axis.

## 7 DISCUSSION

SwipeRing reached a competitive entry speed in only eight short blocks. It was 33% faster than C-QWERTY. The average entry speeds with C-QWERTY and SwipeRing were 11.20 WPM and 16.67 WPM, respectively. Four participants reached over 20 WPM with SwipeRing (Fig. 10, bottom). Further, the SwipeRing group surpassed the C-QWERTY group's maximum entry speed by the third block (Fig. 10, top). It also performed better than all popular circular text entry techniques (Table 2) and some QWERTY-based techniques (Table 1) for smartwatches. Yi et al. [59] and WatchWriter [13] reported much higher entry speeds than SwipeRing. Both techniques use aggressive statistical models with a miniature QWERTY to account for frequent incorrect target selection due to the smaller key sizes (the "fat-finger problem" [52]). This makes entering out-of-vocabulary words difficult with these techniques.
In fact, the former technique does not include a mechanism for entering out-of-vocabulary words [59, p. 58]. DualKey [15] and SwipeBoard [5] also reported higher entry speeds than SwipeRing. However, DualKey depends on external hardware to distinguish between different fingers and has a steep learning curve (the reported entry speed was achieved in the 15th session). SwipeBoard, on the other hand, was evaluated on a tablet computer, so it is unclear whether the reported entry speed can be maintained on an actual smartwatch. Besides, all of these keyboards occupy about 45-85% of the screen real-estate, leaving little room for displaying the entered text, let alone additional information. There was a significant effect of block and technique $\times$ block on entry speed. Entry speed increased by 38% with C-QWERTY and 43% with SwipeRing in the last block compared to the first. The average entry speed over blocks for both techniques correlated well with the power law of practice [4] $\left( {R}^{2} = 0.9588\right)$. However, the learning curve for C-QWERTY was flattening out by the last block, while SwipeRing's was still climbing. Analysis revealed that entry speed improved by 2% with C-QWERTY and 13% with SwipeRing in the last block compared to the second-to-last. This suggests that SwipeRing did not reach its highest possible speed in the study. Relevantly, the highest entry speed recorded in the study was 33.18 WPM (P23, Block 6).

There was a significant effect of technique on error rate. SwipeRing was significantly more accurate than C-QWERTY (Fig. 11, top).
![01963eaf-a807-7e3b-884c-312e448feea0_7_943_160_696_424_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_7_943_160_696_424_0.jpg)

Figure 12: Median user ratings of the willingness to use, ease of use, learnability, perceived speed, and perceived accuracy of SwipeRing and C-QWERTY on a 7-point Likert scale, where "1" to "7" represented "Strongly Disagree" to "Strongly Agree". The error bars signify $\pm 1$ standard deviation (SD).

The average error rates with C-QWERTY and SwipeRing were 12.52% and 5.56%, respectively, in the last block (56% fewer errors with SwipeRing). This is unsurprising since the designers of C-QWERTY also reported a high error rate with the technique (20.6%) using the same TER metric [7], which accounts for both corrected and uncorrected errors in the transcribed text [48]. Most text entry techniques for smartwatches report the character error rate (CER), which only accounts for the uncorrected errors in the transcribed text [2]. Most errors with C-QWERTY were committed due to incorrect target selection since the keys were too small. SwipeRing yielded a lower error rate due to the larger zones, which were designed to accommodate precise target selection. There was a significant effect of block on error rate. Participants committed 13% fewer errors with C-QWERTY and 29% fewer errors with SwipeRing in the last block compared to the first. The average error rate over blocks correlated moderately for C-QWERTY $\left( {R}^{2} = 0.5895\right)$ but well for SwipeRing $\left( {R}^{2} = 0.8706\right)$ with the power law of practice [4]. Hence, it is likely that SwipeRing will become much more accurate with practice. Actions per word followed a similar pattern to error rate. SwipeRing consistently required fewer actions to enter words than C-QWERTY (Fig. 11, bottom). C-QWERTY and SwipeRing required on average 2.45 and 1.59 actions per word in the last block, respectively (35% fewer actions with SwipeRing).
This is mainly because participants performed fewer corrective actions with SwipeRing than with C-QWERTY. There was also a significant effect of block. The average actions per word over blocks correlated well for SwipeRing $\left( {R}^{2} = 0.8459\right)$ but not for C-QWERTY $\left( {R}^{2} = 0.4999\right)$ with the power law of practice [4]. This suggests that actions per word with SwipeRing is likely to improve further with practice.

Qualitative results revealed that the SwipeRing group found the examined technique faster and more accurate than the C-QWERTY group (Fig. 12). These differences were statistically significant. Consequently, the SwipeRing group was significantly more interested in using the technique on their devices than the C-QWERTY group. However, both techniques were rated comparably on ease of use and learnability, which is unsurprising since both techniques used similar layouts.

We compared the performance of C-QWERTY in our study with the results from the literature to find out whether conducting the study remotely affected its performance. Costagliola et al. [7] reported a 7.7 WPM entry speed with a 20.6% error rate on a slightly larger smartwatch using the same phrase set in a single block containing 6 phrases. In our study, C-QWERTY yielded a comparable 7 WPM and a 16.8% error rate in the first block containing 10 phrases.

![01963eaf-a807-7e3b-884c-312e448feea0_8_161_150_695_564_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_8_161_150_695_564_0.jpg)

Figure 13: Average entry speed (WPM) per block for the two user groups with the two techniques fitted to power trendlines. Note the scale on the vertical axis.

### 7.1 Skill Transfer from Virtual QWERTY

Learning occurred with both experienced and inexperienced gesture typists with both techniques.
The average entry speed over blocks correlated well with the power law of practice for C-QWERTY with both user groups (experienced: ${R}^{2} = 0.8142$, inexperienced: ${R}^{2} = 0.9547$), and also for SwipeRing (experienced: ${R}^{2} = 0.8732$, inexperienced: ${R}^{2} = 0.9702$). Although there were not enough data points to run statistical tests, average performance over blocks suggests that experienced participants were performing much better with both techniques from the start. Fig. 13 shows that experienced participants consistently performed better than inexperienced participants. This suggests that experienced participants were able to transfer their smartphone gesture typing skills to both techniques. However, with C-QWERTY, inexperienced participants almost caught up with the experienced participants by the last block, while with SwipeRing, both user groups were learning at comparable rates in all blocks. Besides, both experienced and inexperienced participants consistently performed better with SwipeRing than with C-QWERTY. These results point to the possibility that optimizing the zones for gesture similarities facilitated a higher rate of skill transfer. As blocks progressed, experienced participants most probably became more confident, consciously or subconsciously, in applying their gesture typing skills to SwipeRing.

Interestingly, the error rate and actions per word patterns were quite different from the patterns observed in entry speed. With SwipeRing, experienced participants were consistently better than inexperienced users, while inexperienced participants were learning to be more accurate. The average error rate over blocks correlated well with the power law of practice [4] for inexperienced participants $\left( {R}^{2} = 0.7019\right)$ but not for experienced participants $\left( {R}^{2} = 0.3329\right)$.
We speculate this is because experienced participants made fewer errors than inexperienced participants, which required performing fewer corrective actions (a phenomenon reported in the literature [3]). In contrast, with C-QWERTY, experienced participants committed more errors, requiring more corrective actions. In Fig. 14, one can see that experienced participants' error rates and actions per word went up and down in alternating blocks. We do not have a definite explanation for this, but based on user comments we speculate that experienced participants were trying to apply their gesture typing skills to C-QWERTY, committing more errors due to the smaller target size, and then reduced their speed in the following block to increase accuracy. This process continued until the end of the study. In contrast, actions per word continued improving for both experienced $\left( {R}^{2} = 0.7171\right)$ and inexperienced $\left( {R}^{2} = 0.8012\right)$ participants with SwipeRing. This suggests that optimizing the zones for precise target selection facilitated a higher rate of skill acquisition.

![01963eaf-a807-7e3b-884c-312e448feea0_8_934_148_700_1134_0.jpg](images/01963eaf-a807-7e3b-884c-312e448feea0_8_934_148_700_1134_0.jpg)

Figure 14: Average error rate (TER) (top) and average actions per word (APW) (bottom) per block for the two user groups with the two techniques fitted to power trendlines. Note the scale on the vertical axis.

## 8 CONCLUSION

We presented SwipeRing, a circular keyboard arranged around the bezel of a smartwatch to enable gesture typing with the support of a statistical decoder. It also enables character-based text entry using a multi-tap-like approach. It divides the layout into seven zones and maintains a resemblance to the standard QWERTY layout. Unlike most existing solutions, it does not occupy most of the touchscreen real-estate or require repeated actions to enter most words.
Yet, it employs the whole screen for drawing gestures, which is more comfortable than drawing gestures on a miniature QWERTY. The keyboard is optimized for target selection and for maintaining similarities between the gestures drawn on a smartwatch and on a virtual QWERTY to facilitate skill transfer between devices. We compared the technique with C-QWERTY, a similar technique that uses almost the same layout to enable gesture typing but does not divide the keyboard into zones or optimize the zones for target selection and gesture similarity. In the study, SwipeRing yielded a 33% higher entry speed, a 56% lower error rate, and 35% fewer actions per word than C-QWERTY in the last block. The average entry speed with SwipeRing was 16.67 WPM, faster than all popular circular and most QWERTY-based text entry techniques for smartwatches. Results point to the possibility that skilled gesture typists were able to transfer their skills to SwipeRing. Besides, participants found the keyboard easy to learn, easy to use, fast, and accurate, and thus wanted to continue using it on smartwatches.

### 8.1 Future Work

The proposed keyboard could be useful for saving touchscreen real-estate on larger devices, such as smartphones and tablets. The keyboard could appear on the screen like a floating widget, where users perform gestures to enter text. We will explore this possibility. We will also explore SwipeRing in virtual and augmented reality using a smartwatch or different types of controllers. Finally, we will investigate the possibility of eyes-free text entry with SwipeRing. We speculate that, once the positions of the zones are learned, users will be able to perform the gestures without visual aid. This can make the whole touchscreen available to display additional information by making the keyboard invisible.

## REFERENCES

[1] A. S. Arif and A. Mazalek. A Survey of Text Entry Techniques for Smartwatches. In M. Kurosu, ed., Human-Computer Interaction.
Interaction Platforms and Techniques, Lecture Notes in Computer Science, pp. 255-267. Springer International Publishing, Cham, 2016. doi: 10.1007/978-3-319-39516-6_24

[2] A. S. Arif and W. Stuerzlinger. Analysis of Text Entry Performance Metrics. In 2009 IEEE Toronto International Conference Science and Technology for Humanity (TIC-STH), pp. 100-105, Sept. 2009. doi: 10.1109/TIC-STH.2009.5444533

[3] A. S. Arif and W. Stuerzlinger. User Adaptation to a Faulty Unistroke-Based Text Entry Technique by Switching to an Alternative Gesture Set. In Proceedings of Graphics Interface 2014, GI '14, pp. 183-192. Canadian Information Processing Society, Toronto, Ont., Canada, 2014.

[4] S. K. Card, T. P. Moran, and A. Newell. The Psychology of Human-Computer Interaction. CRC Press, 1983.

[5] X. A. Chen, T. Grossman, and G. Fitzmaurice. Swipeboard: A Text Entry Technique for Ultra-Small Interfaces That Supports Novice to Expert Transitions. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST '14, pp. 615-620. ACM, Oct. 2014. doi: 10.1145/2642918.2647354

[6] COCA. N-Grams: Based on 520 Million Word COCA Corpus, Dec. 2019.

[7] G. Costagliola, M. D. Rosa, R. D'Arco, S. D. Gregorio, V. Fuccella, and D. Lupo. C-QWERTY: A Text Entry Method for Circular Smartwatches. pp. 51-57, July 2019. doi: 10.18293/DMSVIVA2019-014

[8] I. L. Dryden. Shape Analysis. In Wiley StatsRef: Statistics Reference Online. American Cancer Society, 2014. doi: 10.1002/9781118445112.stat05087

[9] M. D. Dunlop and A. Crossan. Predictive Text Entry Methods for Mobile Phones. Personal Technologies, 4(2):134-143, June 2000. doi: 10.1007/BF01324120

[10] M. D. Dunlop, A. Komninos, and N. Durga. Towards High Quality Text Entry on Smartwatches. In CHI '14 Extended Abstracts on Human Factors in Computing Systems, CHI EA '14, pp. 2365-2370. Association for Computing Machinery, Toronto, Ontario, Canada, Apr. 2014.
doi: 10.1145/2559206.2581319

[11] K. Go, M. Kikawa, Y. Kinoshita, and X. Mao. Eyes-Free Text Entry with EdgeWrite Alphabets for Round-Face Smartwatches. In 2019 International Conference on Cyberworlds (CW), pp. 183-186, Oct. 2019. doi: 10.1109/CW.2019.00037

[12] J. Gong, Z. Xu, Q. Guo, T. Seyed, X. A. Chen, X. Bi, and X.-D. Yang. WrisText: One-Handed Text Entry on Smartwatch Using Wrist Gestures. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 181:1-181:14. ACM, New York, NY, USA, 2018. doi: 10.1145/3173574.3173755

[13] M. Gordon, T. Ouyang, and S. Zhai. WatchWriter: Tap and Gesture Typing on a Smartwatch Miniature Keyboard with Statistical Decoding. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pp. 3817-3821. ACM, New York, NY, USA, 2016. doi: 10.1145/2858036.2858242

[14] D. L. Grover, M. T. King, and C. A. Kushler. Reduced Keyboard Disambiguating Computer, Oct. 1998.

[15] A. Gupta and R. Balakrishnan. DualKey: Miniature Screen Text Entry via Finger Identification. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pp. 59-70. Association for Computing Machinery, San Jose, California, USA, May 2016. doi: 10.1145/2858036.2858052

[16] T. Götzelmann and P.-P. Vázquez. InclineType: An Accelerometer-Based Typing Approach for Smartwatches. In Proceedings of the XVI International Conference on Human Computer Interaction, Interacción '15, pp. 1-4. Association for Computing Machinery, Vilanova i la Geltru, Spain, Sept. 2015. doi: 10.1145/2829875.2829929

[17] R. L. Hershman and W. A. Hillix. Data Processing in Typing: Typing Rate as a Function of Kind of Material and Amount Exposed. Human Factors, 7(5):483-492, Oct. 1965. doi: 10.1177/001872086500700507

[18] J. Hong, S. Heo, P. Isokoski, and G. Lee. SplitBoard: A Simple Split Soft Keyboard for Wristwatch-Sized Touch Screens.
In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 1233-1236. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702273 + +[19] M.-C. Hsiu, D.-Y. Huang, C. A. Chen, Y.-C. Lin, Y.-p. Hung, D.-N. Yang, and M. Chen. Forceboard: Using Force as Input Technique on Size-Limited Soft Keyboard. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, MobileHCI '16, pp. 599-604. Association for Computing Machinery, Florence, Italy, Sept. 2016. doi: 10.1145/2957265.2961827 + +[20] S. Hwang and G. Lee. Qwerty-Like 3x4 Keypad Layouts for Mobile Phone. In CHI '05 Extended Abstracts on Human Factors in Computing Systems, CHI EA '05, pp. 1479-1482. Association for Computing Machinery, Portland, OR, USA, Apr. 2005. doi: 10.1145/1056808.1056946 + +[21] H. Jiang. HiPad: Text Entry for Head-Mounted Displays Using Circular Touchpad. p. 12. + +[22] Y. Jung, S. Kim, and B. Choi. Consumer Valuation of the Wearables: The Case of Smartwatches. Computers in Human Behavior, 63:899-905, 2016. doi: 10.1016/j.chb.2016.06.040 + +[23] T. Kamba, S. A. Elson, T. Harpold, T. Stamper, and P. Sukaviriya. Using Small Screen Space More Efficiently. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Common Ground - CHI '96, pp. 383-390. ACM Press, Vancouver, British Columbia, Canada, 1996. doi: 10.1145/238386.238582 + +[24] K. Katsuragawa, J. R. Wallace, and E. Lank. Gestural Text Input Using a Smartwatch. In Proceedings of the International Working Conference on Advanced Visual Interfaces, AVI '16, pp. 220-223. Association for Computing Machinery, Bari, Italy, June 2016. doi: 10.1145/2909132.2909273 + +[25] J. H. Kim, L. S. Aulck, O. Thamsuwan, M. C. Bartha, C. A. Harper, and P. W. Johnson. The Effects of Touch Screen Virtual Keyboard Key Sizes on Typing Performance, Typing Biomechanics and Muscle Activity. In V. G.
Duffy, ed., Digital Human Modeling and Applications in Health, Safety, Ergonomics, and Risk Management. Human Body Modeling and Ergonomics, Lecture Notes in Computer Science, pp. 239-244. Springer, Berlin, Heidelberg, 2013. doi: 10.1007/978-3-642-39182-8_28 + +[26] K. J. Kim. Shape and Size Matter for Smartwatches: Effects of Screen Shape, Screen Size, and Presentation Mode in Wearable Communication. Journal of Computer-Mediated Communication, 22(3):124-140, 2017. + +[27] S. Kim, S. Ahn, and G. Lee. DiaQwerty: QWERTY Variants to Better Utilize the Screen Area of a Round or Square Smartwatch. In Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces - ISS '18, pp. 147-153. ACM Press, Tokyo, Japan, 2018. doi: 10.1145/3279778.3279792 + +[28] A. Komninos and M. Dunlop. Text Input on a Smart Watch. IEEE Pervasive Computing, 13(4):50-58, Oct. 2014. doi: 10.1109/MPRV.2014.77 + +[29] P. O. Kristensson and K. Vertanen. The Inviscid Text Entry Rate and Its Application as a Grand Goal for Mobile Text Entry. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services, MobileHCI '14, pp. 335-338. Association for Computing Machinery, Toronto, ON, Canada, Sept. 2014. doi: 10.1145/2628363.2628405 + +[30] S. Kwon, D. Lee, and M. K. Chung. Effect of Key Size and Activation Area on the Performance of a Regional Error Correction Method in a Touch-Screen Qwerty Keyboard. International Journal of Industrial Ergonomics, 39(5):888-893, Sept. 2009. doi: 10.1016/j.ergon.2009.02.013 + +[31] P. Lamkin. Smart Wearables Market to Double by 2022: \$27 Billion Industry Forecast, Oct. 2018. + +[32] I. S. MacKenzie. Human-Computer Interaction: An Empirical Research Perspective. Morgan Kaufmann, Amsterdam, first ed., 2013. + +[33] I. S. MacKenzie and R. W. Soukoreff. Phrase Sets for Evaluating Text Entry Techniques.
In CHI '03 Extended Abstracts on Human Factors in Computing Systems, CHI EA '03, pp. 754-755. ACM, New York, NY, USA, 2003. doi: 10.1145/765891.765971 + +[34] J. Mankoff and G. D. Abowd. Cirrin: A Word-Level Unistroke Keyboard for Pen Input. In Proceedings of the 11th Annual ACM Symposium on User Interface Software and Technology - UIST '98, pp. 213-214. ACM Press, San Francisco, California, United States, 1998. doi: 10.1145/288392.288611 + +[35] E. Matias, I. S. MacKenzie, and W. Buxton. Half-QWERTY: Typing with One Hand Using Your Two-Handed Skills. In Conference Companion on Human Factors in Computing Systems - CHI '94, pp. 51-52. ACM Press, Boston, Massachusetts, United States, 1994. doi: 10.1145/259963.260024 + +[36] S. Oney, C. Harrison, A. Ogan, and J. Wiese. ZoomBoard: A Diminutive Qwerty Soft Keyboard Using Iterative Zooming for Ultra-Small Devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, pp. 2799-2802. ACM, New York, NY, USA, 2013. doi: 10.1145/2470654.2481387 + +[37] P. Parhi, A. K. Karlson, and B. B. Bederson. Target Size Study for One-Handed Thumb Use on Small Touchscreen Devices. In Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '06, pp. 203-210. Association for Computing Machinery, New York, NY, USA, 2006. doi: 10.1145/1152215.1152260 + +[38] Y. S. Park, S. H. Han, J. Park, and Y. Cho. Touch Key Design for Target Selection on a Mobile Phone. In Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI '08, pp. 423-426. Association for Computing Machinery, Amsterdam, The Netherlands, Sept. 2008. doi: 10.1145/1409240.1409304 + +[39] R. Qin, S. Zhu, Y.-H. Lin, Y.-J. Ko, and X. Bi. Optimal-T9: An Optimized T9-like Keyboard for Small Touchscreen Devices. In Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, ISS '18, pp. 137-146.
Association for Computing Machinery, Tokyo, Japan, Nov. 2018. doi: 10.1145/3279778.3279786 + +[40] T. Salthouse and J. Saults. Multiple Spans in Transcription Typing. Journal of Applied Psychology, 72(2):187-196, May 1987. + +[41] T. A. Salthouse. Effects of Age and Skill in Typing. Journal of Experimental Psychology: General, 113(3):345-371, 1984. doi: 10.1037/0096-3445.113.3.345 + +[42] T. A. Salthouse. Anticipatory Processing in Transcription Typing. Journal of Applied Psychology, 70(2):264-271, 1985. doi: 10.1037/0021-9010.70.2.264 + +[43] M. Serrano, A. Roudaut, and P. Irani. Visual Composition of Graphical Elements on Non-Rectangular Displays. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 4405-4416. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025677 + +[44] L. H. Shaffer. Latency Mechanisms in Transcription. Attention and Performance, IV:435-446, 1973. + +[45] L. H. Shaffer and A. French. Coding Factors in Transcription. Quarterly Journal of Experimental Psychology, 23(3):268-274, Aug. 1971. doi: 10.1080/14640746908401821 + +[46] L. H. Shaffer and J. Hardwick. The Basis of Transcription Skill. Journal of Experimental Psychology, 84(3):424-440, 1970. doi: 10.1037/h0029287 + +[47] T. Shibata, D. Afergan, D. Kong, B. F. Yuksel, I. S. MacKenzie, and R. J. Jacob. DriftBoard: A Panning-Based Text Entry Technique for Ultra-Small Touchscreens. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, UIST '16, pp. 575-582. ACM, New York, NY, USA, 2016. doi: 10.1145/2984511.2984591 + +[48] R. W. Soukoreff and I. S. MacKenzie. Metrics for Text Entry Research: An Evaluation of MSD and KSPC, and a New Unified Error Metric. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '03, pp. 113-120. ACM, New York, NY, USA, 2003. doi: 10.1145/642611.642632 + +[49] T. Tojo, T. Kato, and S. Yamamoto.
BubbleFlick: Investigating Effective Interface for Japanese Text Entry on Smartwatches. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '18, pp. 1-12. Association for Computing Machinery, Barcelona, Spain, Sept. 2018. doi: 10.1145/3229434.3229455 + +[50] TouchOne. TouchOne Keyboard - The First Dedicated Smartwatch Keyboard, May 2016. + +[51] K. Vertanen, D. Gaines, C. Fletcher, A. M. Stanage, R. Watling, and P. O. Kristensson. VelociWatch: Designing and Evaluating a Virtual Keyboard for the Input of Challenging Text. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 1-14. ACM, May 2019. doi: 10.1145/3290605.3300821 + +[52] D. Vogel and P. Baudisch. Shift: A Technique for Operating Pen-Based Interfaces Using Touch. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pp. 657-666. ACM, New York, NY, USA, 2007. doi: 10.1145/1240624.1240727 + +[53] F. Wang, X. Cao, X. Ren, and P. Irani. Detecting and Leveraging Finger Orientation for Interaction with Direct-Touch Surfaces. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, UIST '09, pp. 23-32. Association for Computing Machinery, Victoria, BC, Canada, Oct. 2009. doi: 10.1145/1622176.1622182 + +[54] Wiktionary. Wiktionary:Frequency lists/PG/2006/04/1-10000 - Wiktionary, Nov. 2019. + +[55] S. P. Witte and R. D. Cherry. Writing Processes and Written Products in Composition Research. In Studying Writing: Linguistic Approaches, vol. 1, pp. 112-153. Sage Publications, Beverly Hills, CA, USA, 1 ed., 1986. + +[56] J. O. Wobbrock, B. A. Myers, and J. A. Kembel. EdgeWrite: A Stylus-Based Text Entry Method Designed for High Accuracy and Stability of Motion. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, UIST '03, pp. 61-70. ACM, New York, NY, USA, 2003. doi: 10.1145/964696.964703 + +[57] M.
Wolfersberger. L1 to L2 Writing Process and Strategy Transfer: A Look at Lower Proficiency Writers. TESL-EJ, 7(2), 2003. + +[58] P. C. Wong, K. Zhu, X.-D. Yang, and H. Fu. Exploring eyes-free bezel-initiated swipe on round smartwatches. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-11. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376393 + +[59] X. Yi, C. Yu, W. Shi, and Y. Shi. Is It Too Small?: Investigating the Performances and Preferences of Users When Typing on Tiny Qwerty Keyboards. International Journal of Human-Computer Studies, 106:44-62, Oct. 2017. doi: 10.1016/j.ijhcs.2017.05.001 + +[60] X. Yi, C. Yu, W. Xu, X. Bi, and Y. Shi. COMPASS: Rotational Keyboard on Non-Touch Smartwatches. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 705-715. Association for Computing Machinery, Denver, Colorado, USA, May 2017. doi: 10.1145/3025453.3025454 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/eVyaNs3sm8a/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/eVyaNs3sm8a/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..4aeda1390620fcdf223f0429bef0fe88163c6a8a --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/eVyaNs3sm8a/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,425 @@ +§ SWIPERING: GESTURE TYPING ON SMARTWATCHES USING A CIRCULAR SEGMENTED QWERTY + +Category: Research + + < g r a p h i c s > + +Figure 1: SwipeRing arranges the standard QWERTY layout around the edge of a smartwatch in seven zones. To enter a word, the user connects the zones containing the target letters by drawing gestures on the screen, like gesture typing on a virtual QWERTY. 
A statistical decoder interprets the input and enters the most probable word. A suggestion bar appears to display other possible words. The user can stroke right or left on the suggestion bar to see additional suggestions. Tapping on a suggestion replaces the last entered word. One-letter and out-of-vocabulary words are entered by repeated strokes from/to the zones containing the target letters, in which case the keyboard first enters the two one-letter words in the English language (see the second-last image from the left), then the other letters in the sequence in which they appear in the zones (like multi-tap). Users can also repeatedly tap (instead of stroke) on the zones to enter the letters. The keyboard highlights the zones when the finger enters them and traces all finger movements. This figure illustrates the process of entering the phrase "the world is a stage" on the SwipeRing keyboard (upper sequence) and on a smartphone keyboard (bottom sequence). We can clearly see the resemblance of the gestures. + +§ ABSTRACT + +Most text entry techniques for smartwatches require repeated taps to enter one word, occupy most of the screen, or use layouts that are difficult to learn. Users are usually reluctant to use these techniques since the skills acquired in learning cannot be transferred to other devices. SwipeRing is a novel keyboard that arranges the QWERTY layout around the bezel of a smartwatch, divided into seven zones, to enable gesture typing. These zones are optimized for usability and to maintain similarities between the gestures drawn on a smartwatch and a virtual QWERTY to facilitate skill transfer. Its circular layout keeps most of the screen available. We compared SwipeRing with C-QWERTY, which uses a similar layout but does not divide the keys into zones or optimize for skill transfer and target selection. In a study, SwipeRing yielded a 33% faster entry speed (16.67 WPM) and a 56% lower error rate than C-QWERTY.
+ +Index Terms: Human-centered computing—Text input; Human-centered computing—Gestural input + +§ 1 INTRODUCTION + +Smartwatches are becoming increasingly popular among mobile users [31]. However, the absence of an efficient text entry technique for these devices limits smartwatch interaction to mostly controlling applications running on smartphones (e.g., pausing a song on a media player or rejecting a phone call), checking notifications on incoming text messages and social media posts, and using them as fitness trackers to record daily physical activity. Text entry on smartwatches is difficult for several reasons. First, the small key sizes of miniature keyboards make it difficult to tap on the target keys (the "fat-finger problem" [52]), resulting in frequent input errors even when augmented with a predictive system. Correcting these errors is also difficult, and often results in additional errors. To address this, many existing keyboards use a multi-action approach to text entry, where the user performs multiple actions to enter one letter (e.g., multiple taps). This increases not only learning time but also physical and mental demands. Besides, most existing keyboards cover much of the smartwatch touchscreen (50-85%), reducing the real estate available to view or interact with the elements in the background. Many keyboards for smartwatches that use novel layouts [1] do not facilitate skill transfer. That is, the skills acquired in learning new keyboards are usually not usable on other devices. This discourages users from learning a new technique. Further, most of these keyboards were designed for square watch faces, and thus do not always work well on round screens. Finally, some techniques rely on external hardware, which is impractical for wearable devices.
Its circular layout keeps most of the touchscreen available to view additional information and perform other tasks. It uses a QWERTY-like layout divided into seven zones that are optimized to provide comfortable areas to initiate and release gestures, and to maintain similarities between the gestures drawn on a virtual QWERTY and SwipeRing to facilitate skill transfer. + +The remainder of the paper is organized as follows. First, we discuss the motivation for the work, followed by a review of the literature in the area. We then introduce the new keyboard and discuss its rigorous optimization process. We present the results of a user study that compared the performance of the proposed keyboard with the C-QWERTY keyboard, which uses a similar layout but does not divide the keys into zones or optimize for skill transfer and target selection. Finally, we conclude the paper with potential future extensions of the work. + +§ 2 MOTIVATION + +The design of SwipeRing is motivated by the following considerations. + +§ 2.1 FREE UP TOUCHSCREEN REAL ESTATE + +On a 30.5 mm circular watch, a standard QWERTY layout occupies about 66% (480 $\mathrm{mm}^2$) of the screen without the suggestion bar and 85% (621 $\mathrm{mm}^2$) with it (Fig. 2). On the same device, our technique, SwipeRing, occupies only about 36% (254.34 $\mathrm{mm}^2$) of the screen, roughly half the area of the QWERTY layout. Saving screen space is important since the extra space could be used to display additional information and to keep the interface from becoming cluttered, which affects performance [23]. For example, the extra space could be used to display the email users are responding to or more of what they have composed. The former keeps users aware of the context. The latter improves writing quality [55, 57] and performance [17].
Numerous studies have verified this in various settings, contexts, and devices [40-42, 44-46]. + + < g r a p h i c s > + +Figure 2: When entering the phrase "the work is done take a coffee break" with a smartwatch QWERTY, only the last 10 characters are visible (left), while the whole 37-character phrase is visible with SwipeRing (right). Besides, there is extra space available below the floating suggestion bar to display additional information. + +§ 2.2 FACILITATE SKILL TRANSFER + +The skill acquired in using a new smartwatch keyboard is usually not transferable to other devices. This discourages users from learning a new technique. The keyboards that attempt to facilitate skill transfer are miniature versions of QWERTY that are difficult to use due to the small key sizes. To mitigate this, most of these keyboards rely heavily on statistical decoders, making the entry of out-of-vocabulary words very difficult, if not impossible. SwipeRing uses a different approach. Although gesture typing is much faster than tapping [29], it is not a dominant text entry method on mobile devices. SwipeRing strategically arranges the letters in the zones so that the gestures drawn on SwipeRing remain similar to those drawn on a virtual QWERTY, enabling the same (or a very similar) gesture to enter the same word on various devices. The idea is that this will encourage gesture typing by facilitating skill transfer from smartphones to smartwatches and vice versa. + +§ 2.3 INCREASE USABILITY + +As a result of an optimization process, the layout is strategically divided into seven zones, accounting for the mean contact area in index-finger touch (between 28.5 and 33.5 $\mathrm{mm}^2$ [53]), to facilitate comfortable and precise target selection during text entry [37].
The zones range between 29.34 and 58.68 $\mathrm{mm}^2$ in area (9.0 to 18.0 mm in length), which is within the recommended range for target selection on both smartphones [25, 30, 38] and smartwatches (7.0 mm) [10]. SwipeRing employs the whole screen for drawing gestures, which is more comfortable than drawing gestures on a miniature QWERTY. Unlike most virtual keyboards, SwipeRing requires users to slide their fingers from/to the zones instead of tapping, which also makes target selection easier. Existing work on eyes-free bezel-initiated swipe for circular layouts revealed that the most accurate layouts have 6-8 segments [58]. SwipeRing enables the entry of out-of-vocabulary words through a multi-tap-like approach [9], where users repeatedly slide their fingers from/to the zone that contains the target letter until the letter is entered (see Section 5.2 for further details). Besides, research showed that radial interfaces on circular devices visually appear to take less space even when they occupy the same area as rectangular interfaces, which not only increases clarity but also makes the interface more pleasant and attractive [43]. + +§ 2.4 FACE AGNOSTIC + +Since SwipeRing arranges the keys around the edge of a smartwatch, it works on both round and square/rectangular smartwatches. To validate this, we investigated whether the gestures drawn on a square smartwatch and a circular smartwatch are comparable to the ones drawn on a virtual QWERTY. In a Procrustes analysis on the 10,000 most frequent words drawn on these devices, SwipeRing yielded a score of 114.81 with the square smartwatch and 118.48 with the circular smartwatch. This suggests that the gestures drawn on these devices are very similar.
In fact, the square smartwatch yielded a slightly better score than the circular smartwatch (the smaller the score, the greater the similarity; Section 4.2), most likely because the shapes of these devices are similar (Fig. 3). + + < g r a p h i c s > + +Figure 3: Gestures for the most common word "the" on a circular SwipeRing, a square SwipeRing, and a virtual QWERTY. + +§ 3 RELATED WORK + +This section covers the most common text entry techniques for smartwatches. Tables 1 and 2 summarize the performance of some of these techniques. For a comprehensive review of existing text entry techniques for smartwatches, we refer to Arif et al. [1]. + +§ 3.1 QWERTY LAYOUT + +Most text entry techniques for smartwatches are miniature versions of the standard QWERTY that use multi-step approaches to increase the target area. ZoomBoard [36] displays a miniature QWERTY that enables iterative zooming to enlarge regions of the keyboard for comfortable tapping. SplitBoard [18] displays half of a QWERTY keyboard so that the keys are large enough for tapping. Users flick left and right to see the other half of the keyboard. SwipeBoard [5] requires two swipes to enter one letter, the first to select the region of the target letter and the second towards the letter to enter it. DriftBoard [47] is a movable miniature QWERTY with a fixed cursor point. To enter text, users drag the keyboard to position the intended key within the cursor point. Some miniature QWERTY keyboards use powerful statistical decoders to account for the "fat-finger problem" [52]. WatchWriter [13] appropriates a smartphone QWERTY for smartwatches. It supports both predictive tap and gesture typing. Yi et al. [59] use a similar approach with even smaller keyboards (30 and 35 mm). VelociWatch [51] also uses a statistical decoder, but enables users to lock in particular letters of their input to disable potential auto-corrections. Some techniques use variants of the standard QWERTY layout.
ForceBoard [19] maps QWERTY to a $3 \times 5$ grid by assigning two letters to each key. Applying different levels of force on the keys enters the respective letters. DualKey [15] uses a similar layout, but requires users to tap with different fingers to disambiguate the input. It uses external hardware to differentiate between the fingers. DiaQWERTY [27] uses diamond-shaped keys to fit QWERTY in a round smartwatch at a 10:7 aspect ratio. Optimal-T9 [39] maps QWERTY to a $3 \times 3$ grid, then disambiguates input using a statistical decoder. These techniques, however, occupy a large area of the screen real estate, require multiple actions to enter one letter, or use aggressive prediction models that make the entry of out-of-vocabulary words very difficult, often impossible. + +Table 1: Average entry speed (WPM) of popular keyboards for smartwatches from the literature (only the highest reported speed in the last block or session is presented, when applicable) along with the estimated percentage of touchscreen area they occupy.

| Method | Used Device | Screen Occupancy | Entry Speed (WPM) |
| --- | --- | --- | --- |
| Yi et al. [59] | Watch 1.56" | 50% | 33.6 |
| WatchWriter [13] | Watch 1.30" | 85% | 24.0 |
| DualKey [15] | Watch 1.65" | 80% | 21.6 |
| SwipeBoard [5] | Tablet | 45% | 19.6 |
| SplitBoard [18] | Watch 1.63" | 75% | 15.9 |
| ForceBoard [19] | Phone | 67% | 12.5 |
| ZoomBoard [36] | Tablet | 50% | 9.3 |

+ +§ 3.2 OTHER LAYOUTS + +There are a few techniques that use different layouts. Dunlop et al. [10] and Komninos and Dunlop [28] map an alphabetical layout to six ambiguous keys, then use a statistical decoder to disambiguate input. Their design enables contextual word suggestions and word completion. QLKP [18] (initially designed for smartphones [20]) maps a QWERTY-like layout to a $3 \times 3$ grid. Similar to multi-tap [9], users tap on a key repeatedly until they get the intended letter.
These techniques also occupy a large area of the touchscreen real estate and require multiple actions to enter one letter. + +§ 3.3 CIRCULAR LAYOUTS + +There are some circular keyboards available for smartwatches. InclineType [16] places an alphabetical layout around the edge of the device. To enter a letter, users first select the letter by moving the wrist, then tap on the screen. COMPASS [60] also uses an alphabetical layout, but does not use touch interaction. To enter text, users rotate the bezel to place one of the three available cursors on the desired letter, then press a button on the side of the watch. WrisText [12] is a one-handed technique, with which users enter text by whirling the wrist of the hand towards six directions, each representing a key in a circular keyboard with the letters in alphabetical order. BubbleFlick [49] is a circular keyboard for Japanese text entry. It enables text entry through two actions. Users first touch a key, which partitions the circular area inside the layout into four radial areas for the four kana letters on the key. Users then stroke towards the intended letter to enter it. A commercial product, TouchOne Keyboard Wear [50], divides an alphabetical layout into eight zones to let users enter text using a T9-like [14] approach. It enables the entry of out-of-vocabulary words using an approach similar to BubbleFlick. Go et al. [11] designed an eyes-free text entry technique that enables users to EdgeWrite [56] on a smartwatch with the support of auditory feedback. Most of these techniques use a sequence of actions or a statistical decoder to disambiguate the input. Besides, most of these techniques are standalone, hence the skills acquired in using these keyboards are usually not usable on other devices.
+ +Table 2: Average entry speed (WPM) of popular circular keyboards for smartwatches from the literature (only the highest reported speed in the last block or session is presented, when applicable) along with the estimated percentage of touchscreen area they occupy.

| Method | Used Device | Screen Occupancy | Entry Speed (WPM) |
| --- | --- | --- | --- |
| WrisText [12] | Watch 1.4" | 42.99% | 15.2 |
| HiPad [21] | VR | 39.70% | 13.6 |
| COMPASS [60] | Watch 1.2" | 36.07% | 12.5 |
| BubbleFlick [49] | Watch 1.37" | 52.06% | 8.0 |
| C-QWERTY (gesture) [7] | Watch 1.39" | 43.16% | 7.7 |
| Cirrin [24] | PC | 60.57% | 6.4 |
| InclineType [16] | Watch 1.6" | 38.64% | 5.9 |

+ +Cirrin [34] is a pen-based word-level text entry technique for PDAs. It uses a novel circular layout with dedicated keys for each letter. To enter a word, users pass their pen through the intended letters. This approach has also been used on other devices [24]. C-QWERTY [7] is a similar technique, but differs in letter arrangement, which is based on the QWERTY layout. To enter a word with C-QWERTY, users either tap on the letters in the word individually or drag the finger over them in sequence. While there are some similarities between Cirrin, C-QWERTY, and SwipeRing, the approach employed in the latter technique is fundamentally different. First, SwipeRing divides the layout into seven zones, and thus requires selection of much larger zones rather than precise selection of individual letters. Both Cirrin and C-QWERTY, in contrast, use individual keys for each letter, and thus require precise selection of the keys. Second, like gesture typing on a smartphone, SwipeRing does not require users to go over the same letter (or the letters in the same zone) repeatedly if they appear in a word multiple times in sequence (such as "oo" in "book"). But both Cirrin and C-QWERTY require users to go over the same letter repeatedly in such cases by sliding the finger out of the keyboard and then sliding back to the key.
We found that users also use this strategy to enter consecutive letters whose keys are placed side by side, as these are difficult to select in sequence due to their small size. Finally, SwipeRing is optimized to maintain gesture similarities between SwipeRing and a virtual QWERTY to facilitate skill transfer between devices. Cirrin and C-QWERTY do not account for this. + +§ 4 LAYOUT OPTIMIZATION + +SwipeRing maps the standard QWERTY layout to a ring around the edge of a smartwatch (Fig. 1). It places the left-hand and right-hand keys of QWERTY [35] on the left and right sides of the layout, respectively. Likewise, the top, home, and bottom row keys of QWERTY are placed at the top, middle, and bottom parts of the layout, respectively. Each letter is positioned at a multiple of $360^\circ/26$, an angular step of about $13.85^\circ$, starting with the letter 'q' at $180^\circ$. This design was adopted to maintain a likeness to QWERTY and exploit the widespread familiarity with that keyboard to facilitate learning [20, 39]. We then grouped the letters into zones to improve the usability of the keyboard by facilitating precise target selection, further discussed in Section 4.3. In practice, the letters can be grouped in numerous different ways, resulting in a set of possible layouts $L$. However, the purpose here was to identify a particular layout $l \in L$ that ensures that the gestures drawn on the layout $l$ are similar to the ones drawn on a virtual QWERTY. This requires searching for an optimal letter grouping that maximizes gesture similarity. We introduce the following notation to formally define the optimization procedure. Let $g_Q(w)$ be the gesture used to enter a word $w$ on the virtual QWERTY and $g_{\text{SwipeRing}}(w;l)$ be the gesture used to enter the word $w$ on the layout $l$ of SwipeRing.
Instead of maximizing the similarity between the gestures, we can equivalently minimize the discrepancy between the gestures, which we measure using a function $\psi$. Then, our problem is to find the layout $l$ that minimizes the following loss function $\mathcal{L}$: + +$$
\min_{\text{layout } l \in L} \mathcal{L}(l) = \sum_{w \in W} p(w)\, \psi\big(g_Q(w), g_{\text{SwipeRing}}(w;l)\big). \tag{1}
$$ + +Here, the dissimilarity between the gestures is weighted by the probability of the occurrence of the word to ensure that the gestures for the most frequent words are the most similar. To efficiently optimize this problem, we made several modeling assumptions and simplifications, which are discussed in the following sections. + +§ 4.1 GESTURE MODELLING + +We model each gesture as a piece-wise linear curve connecting the letters on a virtual QWERTY or the zones on SwipeRing. Therefore, the gesture for a word composed of $n$ letters can be seen as a $2 \times n$ matrix (Fig. 4), where each column contains the coordinates $(x, y)$ of the corresponding letter. To simulate the drawn gesture on a virtual QWERTY for the word $w$, denoted as $g_Q(w)$, we connect the centers of the corresponding keys of the default Android QWERTY on a Motorola G5 smartphone ($24\,\mathrm{cm}^2$ keyboard area)$^1$, producing unique gestures for each word. With SwipeRing, however, we account for the fact that a word can have multiple gestures forming a set $G_{\text{SwipeRing}}(w;l)$. The zones containing 4-6 letters are wide enough to enable initiating a gesture either at the center, left, or right side of the zone (Fig. 5), resulting in multiple possibilities.
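As a concrete illustration of this gesture representation, consider the following minimal sketch. The key coordinates are illustrative row/column positions with an assumed per-row stagger, not measurements from the actual Android keyboard, and the function name is ours:

```python
import numpy as np

# Sketch of the Section 4.1 representation: a gesture is the 2 x n matrix
# whose columns are the key centers of a word's letters. The coordinates
# below are illustrative grid positions with an assumed per-row stagger,
# NOT measurements from the Motorola G5 keyboard used in the paper.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
STAGGER = [0.0, 0.25, 0.75]  # assumed horizontal offset of each row

KEY_CENTER = {
    ch: (col + STAGGER[r], float(r))
    for r, row in enumerate(ROWS)
    for col, ch in enumerate(row)
}

def gesture_matrix(word):
    """Return g_Q(word): the 2 x n matrix of key-center coordinates."""
    return np.array([KEY_CENTER[c] for c in word.lower()]).T

g = gesture_matrix("the")  # columns for 't', 'h', 'e'
```

Under this representation, repeated consecutive letters simply repeat a column, and comparing two words reduces to comparing two small matrices.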
Therefore, we set the gesture ${g}_{\text{ SwipeRing }}\left( {w;l}\right)$ to be the one with the minimal difference from the gesture drawn on the virtual QWERTY, as measured by the discrepancy function $\psi$ :

$$
{g}_{\text{ SwipeRing }}\left( {w;l}\right) \mathrel{\text{ := }} \underset{g \in {G}_{\text{ SwipeRing }}\left( {w;l}\right) }{\arg \min }\psi \left( {{g}_{Q}\left( w\right) ,g}\right) . \tag{2}
$$

§ 4.2 DISCREPANCY FUNCTION

For the discrepancy function $\psi \left( {{g}_{1},{g}_{2}}\right)$ between gestures ${g}_{1}$ and ${g}_{2}$ , our requirement is a rotation- and scale-agnostic measure that attains a value of 0 if and only if ${g}_{2}$ is a rotated and rescaled version of ${g}_{1}$ . One possible such $\psi$ can itself be defined as an optimization problem:

$$
\psi \left( {{g}_{1},{g}_{2}}\right) = \mathop{\min }\limits_{{R,\alpha }}{\begin{Vmatrix}{g}_{2} - \alpha R{g}_{1}\end{Vmatrix}}_{F}^{2}. \tag{3}
$$

${}^{1}$ Since most virtual QWERTY layouts maintain comparable aspect ratios and the gestures only loosely connect the keys, gestures drawn on phones with different keyboard and key sizes are comparable when the recognizer is size-agnostic. A Procrustes analysis of the gestures drawn on five different-sized phones with different keyboard and key sizes yielded values between 6 and 19, suggesting the gestures are nearly identical.

 < g r a p h i c s > 

Figure 4: The gesture for the most common word "the" on a virtual QWERTY and the respective $2 \times 3$ dimensional matrix.

 < g r a p h i c s > 

Figure 5: Gestures on the three-letter zones are likely to be initiated from the center, while gestures on the wider zones (such as a six-letter zone) can be initiated from either the center or the two sides.

Here, $\alpha$ and $R$ are the rescaling factor and the rotation matrix applied to a gesture, respectively, and $\parallel \cdot {\parallel }_{F}$ is the Frobenius norm ${}^{2}$ .
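Equation 3 can be evaluated in closed form. A minimal sketch (NumPy), assuming $2 \times n$ gesture matrices; the $\pm 45^{\circ}$ restriction that SwipeRing places on the recovered rotation is omitted here for brevity:

```python
import numpy as np

def procrustes_discrepancy(g1, g2):
    """psi(g1, g2) of Eq. 3: min over scale alpha and rotation R of
    ||g2 - alpha * R @ g1||_F^2, solved via the SVD of g2 @ g1.T."""
    U, _, Vt = np.linalg.svd(g2 @ g1.T)
    d = np.sign(np.linalg.det(U @ Vt))       # force det(R) = +1
    R = U @ np.diag([1.0, d]) @ Vt           # optimal rotation
    alpha = np.trace(g2.T @ R @ g1) / np.sum(g1 ** 2)  # optimal scale
    return float(np.sum((g2 - alpha * R @ g1) ** 2))
```

A rotated and rescaled copy of a gesture yields $\psi \approx 0$, matching the requirement stated above.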
We recognize Equation 3 as an Ordinary Procrustes analysis problem, the solution of which is given in closed form by Singular Value Decomposition [8]. Note that the value of $\psi \left( {{g}_{1},{g}_{2}}\right)$ lies in the range $\left\lbrack {0,\infty }\right)$ . Additionally, we restrict the rotation to the range of $\pm {45}^{ \circ }$ since SwipeRing gestures that are rotated more than ${45}^{ \circ }$ in either direction are unlikely to look similar to their virtual QWERTY counterparts (Fig. 6).

Best Match $\psi \left( {{g}_{1},{g}_{2}}\right) = {0.86}$ ; Average Match $\psi \left( {{g}_{1},{g}_{2}}\right) = {57.49}$ ; Worst Match $\psi \left( {{g}_{1},{g}_{2}}\right) = {133.53}$

 < g r a p h i c s > 

Figure 6: Gesture dissimilarity measures using the Procrustes loss function. The red curve represents the gesture for the word "the" on a virtual QWERTY $\left( {g}_{1}\right)$ , the blue curves represent gestures for the same word on a SwipeRing layout $\left( {g}_{2}\right)$ , and the gray dots show the optimal rotation and rescaling of the gesture $\left( {g}_{1}\right)$ , represented as $\left( {{\alpha R}{g}_{1}}\right)$ , to match $\left( {g}_{2}\right)$ .

§ 4.3 ENUMERATION OF ALL POSSIBLE LETTER GROUPINGS

The total number of possible letter groupings, and thus layouts, depends on how large we allow the groups to be. To determine this, we conducted a literature review of ambiguous keyboards that use linguistic models for decoding the input to find out whether the number of letters assigned per key or zone (the level of ambiguity) affects the performance of a keyboard. Table 3 displays an excerpt of our review, where one can see that there is a "somewhat" inverse relationship between the level of ambiguity and entry speed.
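As a cross-check on the enumeration described in this section, the number of ways to cut a labeled circular 26-letter string into consecutive zones of 3-6 letters each can be counted with a small dynamic program; `f` counts linear compositions, and the zone covering a fixed reference position is handled separately:

```python
def count_circular_layouts(n=26, lo=3, hi=6):
    """Ways to cut a labeled circular n-letter string into consecutive
    zones of lo..hi letters each."""
    f = [1] + [0] * n                      # f[m]: linear compositions of m
    for m in range(1, n + 1):
        f[m] = sum(f[m - d] for d in range(lo, min(hi, m) + 1))
    # The zone covering position 0 has some length d and one of d offsets;
    # the remaining n - d positions form a linear composition.
    return sum(d * f[n - d] for d in range(lo, hi + 1))

print(count_circular_layouts())  # 4355
```

This reproduces the count of 4,355 layouts searched in this section.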
Keyboards that assign fewer letters per key or zone yield a relatively better entry speed than the ones that have more letters per key or zone. Based on this, we decided to assign 3-6 letters per zone. Although this alone cannot determine the appropriate number of letters per key, since the performance of a keyboard depends on other factors such as the layout and the reliability of the decoder, it gives a rough estimate.

Next, we enumerated all possible shatterings of the circular string {qwertyuiophjklmnbvcxzgfdsa} into substrings of 3-6 letters each, resulting in 4,355 different layouts in total, which constitute our search set $L$ . Each possible shattering, such as {qwer}{tyu}{iophjk}{lmnbv}{cxzg}{fdsa}, represents one possible layout. We tested several of these layouts on a small smartwatch ( ${9.3}{\mathrm{\;{cm}}}^{2}$ circular display) to investigate whether the zones containing three letters are wide enough for precise target selection. Results showed that the zones range between 29.0 and ${57.5}{\mathrm{\;{mm}}}^{2}$ in area (lengths between 9.0 and ${18.0}\mathrm{\;{mm}}$ ), which exceed the minimum length recommended for target selection on both smartphones $\left\lbrack {{25},{30},{38}}\right\rbrack$ and smartwatches $\left( {{7.0}\mathrm{\;{mm}}}\right) \left\lbrack {10}\right\rbrack$ . Fig. 7 illustrates some of these layouts.

${}^{2}$ The Frobenius norm is a generalization of the Euclidean norm to matrices: if $A$ is a matrix, then $\parallel A{\parallel }_{F}^{2} = \mathop{\sum }\limits_{{i,j}}{a}_{ij}^{2}$ .

Table 3: Average entry speed of several ambiguous keyboards that map multiple letters to each key or zone.

| Method | Letters per Key | Entry Speed (WPM) |
| --- | --- | --- |
| COMPASS [60] | 3 | 9-13 |
| HiPad [21] | 4-5 | 9.6-11 |
| WrisText [12] | 4-5 | 10 |
| Komninos, Dunlop [10,28] | 3-6 | 8 |

 < g r a p h i c s > 

Figure 7: Gesture typing the word "the" on a virtual QWERTY and three possible SwipeRing layouts.
For the virtual QWERTY, the figure shows ${g}_{Q}$ ("the"). For the SwipeRing layouts, the figure shows all possible gestures for "the": ${G}_{\text{ SwipeRing }}$ ("the", $l$ ). Notice how the gestures for the same word differ across SwipeRing layouts.

§ 4.4 ALGORITHM

To find the optimal layout, we simulated billions of gestures for the 10,000 most frequent words in the English language [54] on the 4,355 possible segmented SwipeRing layouts ${}^{3}$ . We then matched the gestures produced for each word on each layout against the gestures produced on a virtual QWERTY using Procrustes analysis and picked the layout that yielded the best match score (118.48). The final layout (Fig. 1) scored, on average, a 1.27 times better Procrustes value than the other possible layouts.

Algorithm 1: Search for an optimal layout $l$ .

Input: possible grouping layouts $L = \left\{ {{l}_{1},{l}_{2},\ldots }\right\}$ , word corpus $W = \left\{ {{w}_{1},{w}_{2},\ldots }\right\}$

Function OptLayout(L, W):

  ${\mathcal{L}}_{\min }\leftarrow \infty$ , ${l}_{\min }\leftarrow \varnothing$

  for layout $l \in L$ do

    $\mathcal{L} \leftarrow 0$

    for word $w \in W$ do

      $\mathcal{L} \leftarrow \mathcal{L} + p\left( w\right) \psi \left( {{g}_{Q}\left( w\right) ,{g}_{\text{ SwipeRing }}\left( {w;l}\right) }\right)$

    end

    if $\mathcal{L} \leq {\mathcal{L}}_{\min }$ then

      ${\mathcal{L}}_{\min } \leftarrow \mathcal{L}$ , ${l}_{\min } \leftarrow l$

    end

  end

  return ${l}_{\min }$

§ 5 KEYBOARD FEATURES

This section describes some key features of the proposed keyboard.

§ 5.1 DECODER

We developed a simple decoder to suggest corrections and to display the most probable words in a suggestion bar. For this, we used a combination of a character-level language model and a word-level bigram model for next-word prediction.
To this end, we calculate the conditional probability of the user typing the word $w$ given that the previous word was ${w}_{n - 1}$ and the current zone sequence is $s$ :

$$
P\left( {{w}_{n} = w \mid s,{w}_{n - 1}}\right) = \frac{P\left( {{w}_{n} = w,s,{w}_{n - 1}}\right) }{P\left( {s,{w}_{n - 1}}\right) } \tag{4}
$$

$$
= \frac{\operatorname{count}\left( {{w}_{n} = w,{w}_{n - 1}}\right) \times \operatorname{match}\left( {M\left( w\right) ,s}\right) }{\mathop{\sum }\limits_{{w}^{\prime }}\operatorname{count}\left( {{w}_{n} = {w}^{\prime },{w}_{n - 1}}\right) \times \operatorname{match}\left( {M\left( {w}^{\prime }\right) ,s}\right) }.
$$

Here, $M\left( w\right)$ is the sequence of zones that the user must gesture over to enter the word $w$ with SwipeRing, $\operatorname{match}\left( {{s}_{1},{s}_{2}}\right)$ is the indicator function that returns 1 if ${s}_{2}$ is a prefix of ${s}_{1}$ and 0 otherwise, and $\operatorname{count}\left( {{w}_{n},{w}_{n - 1}}\right)$ is the number of occurrences of the bigram $\left( {{w}_{n},{w}_{n - 1}}\right)$ in the training corpus.

To predict the most probable word for a given zone sequence $s$ and previous word ${w}_{n - 1}$ , we compute $\arg \mathop{\max }\limits_{w}P\left( {{w}_{n} = w \mid s,{w}_{n - 1}}\right)$ using a prefix tree (Trie) data structure. This implementation can output the $k$ most probable words, which we display in the suggestion bar. When no word has been typed yet, we use a unigram reduction of the model; otherwise, we use the bigram model trained on the COCA corpus [6]. Due to the limited memory capacity of the smartwatch, the Trie uses the 1,300 most probable bigrams: bigram models scale as the square of the number of words and thus quickly outrun the available memory on the device. If the Trie does not have a bigram containing the user’s previous word ${w}_{n - 1}$ , we revert to the unigram predictions.
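A toy version of this decoder, with the bigram table and zone sequences as hypothetical placeholder data and a plain dictionary scan standing in for the Trie:

```python
def predict_words(s, w_prev, bigram_counts, zone_seq, k=10):
    """Rank candidate words by count(w, w_prev) * match(M(w), s), as in
    Eq. 4, where match tests whether the gestured zone sequence s is a
    prefix of the word's zone sequence M(w)."""
    def match(mw, seq):
        return 1 if tuple(mw[:len(seq)]) == tuple(seq) else 0

    scores = {w: c * match(zone_seq[w], s)
              for (prev, w), c in bigram_counts.items() if prev == w_prev}
    return sorted((w for w, sc in scores.items() if sc > 0),
                  key=lambda w: -scores[w])[:k]

# Hypothetical bigram counts and zone sequences:
bigrams = {("the", "cat"): 5, ("the", "car"): 3, ("a", "cat"): 7}
zones = {"cat": (1, 2, 3), "car": (1, 2, 4)}
print(predict_words((1, 2), "the", bigrams, zones))  # ['cat', 'car']
```

The normalizing denominator of Eq. 4 is omitted since it does not change the ranking.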
Our language model is fairly simple, and more advanced models (involving neural nets, for instance) could be built. However, devising efficient language models for new keyboards is a research problem on its own and beyond the scope of our paper.

After obtaining the list of the most probable words, SwipeRing places up to the 10 most probable words in the suggestion bar, automatically positioned in close proximity to the input area (Fig. 1). The suggestion bar automatically updates as the user continues gesturing. Once done, the most probable word from the list is entered. The user can select a different word from the suggestion bar by tapping on it, which replaces the last entered word. Although the user can only see 2-4 words in the suggestion bar due to the smaller screen, she can swipe left and right on the bar to see the remaining words.

§ 5.2 ONE-LETTER AND OUT-OF-VOCABULARY (OOV) WORDS

SwipeRing enables the entry of one-letter and out-of-vocabulary words through repeated taps or strokes from/to the zones containing the target letters. The keyboard first enters the two one-letter words in the English language, "a" and "I", then the other letters in the sequence in which they appear in the zones, like multi-tap [9]. For instance, to enter the letter 'e', which is in the top-right zone containing the letters 'q', 'w', 'e', and 'r' (Fig. 1), the user taps or slides the finger three times from the middle area to the zone or from the edge to the middle area (Fig. 8).

§ 5.3 ERROR CORRECTION AND SPECIAL CHARACTERS

SwipeRing automatically enters a space when a word is predicted or manually selected from the suggestion bar. During character-level text entry (to enter out-of-vocabulary words), users enter a space by performing a right stroke inside the empty area of the keyboard. Tapping on the transcribed text deletes the last entry, which could be either a word or a letter.
The keyboard performs a carriage return or an enter action when the user double-taps on the screen. Currently, SwipeRing does not support the entry of uppercase letters, special symbols, numbers, or languages other than English. However, support for these could easily be added by enabling the user to long-press or dwell on the screen or the zones to switch between cases and to change the layout for digits and symbols. Note that evaluating novel text entry techniques without support for numeric and special characters is common practice since it eliminates a potential confound [33].

${}^{3}$ We only used words that had more than one letter; there were 9,828 such words in the corpus.

 < g r a p h i c s > 

Figure 8: SwipeRing enables users to enter one-letter and out-of-vocabulary words by repeated strokes from/to the zones containing the target letters, like multi-tap (right). Users can also repeatedly tap on the zones (instead of stroking) to enter the letters (left).

§ 6 USER STUDY

We conducted a user study to compare SwipeRing with C-QWERTY. C-QWERTY uses almost the same layout as SwipeRing but places 'g' at the NE corner, while SwipeRing places it at the SW corner (the left side of the layout, since 'g' on QWERTY is usually pressed with the left hand). Both layouts share the design goal of maintaining a similarity to QWERTY by using the touch-typing metaphor of physical keyboards (keys assigned to different hands), which likely resulted in similar (but nonidentical) layouts. Studies have shown that using a physical analogy/metaphor like this enables novices to learn a method faster through skill transfer $\left\lbrack {{32},\mathrm{{pp}}{.255} - {263}}\right\rbrack$ . However, C-QWERTY does not divide the keys into zones or optimize them for gesture typing and skill transfer, and the gesture drawing approaches of the two techniques also differ (Section 3.3).
Hence, a comparison between the two highlights the performance difference due to the contributions of this work.

§ 6.1 APPARATUS

We used an LG Watch Style smartwatch ( ${42.3} \times {45.7} \times {10.8}\mathrm{\;{mm}}$ , ${9.3}{\mathrm{\;{cm}}}^{2}$ circular display, 46 grams) running Wear OS at ${360} \times {360}$ pixels in the study (Fig. 9). We decided to use a circular watch since it is the most popular shape for (smart)watches $\left\lbrack {{22},{26}}\right\rbrack$ . We developed SwipeRing with Android Studio 3.4.2, SDK 28. We obtained the original source code of C-QWERTY from Costagliola et al. [7], which was also developed for Wear OS. Both applications calculated all performance metrics directly and logged all interactions with timestamps.

§ 6.2 DESIGN

We used a between-subjects design to avoid interference between the conditions: since both techniques use similar layouts, the skill acquired while learning one technique would have affected performance with the other [32]. There were separate groups of twelve participants for C-QWERTY and SwipeRing. Each group used the respective technique to enter short English phrases in eight blocks. Each block contained 10 random phrases from a standard set [33]. Hence, the design was as follows.

2 groups: C-QWERTY and SwipeRing $\times$

12 participants $\times$

8 blocks $\times$

10 random phrases $= 1,{920}$ phrases in total.

Table 4: Demographics of the C-QWERTY study. YoE = years of experience.

| | |
| --- | --- |
| Age | 21-34 years (M = 25.8, SD = 3.92) |
| Gender | 3 female, 9 male |
| Handedness | 11 right, 1 left |
| Owner of smartwatches | 5 (M = 0.8 YoE, SD = 1.4) |
| Experienced gesture typists | 3 (M = 4.7 YoE, SD = 2.5) |

Table 5: Demographics of the SwipeRing study. YoE = years of experience.
| | |
| --- | --- |
| Age | 21-28 years (M = 24.8, SD = 2.33) |
| Gender | 4 female, 8 male |
| Handedness | 10 right, 1 ambidextrous, 1 left |
| Owner of smartwatches | 6 (M = 1.2 YoE, SD = 0.9) |
| Experienced gesture typists | 3 (M = 2.7 YoE, SD = 0.9) |

§ 6.3 PARTICIPANTS

Twenty-four participants took part in the user study, divided into two groups. Tables 4 and 5 present the demographics of these groups. Almost all participants chose to wear the smartwatch on their left hand and perform the gestures using the index finger of the right hand (Fig. 9). All participants were proficient in the English language. In both groups, three participants identified themselves as experienced gesture typists. However, none of them used the method dominantly; instead, they frequently switched between tap typing and gesture typing for text entry. The remaining participants never or very rarely used gesture typing on their devices. Initially, we wanted to recruit more experienced gesture typists to compare the performance of inexperienced and experienced users and investigate whether the gesture typing skill acquired on mobile devices transfers to SwipeRing, but we were unable to recruit additional experienced gesture typists after months of trying. This strengthens our argument that gesture typing is still not a dominant method of text entry, despite being much faster than tap typing [29], and that using SwipeRing may encourage some users to apply the acquired skill on mobile devices. All participants received a small compensation for participating in the study.

 < g r a p h i c s > 

Figure 9: The device with C-QWERTY and a participant volunteering in the study over Zoom (left). The device with SwipeRing and a volunteer participating in the study (right).
+ +§ 6.4 PERFORMANCE METRICS + +We calculated the conventional words per minute (WPM) [2] and total error rate (TER) performance metrics to measure the speed and accuracy of the keyboard, respectively. TER [48] is a commonly used error metric in text entry research that measures the ratio of the total number of incorrect characters and corrected characters to the total number of correct, incorrect, and corrected characters in the transcribed text. We also calculated the actions per word metric that signifies the average number of actions performed to enter one word. An action could be a gesture performed to enter a word, a tap on the suggestion bar, or a gesture to delete an unwanted word or letter. + +§ 6.5 PROCEDURE + +The study was conducted in a quiet room, one participant at a time. First, we introduced the keyboards to all participants, explained the study procedure, and collected their consents. We then asked them to complete a short demographics and mobile usage questionnaire. We instructed participants to sit on a chair, wear the smartwatch on their preferred hand, and practice with the keyboard they were assigned to by transcribing two short phrases. These practice phrases were not included in the main study. Interestingly, all participants decided to wear the smartwatch on their left hand and perform the gestures using the index finger of the other hand. The actual study started after that. There were eight blocks in each condition, with at least 5-10 minutes gap between the blocks. In each block, participants transcribed ten random short English phrases from a set [33] using either C-QWERTY Gesture or SwipeRing. Both applications presented one phrase at a time at the bottom of the smartwatch (Fig. 9). Participants were instructed to read, understand, and memorize the phrase, transcribe it "as fast and accurate as possible", then double-tap on the touchscreen to see the next phrase. The transcribed text was displayed on the top of the smartwatch. 
Error correction was recommended but not forced. After the study, all participants completed a short post-study questionnaire that asked them to rate various aspects of the keyboard on a 7-point Likert scale. It also enabled participants to comment and give feedback on the examined keyboards.

Due to the spread of COVID-19, the C-QWERTY group participated in the study via Zoom, a teleconference application. We personally delivered the smartwatch to each participant's mailbox and scheduled individual online sessions with them. They were instructed to join the session from a quiet room. All forms were completed and signed electronically. Apart from that, an online session followed the same structure as a physical session. A researcher observed and recorded each complete study session. We picked up the devices after the study. The device, the charger, and the container were disinfected before delivery and after pickup.

§ 6.6 RESULTS

A Shapiro-Wilk test revealed that the response variable residuals were normally distributed, and Mauchly's test indicated that the variances of the populations were equal. Hence, we used a mixed-design ANOVA with one between-subjects factor (technique) and one within-subjects factor (block). We used a Mann-Whitney U test to compare user ratings of various aspects of the two techniques.

§ 6.6.1 ENTRY SPEED

An ANOVA identified a significant effect of technique on entry speed $\left( {{F}_{1,{22}} = {25.05},p < {.0001}}\right)$ . There was also a significant effect of block $\left( {{F}_{7,{22}} = {63.65},p < {.0001}}\right)$ . The technique $\times$ block interaction effect was also statistically significant $\left( {{F}_{7,{154}} = {4.02},p < {.0005}}\right)$ . Fig. 10 (top) illustrates the average entry speed for both techniques in each block, fitted to a function to model the power law of practice [4].
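The power-of-practice trendlines and their $R^2$ values can be reproduced with a log-log least-squares fit; a sketch with synthetic per-block data (not the study's measurements), noting that $R^2$ is computed in log space here:

```python
import numpy as np

def fit_power_law(blocks, values):
    """Fit values = a * blocks^b by least squares in log-log space;
    returns (a, b, R^2 of the log-space fit)."""
    x = np.log(np.asarray(blocks, dtype=float))
    y = np.log(np.asarray(values, dtype=float))
    b, log_a = np.polyfit(x, y, 1)          # slope b, intercept log(a)
    resid = y - (b * x + log_a)
    r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
    return float(np.exp(log_a)), float(b), float(r2)

blocks = np.arange(1, 9)                    # eight blocks, synthetic data
a, b, r2 = fit_power_law(blocks, 9.5 * blocks ** 0.25)
print(round(a, 2), round(b, 2), round(r2, 3))  # 9.5 0.25 1.0
```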
In the last block, the average entry speeds with C-QWERTY and SwipeRing were 11.20 WPM (SD = 3.0) and 16.67 WPM (SD = 5.36), respectively. Nine users of SwipeRing yielded a much higher entry speed than the maximum entry speed reached with C-QWERTY, illustrated in Fig. 10 (bottom). The highest average entry speed in the last block was 21.53 WPM (P23, inexperienced gesture typist).

§ 6.6.2 ERROR RATE

An ANOVA identified a significant effect of technique on error rate $\left( {{F}_{1,{22}} = {24.61},p < {.0001}}\right)$ . There was also a significant effect of block $\left( {{F}_{7,{22}} = {2.89},p < {.01}}\right)$ . However, the technique $\times$ block interaction effect was not significant $\left( {{F}_{7,{154}} = {1.01},p > {.05}}\right)$ . Fig. 11 (top) illustrates the average error rate for both techniques in each block, fitted to a function to model the power law of practice [4]. In the last block, the average error rates with C-QWERTY and SwipeRing were ${12.52}\%$ (SD = 13.91) and ${5.56}\%$ (SD = 8.53), respectively.

 < g r a p h i c s > 

Figure 10: Average entry speed (WPM) per block fitted to a power trendline (top). The SwipeRing group surpassed the C-QWERTY group's maximum entry speed by the third block. Note the scale on the vertical axis. Average entry speed (WPM) with the two techniques for each participant in the final block (bottom).

§ 6.6.3 ACTIONS PER WORD

An ANOVA identified a significant effect of technique on actions per word $\left( {{F}_{1,{22}} = {10.31},p < {.005}}\right)$ . There was also a significant effect of block $\left( {{F}_{7,{22}} = {3.14},p < {.005}}\right)$ . However, the technique $\times$ block interaction effect was not significant $\left( {{F}_{7,{154}} = {0.61},p > {.05}}\right)$ . Fig. 11 (bottom) illustrates the average actions per word for both techniques in each block, fitted to a function to model the power law of practice [4].
In the last block, the average actions per word with C-QWERTY and SwipeRing were 2.45 (SD = 1.64) and 1.59 (SD = 0.72), respectively. + +§ 6.7 USER FEEDBACK + +A Mann-Whitney U test identified a significant effect of technique on willingness to use $\left( {U = {21.0},Z = - {3.1},p < {.005}}\right)$ , perceived speed $\left( {U = {22.5},Z = - {3.07},p < {.005}}\right)$ , and perceived accuracy $\left( {U = {27.0},Z = - {2.72},p < {.01}}\right)$ . However, there was no significant effect on ease of use $\left( {U = {48.0},Z = - {1.5},p > {.05}}\right)$ or learnability $\left( {U = {66.0},Z = - {0.37},p > {.05}}\right)$ . Fig. 12 illustrates median user ratings of all investigated aspects of the two keyboards on a 7-point Likert scale. + +§ 7 DISCUSSION + +SwipeRing reached a competitive entry speed in only eight short blocks. It was ${33}\%$ faster than C-QWERTY. The average entry speed with C-QWERTY and SwipeRing were 11.20 WPM and 16.67 WPM, respectively. Four participants reached over 20 WPM with SwipeRing (Fig. 10, bottom). Further, the SwipeRing group surpassed the C-QWERTY group's maximum entry speed by the third block (Fig. 10, top). It also performed better than all popular circular text entry techniques (Table 2) and some QWERTY-based techniques + + < g r a p h i c s > + +Figure 11: Average total error rate (TER) (top) and actions per word (APW) (bottom) in each block fitted to a power trendline. Note the scale on the vertical axis. + +(Table 1) for smartwatches. Yi et al. [59] and WatchWriter [13] reported much higher entry speed than SwipeRing. Both techniques use aggressive statistical models with a miniature QWERTY to account for frequent incorrect target selection due to the smaller key sizes (the "fat-finger problem" [52]). This makes entering out-of-vocabulary words difficult with these techniques. In fact, the former technique does not include a mechanism for entering out-of-vocabulary words [59, p. 58]. 
DualKey [15] and SwipeBoard [5] also reported higher entry speeds than SwipeRing. However, DualKey depends on external hardware to distinguish between different fingers and has a steep learning curve (the reported entry speed was achieved in the 15th session). SwipeBoard, on the other hand, was evaluated on a tablet computer, hence it is unclear whether the reported entry speed can be maintained on an actual smartwatch. Besides, all of these keyboards occupy about ${45} - {85}\%$ of the screen real estate, leaving little room for displaying the entered text, let alone additional information.

There was a significant effect of block and technique $\times$ block on entry speed. Entry speed increased by 38% with C-QWERTY and 43% with SwipeRing in the last block compared to the first. The average entry speed over blocks for both techniques correlated well with the power law of practice [4] $\left( {{R}^{2} = {0.9588}}\right)$ . However, the learning curve for C-QWERTY was flattening out by the last block, while SwipeRing's was still rising. Analysis revealed that entry speed improved by 2% with C-QWERTY and 13% with SwipeRing in the last block compared to the second-last. This suggests that SwipeRing did not reach its highest possible speed in the study. Notably, the highest entry speed recorded in the study was 33.18 WPM (P23, Block 6).

There was a significant effect of technique on error rate. SwipeRing was significantly more accurate than C-QWERTY (Fig. 11, top). The average error rates with C-QWERTY and SwipeRing were 12.52% and 5.56%, respectively, in the last block (56% fewer errors with SwipeRing).

 < g r a p h i c s > 

Figure 12: Median user ratings of the willingness to use, ease of use, learnability, perceived speed, and perceived accuracy of SwipeRing and C-QWERTY on a 7-point Likert scale, where "1" to "7" represented "Strongly Disagree" to "Strongly Agree". The error bars signify $\pm 1$ standard deviation (SD).
This is unsurprising since the designers of C-QWERTY also reported a high error rate with the technique (20.6%) using the same TER metric [7], which accounts for both corrected and uncorrected errors in the transcribed text [48]. Most text entry techniques for smartwatches report character error rate (CER), which only accounts for the uncorrected errors in the transcribed text [2]. Most errors with C-QWERTY were committed due to incorrect target selection since the keys were too small. SwipeRing yielded a lower error rate due to the larger zones that were designed to accommodate precise target selection.

There was a significant effect of block on error rate. Participants committed 13% fewer errors with C-QWERTY and 29% fewer errors with SwipeRing in the last block compared to the first. The average error rate over blocks correlated moderately for C-QWERTY $\left( {{R}^{2} = {0.5895}}\right)$ but well for SwipeRing $\left( {{R}^{2} = {0.8706}}\right)$ with the power law of practice [4]. Hence, it is likely that SwipeRing will become much more accurate with practice.

Actions per word yielded a similar pattern as error rate. SwipeRing consistently required fewer actions to enter words than C-QWERTY (Fig. 11, bottom). C-QWERTY and SwipeRing required on average 2.45 and 1.59 actions per word in the last block, respectively (35% fewer actions with SwipeRing). This is mainly because participants performed fewer corrective actions with SwipeRing than with C-QWERTY. There was also a significant effect of block. The average actions per word over blocks correlated well for SwipeRing $\left( {{R}^{2} = {0.8459}}\right)$ but not for C-QWERTY $\left( {{R}^{2} = {0.4999}}\right)$ with the power law of practice [4]. This suggests that actions per word with SwipeRing is likely to further improve with practice.

Qualitative results revealed that the SwipeRing group found the examined technique faster and more accurate than the C-QWERTY group did (Fig. 12).
These differences were statistically significant. Consequently, the SwipeRing group was significantly more interested in using the technique on their devices than the C-QWERTY group. However, both techniques were rated comparably on ease of use and learnability, which is unsurprising since both techniques used similar layouts. + +We compared the performance of C-QWERTY in our study with the results from the literature to find out whether conducting the study remotely affected its performance. Costagliola et al. [7] reported a 7.7 WPM entry speed with a ${20.6}\%$ error rate on a slightly larger smartwatch using the same phrase set in a single block containing 6 phrases. In our study, C-QWERTY yielded a comparable 7 WPM and 16.8% error rate in the first block containing 10 phrases. + + < g r a p h i c s > + +Figure 13: Average entry speed (WPM) per block for the two user groups with the two techniques fitted to power trendlines. Note the scale on the vertical axis. + +§ 7.1 SKILL TRANSFER FROM VIRTUAL QWERTY + +Learning occurred with both experienced and inexperienced gesture typists with both techniques. The average entry speed over block correlated well with the power law of practice for C-QWERTY with both user groups (experienced: ${R}^{2} = {0.8142}$ , inexperienced: ${R}^{2} = {0.9547}$ ), also for SwipeRing (experienced: ${R}^{2} = {0.8732}$ , inexperienced: ${R}^{2} = {0.9702}$ ). Although there were not enough data points to run statistical tests, average performance over blocks suggests that experienced participants were performing much better with both techniques from the start. Fig. 13 shows that experienced participants consistently performed better than inexperienced participants. This suggests that experienced participants were able to transfer their smartphone gesture typing skills to both techniques. However, with C-QWERTY, inexperienced participants almost caught up with the experienced participants by the last block. 
With SwipeRing, in contrast, both user groups learned at comparable rates in all blocks. Moreover, both experienced and inexperienced participants consistently performed better with SwipeRing than with C-QWERTY. This points to the possibility that optimizing the zones for gesture similarity facilitated a higher rate of skill transfer. As blocks progressed, experienced participants most probably became more confident, consciously or subconsciously, in applying their gesture typing skills to SwipeRing.

Interestingly, the error rate and actions-per-word patterns were quite different from the patterns observed in entry speed. With SwipeRing, experienced participants were consistently better than inexperienced users, while inexperienced participants were learning to be more accurate. The average error rate over blocks correlated well with the power law of practice [4] for inexperienced participants $\left( {{R}^{2} = {0.7019}}\right)$ but not for experienced participants $\left( {{R}^{2} = {0.3329}}\right)$ . We speculate this is because experienced participants made fewer errors than inexperienced participants, which required performing fewer corrective actions (a phenomenon reported in the literature [3]). In contrast, with C-QWERTY, experienced participants committed more errors, requiring more corrective actions. In Fig. 14, one can see that experienced participants' error rates and actions per word went up and down in alternating blocks. We do not have a definite explanation for this, but based on user comments we speculate that experienced participants were trying to apply their gesture typing skills to C-QWERTY, only committing more errors due to the smaller target size, and then reduced their speed in the following block to increase accuracy. This process continued until the end of the study.
In contrast, actions per word continued improving for both experienced ($R^2 = 0.7171$) and inexperienced ($R^2 = 0.8012$) participants with SwipeRing. This suggests that optimizing the zones for precise target selection facilitated a higher rate of skill acquisition. + +Figure 14: Average error rate (TER) (top) and average actions per word (APW) (bottom) per block for the two user groups with the two techniques fitted to power trendlines. Note the scale on the vertical axis. + +§ 8 CONCLUSION + +We presented SwipeRing, a circular keyboard arranged around the bezel of a smartwatch to enable gesture typing with the support of a statistical decoder. It also enables character-based text entry by using a multi-tap-like approach. It divides the layout into seven zones and maintains a resemblance to the standard QWERTY layout. Unlike most existing solutions, it does not occupy most of the touchscreen real estate or require repeated actions to enter most words. Yet, it employs the whole screen for drawing gestures, which is more comfortable than drawing gestures on a miniature QWERTY. The keyboard is optimized for target selection and to maintain similarities between the gestures drawn on a smartwatch and a virtual QWERTY to facilitate skill transfer between devices. We compared the technique with C-QWERTY, a similar technique that uses almost the same layout to enable gesture typing but does not divide the keyboard into zones or optimize the zones for target selection and gesture similarity. In the study, SwipeRing yielded a 33% higher entry speed, 56% lower error rate, and 35% lower actions per word than C-QWERTY in the last block. The average entry speed with SwipeRing was 16.67 WPM, faster than all popular circular and most QWERTY-based text entry techniques for smartwatches.
Results suggest that skilled gesture typists were able to transfer their skills to SwipeRing. Moreover, participants found the keyboard easy to learn, easy to use, fast, and accurate, and thus wanted to continue using it on smartwatches. + +§ 8.1 FUTURE WORK + +The proposed keyboard could be useful in saving touchscreen real estate on larger devices, such as smartphones and tablets. The keyboard could appear on the screen like a floating widget, where users perform gestures to enter text. We will explore this possibility. We will also explore SwipeRing in virtual and augmented reality by using a smartwatch or different types of controllers. Finally, we will investigate the possibility of eyes-free text entry with SwipeRing. We speculate that, once the positions of the zones are learned, users will be able to perform the gestures without visual aid. This can make the whole touchscreen available to display additional information by making the keyboard invisible. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/iSSYEihSvdY/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/iSSYEihSvdY/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..26387e236ad25459024c464b3804b42aeb5bd410 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/iSSYEihSvdY/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,281 @@ +# Exploring Alternative Methods of Visualizing Patient Data + +Author 1* Author 2† Author 3 ‡ + +## Abstract + +Patient data visualization can help healthcare providers gain an overview of their patient's condition and assist in decision-making about the next steps in management and communication.
We explore the acceptance and opinion of five different visualizations that can be used to summarize patient data: a Text Summary, a text- and frequency-based Word Cloud, a Bar Graph, a time-based Line Graph, and a newly developed Text Graph that combines text and time-based distribution. A user study with 15 professional healthcare providers, 16 first- or second-year medical students, and 17 third-or-greater-year medical students was conducted to compare the preferences and opinions of the three groups. Most visualizations were found to be useful and received positive feedback from users. Notably, the Text Summary and Text Graph were rated the most useful visualizations by all three groups for extracting patient health information. + +Keywords: Healthcare data visualization, Health information, Patient management. + +Index Terms: Human-centered computing - Visualization - Visualization techniques - Graph drawings. + +## 1 INTRODUCTION + +Health care providers (HCPs) are health professionals whom a person sees when they are in need of medical care or advice. This may include general practitioners, medical specialists, nurses, etc. [1]. Part of the continuity of this care/advice is documenting the interactions with, and measurements of, the people they see, which becomes the patient's medical record. HCPs use the information found in a patient's medical record to support their decisions on patient care and management [2]. Patient medical records consist of clinical notes and patient data including demographics, laboratory results, radiographic images, problem lists, medication lists, etc., that have been gathered via official requisitions and use approved, validated measurement techniques [3], [4]. A recent trend in medical record data collection is patient-generated data, where patients or their caregivers record or gather their own health data.
The information collected may include health symptoms, lifestyle choices, biometric data, etc. [5]. Specialized technology that can either be provided by an HCP or is publicly available, such as a FitBit™, is often used for collecting patient-generated data. However, these data are often considered to be informal and less trustworthy because of their periodic inaccuracies [6]. The quantity of patient data collected can be overwhelming for HCPs to process, as there can be many individual data items in different styles and formats from a large variety of sources [7]-[9]. The information overload increases even further for patients with chronic illnesses as patient data accumulates over time [8], [10]. In addition, sorting through and interpreting patient records can be time-consuming and intensive. HCPs are under constant time pressure due to the amount and complexity of patient cases they have under their care. Gathering information, interpreting it, and deciding the next step for their patient must be done as quickly as possible [11], [12]. There is a need to have patient data presented in a concise and summarized manner, allowing healthcare providers to efficiently access relevant data and use it to manage their patients' care [9]. + +MyHealthMyRecord (MHMR) is designed to allow patients to self-produce brief audio-video recordings of their experiences in-between visits to their healthcare provider [13]. A feature of this system discussed in this paper presents and summarizes the patient-generated data in a variety of graphical and text formats: a Text Summary, a text-based Word Cloud, frequency- and time-based graphs, and a newly developed Text Graph that combines text and time-based distribution. + +The research questions we aim to answer are: 1) How do healthcare providers interpret visual summaries of patient-generated data presented in the different MHMR visualization formats?
and 2) What are the preferences and acceptability of these visualizations for managing patient care? In this paper, we will present the design and implementation of five MHMR visualizations that include a Text Summary, Word Cloud, Bar Graph, Line Graph, and Text Graph. We will then present and discuss our findings from a qualitative evaluation with 15 professional HCPs and 33 medical students. Because these individuals vary in their level of medical training, we want to investigate whether there is a difference between how they perceive the five visualizations and how useful they find them in extracting patient health information. The results and discussion cover the users' ratings of the different visualizations based on usefulness and their opinions of them. + +## 2 BACKGROUND + +Data visualization is the use of graphics to illustrate information [14] and the best visualizations are those that "convey complex ideas in a concise and intuitive way" [15, p.2]. Rau et al., [16] explain that the purpose of data visualization is to explore, confirm, and present data. Exploring data enables one to efficiently find trends and outliers and understand what the data is suggesting. Using visualization to confirm means to accept or reject a hypothesis based on the visually depicted data. Finally, findings can be presented and communicated to others enabling them to understand and make sense of the data. + +### 2.1 Methods of visualizing data + +One common method of visualizing data is using a graph that displays a relationship between two or more variables within a dataset [17]. One type of graph that is often used to depict continuous data is a line graph. A line graph connects data points displayed on a two-dimensional scale. An advantage of a line graph is that it can highlight trends [18], and multiple continuous datasets can be plotted on the same graph for comparison.
However, when reading line graphs, individuals spend less time viewing the trends and more time relating different graphical features such as axis and graph titles or data point labels to one another to make sense of the graph [19]. Another example of a common graph is a bar graph, created with the use of vertical or horizontal columns. A bar graph compares a single variable (often the dependent variable) against several variables, and each column represents one group. The greater the length of the column, the greater the value with respect to the dependent variable [17]. + +--- + +* email: author 1 + +† email: author 2 + +$\ddagger$ email: author 3 + +--- + +Word clouds are another method of data visualization. They summarize a body of text by illustrating the words that occur most frequently [20]. A high-frequency word will be shown in the word cloud in large font, and any words mentioned less frequently in comparison will be displayed in a smaller font or not included at all [21]. Text features and word placements are often used to create a word cloud [22]. Text features describe the font colours, font weight, and font size. Word placement then describes the layout of the word cloud, ranging from sorted (e.g., alphabetically), to semantically clustered (e.g., placing all nouns together), to spatially laid out (i.e., unordered placement of words) [22]. Word clouds are useful for four main activities: searching, browsing, impression forming, and recognizing or matching [22]. When an individual uses the word cloud to search, they look for cues such as font size or colour to get a sense of the organization and frequencies of words as proxies for underlying concepts. When users browse a word cloud, they get an overview of the text properties, forming impressions of which concepts are important and inferring information about the text data underlying the word cloud, and in some cases identifying themes that emerge within the dataset [22].
+ +### 2.2 Healthcare data visualization + +Some examples of early work in healthcare data visualization include a one-page detailed graphical summary proposed by Powsner and Tufte [23], and Lifelines by Plaisant et al., [24]. The graphical summary could reveal patient condition status to physicians by plotting numerical patient data as a variation of a scatter plot, then adding doctor's notes and relevant medical images on the same page [23]. Lifelines displays a patient's history as a timeline where patient visitation dates are on the horizontal axis and information such as patient concerns, diagnoses, medications, etc. are presented on the vertical axis as dots or horizontal lines depending on their duration [24]. Both of these studies offer only one type of visualization for users. The graphical summary by Powsner and Tufte displays the patient data as a variation of a scatter plot, and Lifelines has a single mode of display as a timeline. The MHMR system provides five different visualizations so that users can choose what works best for them when viewing and understanding patient data. + +In recent work, Sultanum et al., [25] developed the Doccurate system. It presents patient records on a timeline similar to the Lifelines system but there is also a text panel showing associated clinical notes for each patient session. The text panel was added because the clinical text is the physician's primary source of information as they use it to obtain a general understanding of the patient condition, to recall information, and while answering patient-related questions [12]. Since text-based patient records and annotations are a dominant part of physicians' practice, they prefer text as it is a "familiar place to return to" [12, p. 10]. In Doccurate, a filtering system allows physicians to select particular medical condition terms. As more mentions of these filtering terms occur in the written notes, the terms appear in larger fonts on the timeline.
A user study with one physician and five residents was conducted to evaluate the system. The participants were asked to compare the patient's information gathered with the system using a set of filter terms they generated themselves and with a set of predefined terms. They attempted to gather information using both sets of terms for two patients. The main findings from this study were that the participants were content with the system but generally had a low level of trust in the automation used in Doccurate as it made classification errors [25]. The use of different font sizes to highlight words in Doccurate is similar to MHMR's Word Cloud and Text Graph features. The difference is that in Doccurate the font sizes draw attention to certain visitation dates and in Word Cloud and Text Graph they draw attention to the symptoms themselves. Like the graphical summaries [26] and Lifelines [24], Doccurate also has one method of visualization (a timeline), limiting the user's choice in how they want the patient data displayed. + +The use of a timeline and different font sizes to emphasize instances is also seen in the Harvest system [27]. Harvest displays patient data as a longitudinal timeline, problem cloud, and doctor's notes. The problem clouds (word clouds) demonstrate concepts extracted from the notes based on the frequency of mention. This system was based on the work of Reichert et al., [8] who asked physicians to create patient record summaries. The aim was to determine on which section of the patient record the physicians would spend most of their time to create these summaries. Similar to the findings of Sultanum et al., [12], the notes section was used the most by the participants indicating that perhaps reading text summaries is the easiest way for physicians to achieve an overview. Since users preferred text in that study, the researchers kept the essence of that in the form of the problem clouds and the ability to view notes. 
The use of word clouds to display concerns of the patients is similar to the Word Cloud visualization in MHMR that displays the most frequently mentioned words in the patient videos. Harvest, however, does not display a day-to-day variation of a particular symptom. The timeline simply allows users to switch between different appointment dates. The Line Graph and Text Graph in MHMR illustrate the distribution of a particular symptom over time. + +The MHMR system is a mobile application that patients can use to audio-video record their experiences in-between HCP visits. A case study with one patient was conducted to evaluate the use of MHMR and explore the topics and issues that arose during the patient's journey. The patient, who was diagnosed with a chronic disease, used a tablet version of MHMR for three months and documented the frustrations or barriers they faced. This study found that the patient was willing and able to create videos about their experience and that there were readily identifiable themes related to health, pain, and accessibility issues. While the task of making video entries may be doable and worthwhile for patients, the information contained in the videos could also be useful to their HCP in understanding events, activities and issues that arise between visits and that may affect the patient's ability to manage their health conditions. However, patients may generate a large number of video materials. Asking HCPs to watch, analyze and understand a large set of videos in the time that is usually allocated to individual patients for an HCP visit is unrealistic. The next step in developing MHMR is, thus, determining how to organize the large quantity of data so that it can be useful to HCPs in managing their patients' care by incorporating the concerns, activities and progress identified in the videos into consultations and decision-making [13].
+ +## 3 Method + +A user study was designed to evaluate the usability and visualization preferences of three different groups of HCPs (first- or second-year medical students, third-or-greater-year medical students, and professional healthcare providers). The study was conducted online with a desktop computer using Zoom conferencing services [28]. A prototype web application for the visualization aspects of MHMR was developed and deployed to GitHub Pages [29] for the online study. In addition to the five visualizations, the prototype also had a set of six samples of 30-second personal health videos. These videos represented examples of a patient's perspective on pain over a period of time. This study was approved by the university's ethics board. + +### 3.1 MHMR Visualizations + +Currently, there are five visualizations that are used in the data visualization study for MHMR: Word Cloud, Bar Graph, Line Graph, Text Graph, and Text Summary. These are made available in a mobile app that simulates the visualization functionality that could be used in the prototype MHMR application. To generate the visualizations, audio from the patient videos was first transcribed into text using IBM Watson's speech-to-text feature [30]. Common "stop" words such as "the, and, but, how, a, etc." are eliminated from the transcript using a natural language toolkit inside a Python script. The remaining words and their frequency of occurrence in the videos are then used for creating the five visualizations. In this section, the examples show the occurrences of words from a sample patient video set. + +#### 3.1.1 Word Cloud + +The Word Cloud used in this MHMR data visualization study was created using an online tool, WordItOut [31] (see Figure 1). The tool allowed for the selection of font colour and style that would match the other visualizations. All words extracted from the transcript were added to WordItOut.
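As a rough sketch of the preprocessing step described above, the following Python fragment tokenizes a transcript, removes stop words, and counts the remaining words. The stop list here is a small illustrative stand-in for the full list shipped with a natural language toolkit, and the sample transcript is hypothetical:

```python
import re
from collections import Counter

# Illustrative subset of an English stop list; the MHMR script uses the
# full list from a natural language toolkit.
STOP_WORDS = {"the", "and", "but", "how", "a", "i", "my", "is", "was",
              "in", "of", "to", "it", "this", "that", "very", "made"}

def word_frequencies(transcript: str) -> Counter:
    """Lowercase, tokenize, drop stop words, and count the rest."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS)

freqs = word_frequencies(
    "This morning the pain in my ankle was very bad, "
    "and walking to the kitchen made the pain worse."
)
# freqs now maps e.g. "pain" -> 2, "morning" -> 1
```

Counts of this form drive all five visualizations; word-cloud font sizes, for instance, scale with these frequencies.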
Within the MHMR data visualization tool, the user was able to sort and organize the word cloud by selecting a minimum number of word occurrences, e.g., 5 or more instances, 10 or more instances, and more than 30 instances. The user was also able to filter between "All words" and "Physical health symptoms". The "All words" option displayed all words that were present in the transcript, providing a birds-eye view of all of the experiences reported by the patient during the recording period. When the word cloud was limited to only "Physical health symptoms", HCPs could focus only on health-related issues. + +#### 3.1.2 Bar Graph + +The Bar Graph was generated using the ten most frequently mentioned words in the videos and displays them with five sorting and filtering options (see Figure 2). The x-axis represents the words, and the y-axis represents the frequency of the word in the videos. The filters allowed users to choose between the top ten words mentioned, physical health symptoms, or the top ten words with additional adjectives or nouns. For example, the word "swollen" from the "Top ten words mentioned" graph became "Ankle swollen" in the "More description" graph. The user was able to sort the words in the graph alphabetically, from highest to lowest mention, by positive or negative sentiment, and by ranges. Grouping by positive and negative organized the bar graph based on the sentiment of the videos. Colour was used to indicate whether the word had a positive or negative sentiment (orange is used for negative and green for positive). For example, one of the words in the bar graph was "walking"; once a user applied the positive/negative sentiment filter, the bar became green, indicating that this word was associated with positive sentiments in the videos. The group-by-ranges filter grouped words together based on frequency.
For example, all words between 10-29 mentions were grouped together, 30-49 in a second grouping, and finally 50-plus in a third. In Figure 2, a dotted rectangle appears surrounding the "Pain" bar. This rectangle represents a button that takes the user to the associated Text Graph of a certain word. Since there was only one Text Graph in the prototype and it represents pain, this Bar Graph had only one button. + +![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_2_971_500_667_716_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_2_971_500_667_716_0.jpg) + +Figure 1: Word Cloud example used in the MHMR data visualization study. + +![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_2_954_1348_670_657_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_2_954_1348_670_657_0.jpg) + +Figure 2: Bar Graph example used in the MHMR data visualization study. + +#### 3.1.3 Line Graph + +The Line Graph (see Figure 3) represents the data as the occurrences of words over specific time intervals. The example in Figure 3 shows the number of negative mentions of pain per day over a two-month recording period. The x-axis represents the time interval in days over the two-month period, and the y-axis represents the number of times the word pain is mentioned with negative sentiment in the videos for a particular day. The clear or black-filled circles plotted at the 0 points indicate, respectively, days when no video was created and days when the videos contained no negative pain words (the legend below the graph indicates the meaning of the clear and black-filled circles). The two points surrounded by the dotted rectangle indicate a button that, when clicked, will take users to the videos created for that particular day. + +![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_3_185_625_680_555_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_3_185_625_680_555_0.jpg) + +Figure 3: Line Graph example used in the MHMR data visualization study.
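The Line Graph's distinction between "no video that day" and "videos with no negative mention" can be captured by encoding the two cases differently in the per-day series. A minimal sketch, in which the word list, function name, and data layout are assumptions for illustration:

```python
from collections import Counter

NEGATIVE_PAIN_WORDS = {"pain", "ache", "aches", "hurts"}  # illustrative list

def daily_negative_mentions(videos_by_day, days):
    """Build the per-day series behind a Line Graph of negative mentions.

    None  -> no video recorded that day        (clear circle at 0)
    0     -> videos exist, no negative words   (black-filled circle at 0)
    n > 0 -> number of negative pain mentions that day
    """
    series = []
    for day in days:
        transcripts = videos_by_day.get(day)
        if not transcripts:
            series.append(None)  # clear circle: nothing recorded
            continue
        counts = Counter(w for t in transcripts for w in t.lower().split())
        series.append(sum(counts[w] for w in NEGATIVE_PAIN_WORDS))
    return series

series = daily_negative_mentions(
    {"day 1": ["the pain was bad today"], "day 2": ["walking felt easy"]},
    ["day 1", "day 2", "day 3"],
)
# series == [1, 0, None]
```

Keeping `None` and `0` distinct lets the plotting layer choose the clear versus black-filled marker without re-reading the transcripts.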
+ +#### 3.1.4 Text Graph + +The newly developed Text Graph adds text markers to maxima and minima points on the time-based Line Graph. In the example shown in Figure 4, the maxima and minima of the graph are labelled either "More pain" or "Less pain", indicating whether the video(s) for that day mentioned pain in a positive or negative manner. The font sizes of these labels vary with the frequency of mention, as in the Word Cloud. The graph also has filters for users to choose to show positive-only points, negative-only points, or both. In addition, users are able to toggle between a coloured and black/white representation. We wanted to determine whether colour could help people interpret a graph populated with the large variety of information displayed on the Text Graph (e.g., lines, labels/text, points, and button indicators). When there is a day where no video is created, or there is no mention of pain, the graph shows clear or black filled-in circles, respectively. Points on the graph shown in Figure 4 with a dotted rectangle are buttons that when selected lead users to the videos created for that day, offering users a drill-down option. + +#### 3.1.5 Text Summary + +The Text Summary (see Figure 5) is a general summary of the videos presented as text-based bullet points created from high-frequency words. It is created using the data from the transcripts. The purpose of the summary is to briefly describe the main patient experiences over the duration of the entire video set. For example, "aches" and "pain" were mentioned often in conjunction with the word "morning". The Text Summary then shows "The patient complained of pain and aches multiple times...On most occasions, complaints of pain and aches were mentioned with "morning"". The user is able to toggle between a general summary and a quantitative summary. The quantitative summary displays numerical data in the text. For example, "59/72 videos mentioned "pain" with a total count of 90 mentions".
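A quantitative summary line of this form can be assembled directly from the transcripts. A minimal sketch, in which the function name and exact wording are illustrative rather than the system's actual implementation:

```python
def quantitative_summary(transcripts, word):
    """Count how many videos mention `word` and the total mentions."""
    word = word.lower()
    videos_with_word = sum(1 for t in transcripts if word in t.lower().split())
    total_mentions = sum(t.lower().split().count(word) for t in transcripts)
    return (f'{videos_with_word}/{len(transcripts)} videos mentioned '
            f'"{word}" with a total count of {total_mentions} mentions')

line = quantitative_summary(
    ["pain in the morning", "the pain pain was bad", "walking was easy"],
    "pain",
)
# line == '2/3 videos mentioned "pain" with a total count of 3 mentions'
```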
+ +![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_3_963_400_679_600_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_3_963_400_679_600_0.jpg) + +Figure 4: Text Graph example used in the MHMR data visualization study. + +Text Summary + +Date Range: 2 Months + +- General summary + +○ Quantitative summary + +- The patient complained of pain and aches multiple times throughout the past 2 months + +- On most occasions, complaints of pain and aches were mentioned with "morning" + +- The patient complained a lot about being tired + +- The patient mentioned walking being easy multiple times + +Figure 5: Text Summary example used in the MHMR data visualization study. + +### 3.2 Participants + +A total of 48 healthcare providers and individuals in medical school were recruited for the user study. The three groups were: students currently in their first or second year of medical school (16 in total), students in their third or greater year of medical school (17 in total), and finally healthcare providers with two or more years of work experience in the healthcare industry (15 in total). The healthcare providers were from a range of disciplines including registered nurses, graduate nursing students, medical residents, a general practitioner, a nutritionist, and a behaviour therapist. These three groups were chosen because they vary in their experience of medical training. Previous studies [8], [12] have shown that professional HCPs like to use clinical notes to support their decisions, but we wanted to see if the amount of training can play a role in how HCPs like their patient data presented. The first group had just started medical school and lacked exposure to traditional patient data summarizations such as clinical notes. The second group of students had slightly more practical medical training, perhaps including clinical rounds, so they had more experience in handling patient data than the first group.
Finally, the third group were individuals working in the field who had the most exposure and experience in handling patient data. Age, gender and years of experience were collected to ensure that there was a representative sample of the target populations, and years of experience were also used as a grouping variable. The distribution of gender, age, and years of experience can be seen in Figures 6-8. It is important to note that although there were 48 participants, 1 participant chose not to indicate their gender (Figure 6). All participants were given a small token of appreciation. + +![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_4_180_619_625_367_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_4_180_619_625_367_0.jpg) + +Figure 6: Gender of participants. + +![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_4_173_1062_633_387_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_4_173_1062_633_387_0.jpg) + +Figure 7: Age distribution of participants. + +![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_4_169_1544_636_448_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_4_169_1544_636_448_0.jpg) + +Figure 8: Medical experience distribution of participants. + +### 3.3 Study design + +Each study session lasted around 90 minutes and began with a pre-study questionnaire that gathered demographic data as well as comments on the healthcare provider's/medical student's current routine of practice. This was followed by a short training period where the user was introduced to the visualization system as well as a patient vignette and scenario. The vignette explained the patient's condition and their experiences to the user, whereas the scenario set the tone for the user study, explaining what exactly the user would be required to do. Participants were then invited to complete ten tasks while thinking aloud, followed by a short semi-structured interview that gathered their opinion of the visualizations.
Three versions of the ten user tasks were created, and each participant was randomly assigned a version. Each version had the same user tasks but in a different order to eliminate sequence bias. The tasks were designed so that the user had to use the application and the visualizations to answer the questions. For example, one task was "On which day did the patient complain about pain the most?" Notes were also generated on which visualizations the participants used the most for the study tasks, and which ones they struggled to understand. The study ended with a post-study questionnaire that allowed participants to rate the system usability, reflect on their experiences, discuss what they liked/disliked about the system, and make recommendations. In this paper, the results of the post-study questionnaire and observational notes are reported. + +### 3.4 Data collection + +The post-study questionnaires consisted of ten System Usability Scale [32] questions, two questions that allowed participants to choose which visualization(s) they liked the most and which they liked the least, and a 4-point rating question on the perceived usefulness of each visualization. Usefulness was rated with four possible responses: Very useful, Useful, Somewhat useful, Not useful at all. There were also five open-ended questions that allowed participants to use freeform text to write about their opinion and interest in working with the system and its visualizations. + +### 3.5 Data analysis + +Because the data were categorical and ordinal, the questionnaire responses were analyzed using non-parametric statistical methods. A Pearson's chi-squared test was used to determine the significance of the usefulness ratings for each visualization (see Table 1).
Then, a Kruskal-Wallis non-parametric analysis of variance was used to determine whether there were significant differences between the three participant groups in the usefulness ratings of each visualization. Finally, the strength of association between the ratings of different visualizations was assessed using Kendall's tau. + +## 4 Results + +The mean SUS score was 74.90 (SD = 13.80). According to Bangor et al., [33] this is above the score of 68 that represents average usability obtained across a range of studies. In addition, 60.4% of the participants were very likely (rating of 5 on a 1-5 scale) to recommend this system to a friend or colleague. + +### 4.1 Statistical analysis + +A Pearson chi-square test was used to determine the significance of usefulness ratings for each visualization. There was a significant difference in usefulness ratings between the visualizations (p < 0.05; see Table 1 for chi-square values). A Kruskal-Wallis test was then used to determine whether there was a significant difference in usefulness ratings for each visualization between the participant groups. There was no significant difference between groups (p > 0.05). In addition, a contingency table analysis was performed between each pair of visualizations, but the results were not statistically significant at the .05 level. + +### 4.2 Frequency responses + +Figures 9 and 10 show which visualizations were preferred or disliked by the different participant groups. Overall, the most preferred visualizations were the Text Summary and Text Graph. The most disliked graph was the Line Graph. Figure 11 illustrates the frequency of usefulness ratings for each visualization. The Text Summary and Text Graph were also rated the most useful visualizations (rating of very useful, or 4/4), and the Word Cloud and Bar Graph were rated as useful (rating of 3/4). The Line Graph was rated as somewhat useful (rating of 2/4) by 18 participants.
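For reference, the chi-square goodness-of-fit statistic used in this analysis can be computed with the Python standard library alone. The rating counts below are hypothetical, and 7.815 is the standard chi-square critical value for df = 3 at alpha = .05:

```python
def chi_square_gof(observed):
    """Pearson chi-square goodness-of-fit against a uniform expectation."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

# Hypothetical counts of usefulness ratings (1-4) from 48 participants.
ratings = [2, 5, 13, 28]
chi2 = chi_square_gof(ratings)     # 33.83 for these counts
df = len(ratings) - 1              # df = 3
CRITICAL_05 = 7.815                # chi-square critical value, df=3, alpha=.05
significant = chi2 > CRITICAL_05   # True -> ratings deviate from uniform
```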
Table 1: Pearson's chi-square analysis results for each visualization.

|  | Word Cloud | Bar Graph | Line Graph | Text Graph | Text Summary |
| --- | --- | --- | --- | --- | --- |
| Pearson chi-square | $\chi^2(3) = 16.667$, $p = 0.01$ | $\chi^2(3) = 21.883$, $p < 0.05$ | $\chi^2(3) = 10.277$, $p = 0.016$ | $\chi^2(2) = 10.511$, $p = 0.005$ | $\chi^2(2) = 28.625$, $p < 0.05$ |
![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_5_165_998_698_318_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_5_165_998_698_318_0.jpg)

Figure 9: Presentation of most liked visualizations by each group.

![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_5_180_1418_667_311_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_5_180_1418_667_311_0.jpg)

Figure 10: Presentation of most disliked visualizations by each group.

### 4.3 Written responses

All 48 participants commented on their experience and opinion of the visualizations and the application. Most participants had positive reviews of MHMR and its visualization techniques. Participants mentioned that the Text Graph or Text Summary were the most useful visualizations to work with, e.g., "Text Graph, Text Summary [are the most useful] because they give a better and a quick picture [answering] my questions" (P18). Some participants mentioned that what they liked least about the system was the Line Graph, e.g., P6 wrote "Line graphs, I think it is a lot more time consuming and comparatively less helpful than the other techniques." P46 also mentioned that "I think some of the visualizations were redundant (the text graph was a better version of the line graph)". With respect to design and layout, some participants commented negatively on the x-axis labels of the Line Graph and Text Graph because they found them "unclear" (P28) or did not understand the difference between the open and closed circles (P17).

Participants stated that they would be willing to use this system in their practice, especially when monitoring a patient's condition over time or prescribing medication. For example, P13 wrote, "I do weekly check-ins with my clients, it would help me to see their progress as well as help me to pinpoint where changes in their nutritional and exercise plans need to be made."
P31 stated, "As a nurse, you can understand at what time of the day the [patient] experiences more pain, and you can advocate for the [patient] to get pain meds prescribed at certain times of the day." However, there were concerns about how compliant patients would be in using MHMR and how they would be encouraged to record their symptoms as often as possible (P45).

![01963e87-56e1-7e99-9ce1-32b7c99ab5f1_5_942_856_661_427_0.jpg](images/01963e87-56e1-7e99-9ce1-32b7c99ab5f1_5_942_856_661_427_0.jpg)

Figure 11: Presentation of usefulness rating for each visualization.

## 5 Discussion

This study evaluated opinions about, and acceptance of, the data visualizations presented in the MHMR application. In the ratings of preference and usefulness (Figures 9-11) and in the written responses, the Text Summary and Text Graph visualizations tended to be preferred.

Sultanum et al. [12] concluded that text is a familiar method of generating and consuming information about patients for doctors (or HCPs). It is thus not surprising that the Text Summary was preferred, as it resembles the type of information provided in clinical notes.

Given that word clouds are used for searching, browsing, impression forming, and recognizing or matching [22], it was anticipated that participants would find them useful in finding patterns for matching specific health conditions. For some participants this was the case, and they found the Word Cloud to be useful for this purpose, either alone or along with the Text Graph or Text Summary (e.g., P4, P14, P24, and P34). However, for others there was too much disorganized information contained in the word cloud, e.g., one participant (P12) stated: "[Word Cloud] appeared scattered and packed", and thus they did not find it as useful as the Text Summary and Text Graph in gaining information about the patient's condition.
The Bar Graph was the only visualization that had a high usefulness rating (Figure 11) and no negative comments associated with it. This could be due to familiarity with its style and the information it conveyed. For example, P37 liked the Bar Graph because it displayed "the most words said and how many times the patient actually said them." In addition, the Bar Graph has a filter that allows participants to sort words from highest to lowest frequency. This feature was used repeatedly during the user studies; for example, P11 liked the Bar Graph because "it gives you [the] highest symptoms experienced vs. lowest".

The Line Graph was collectively the least favourite visualization among the participants. This could be because it requires attention to understand the trend and to investigate each point, or because participants had less experience with interpreting line graphs. For example, P16 said "I think there should be more detail to the line graph because I couldn't understand that graph much", and P30 said "I think the line graph was a bit difficult to read".

The Text Graph provided more or less the same information as the Line Graph but combined text labels with the graphical, time-based representation of the data. Yet, the usefulness of the Text Graph was rated much higher than that of the Line Graph: eleven participants who rated the Text Graph as 4/4 (Very useful) rated the Line Graph as 2/4 (Somewhat useful). Carpenter and Shah [19] found that, when viewing line graphs, individuals spend more time relating the different graphical features, such as axes and data point labels, to make sense of the data and less time viewing the patterns or trends. The Text Graph may have helped participants make sense of the data because it highlights the important information for them, so they can focus more on understanding the overall pattern.
In addition, the Text Graph allowed users to drill down to extract more detailed information by viewing specific videos related to those data points. This may have provided the additional detail suggested by P16, or it may have added a sufficient amount of text to take advantage of the familiarity of text favoured in the Text Summary. For example, P32 said "in my opinion, the most useful technique is text-graph [because] it shows day to day variation of patient symptoms...it will help me get a better understanding of my patient [to] evaluate necessary management".

Researchers have found that healthcare providers prefer the notes section in a patient record [8], [12]. The Text Summary in MHMR was similar to clinical notes, so it was expected that most participants would show a preference for the Text Summary and would find it useful. Nonetheless, participants also saw benefits in the other visualizations, particularly the Text Graph, and formed mainly positive opinions of them. The Text Graph was newly developed for this research and was new to all participants. It was designed to exploit the preference for notes and the benefits of visually representing patterns over time, as in a Line Graph. The Text Graph and Text Summary had very similar ratings as well: sixteen participants rated both visualizations as 4/4 (Very useful) and twelve rated both as 3/4 (Useful), indicating that both visualizations were useful in extracting information about patients' status and conditions. The Word Cloud was also text-based, but it was mainly rated "useful" (3/4) rather than "very useful" (4/4) like the Text Summary and Text Graph. This could be because the word cloud was new to participants; they were more comfortable with the Text Summary and Text Graph but were open to trying the word cloud as well.

### 5.1 Limitations

This study evaluated the acceptance and opinion of data visualizations presented in the MHMR application.
The statistical analysis of post-study questionnaires showed no significant differences between the groups (HCPs, first- or second-year medical students, third-year or greater medical students). One reason could be the limited sample size: although data from 48 participants were analyzed, each group contained only around 16 participants.

#### 5.1.1 Demographics

There were no differences in the usefulness ratings between groups, and we suggest that a larger sample may elicit differences. The HCPs were mostly nurses, who may have different experiences than doctors or other types of HCPs. Future studies should incorporate a more diverse set of participants varying in roles. The students recruited were mainly from the same geographic location; thus, geographically diverse samples, and the impact of different training regimes between jurisdictions, should also be studied.

#### 5.1.2 Online study

One technical limitation was that this was an online study. Technical difficulties, such as Internet issues for several participants, slowed the process of viewing and interpreting the visualizations, causing frustration and impatience among participants. This may have impacted their views, and they may have been distracted by the technical issues. Another limitation of the online study was that MHMR is intended as a mobile application but was evaluated using a web application. The user interface resembled the look of a mobile application to mimic what the user may see in a mobile app. Because users had screens with different aspect ratios, the user interface could appear widened or disproportionate depending on the size of the screen, and sometimes the participants were not able to see the entire application at once and had to scroll up or down.
This may have had an impact on the participants' view of the application and the visualizations, as some information may have been hidden or not clearly visible on their screen. Future studies need a consistent screen display for all participants to eliminate this limitation.

#### 5.1.3 Visualizations

Another limitation of the study was the number of visualizations presented. Our study presented five visualizations; however, there are a number of other ways to present data. Other common graphs include scatter plots, pie charts, histograms, etc. In addition, there are a number of ways to add or remove details from graphs to create variation. The Text Graph added text labels on top of a time-based line graph for more information, but even simply removing data labels from the Bar Graph can potentially create a difference in understanding of the graph. Future studies should incorporate different types of visualization techniques and assess how different graphical features play a role in the understanding of the data.

## 6 CONCLUSION

The design and implementation of five visualizations depicting patient-generated data were presented in this paper. Participant data from three different groups, representing a spectrum of healthcare providers in terms of their education and experience, were evaluated for comparison and correlation. Quantitatively, there was no difference between the groups in their preferences and opinions of the visualizations. The results, however, did show positive attitudes towards the visualizations, particularly the Text Graph and Text Summary. The Text Summary was similar to the notes section in a patient record, so, as anticipated, it was the most preferred and was rated the most useful by the users. However, the Text Graph, despite being something the users had not seen before, was as useful as the Text Summary. Many participants were also interested in using this application in their future clinical practice.
Future work needs to incorporate a larger sample size and a more diverse group of participants. The visualizations also need to be automated and tested for their accuracy in depicting the correct words spoken by the patient and associating the correct sentiment with those words.

## ACKNOWLEDGMENT

Details omitted for anonymous reviewing.

## REFERENCES

[1] Statistics Canada, "Health Fact Sheets - Primary health care providers, 2017," Stat. Canada, Cat. no. 82-625-X, no. 6, 2019, [Online]. Available: https://www150.statcan.gc.ca/n1/en/pub/82-625-x/2019001/article/00001-eng.pdf?st=h7UdG51j.

[2] J. C. Wyatt and F. Sullivan, "What is health information?," BMJ, vol. 331, no. 7516, p. 566, 2005, doi: 10.1136/bmj.331.7516.566.

[3] C. R. Keenan, H. H. Nguyen, and M. Srinivasan, "Using Technology to Teach Clinical Care: Electronic Medical Records and Their Impact on Resident and Medical Student Education," 2006. Accessed: Aug. 03, 2020. [Online]. Available: http://ap.psychiatryonline.org.

[4] M. Mintz, H. J. Narvarte, K. E. O'Brien, K. K. Papp, M. Thomas, and S. J. Durning, "Use of electronic medical records by physicians and students in academic internal medicine settings," Acad. Med., vol. 84, no. 12, pp. 1698-1704, 2009, doi: 10.1097/ACM.0b013e3181bf9d45.

[5] M. Shapiro, D. Johnston, J. Wald, and D. Mon, "Patient-Generated Health Data," Apr. 2012.

[6] L. M. Feehan et al., "Accuracy of Fitbit devices: Systematic review and narrative syntheses of quantitative data," JMIR mHealth and uHealth, vol. 6, no. 8, JMIR Publications, Aug. 01, 2018, doi: 10.2196/10527.

[7] O. Farri, D. S. Pieckiewicz, A. S. Rahman, T. J. Adam, S. V. Pakhomov, and G. B. Melton, "A qualitative analysis of EHR clinical document synthesis by clinicians," AMIA Annu. Symp. Proc., vol. 2012, pp. 1211-1220, 2012, Accessed: Jul. 03, 2020. [Online]. Available: /pmc/articles/PMC3540510/?report=abstract.

[8] D. Reichert, D. Kaufman, B. Bloxham, H. Chase, and N.
Elhadad, "Cognitive analysis of the summarization of longitudinal patient records," AMIA Annu. Symp. Proc., vol. 2010, pp. 667-671, Nov. 2010, Accessed: Mar. 16, 2020. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/21347062.

[9] W. Hsu, R. K. Taira, S. El-Saden, H. Kangarloo, and A. A. T. Bui, "Context-Based Electronic Health Record: Toward Patient Specific Healthcare," IEEE Trans. Inf. Technol. Biomed., vol. 16, no. 2, 2012, doi: 10.1109/TITB.2012.2186149.

[10] R. Pivovarov and N. Elhadad, "Automated methods for the summarization of electronic health records," J. Am. Med. Informatics Assoc., vol. 22, no. 5, pp. 938-947, 2015, doi: 10.1093/jamia/ocv032.

[11] C. Bossen and L. G. Jensen, "How physicians 'achieve overview': A case-based study in a hospital ward," in Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW, 2014, pp. 257-268, doi: 10.1145/2531602.2531620.

[12] N. Sultanum, M. Brudno, D. Wigdor, and F. Chevalier, "More text please! Understanding and supporting the use of visualization for clinical text overview," in Conference on Human Factors in Computing Systems - Proceedings, Apr. 2018, vol. 2018-April, doi: 10.1145/3173574.3173996.

[13] Details omitted for anonymous reviewing.

[14] S. Few, "Data Visualization - Past, Present, and Future," 2007.

[15] A. Arcia et al., "Method for the development of data visualizations for community members with varying levels of health literacy," 2013.

[16] R. Rau, C. Bohk-Ewald, M. M. Muszyńska, and J. W. Vaupel, "Introduction: Why Do We Visualize Data and What Is This Book About?," Springer, Cham, 2018, pp. 1-4.

[17] D. Slutsky, "The Effective Use of Graphs," J. Wrist Surg., vol. 03, no. 02, pp. 067-068, May 2014, doi: 10.1055/s-0034-1375704.

[18] D. Peebles and N. Ali, "Expert interpretation of bar and line graphs: The role of graphicacy in reducing the effect of graph format," Front. Psychol., vol. 6, no. OCT, pp.
1-11, 2015, doi: 10.3389/fpsyg.2015.01673.

[19] P. A. Carpenter and P. Shah, "A Model of the Perceptual and Conceptual Processes in Graph Comprehension," American Psychological Association, 1998.

[20] F. Heimerl, S. Lohmann, S. Lange, and T. Ertl, "Word cloud explorer: Text analytics based on word clouds," in Proceedings of the Annual Hawaii International Conference on System Sciences, 2014, pp. 1833-1842, doi: 10.1109/HICSS.2014.231.

[21] B. B. Sellars, D. R. Sherrod, and L. Chappel-Aiken, "Using word clouds to analyze qualitative data in clinical settings," Nurs. Manage., vol. 49, no. 10, pp. 51-53, Oct. 2018, doi: 10.1097/01.NUMA.0000546207.70574.c3.

[22] A. W. Rivadeneira, D. M. Gruen, M. J. Muller, and D. R. Millen, "Getting our head in the clouds: Toward evaluation studies of tagclouds," Conf. Hum. Factors Comput. Syst. - Proc., Jan. 2007, pp. 995-998, doi: 10.1145/1240624.1240775.

[23] E. Tufte and S. M. Powsner, "Graphical summary of patient status," 1994.

[24] C. Plaisant, R. Mushlin, A. Snyder, J. Li, D. Heller, and B. Shneiderman, "LifeLines: Using visualization to enhance navigation and analysis of patient records," 1998, doi: 10.1016/b978-155860915-0/50038-x.

[25] N. Sultanum, D. Singh, M. Brudno, and F. Chevalier, "Doccurate: A Curation-Based Approach for Clinical Text Visualization," IEEE Trans. Vis. Comput. Graph., vol. 25, no. 1, pp. 142-151, Jan. 2019, doi: 10.1109/TVCG.2018.2864905.

[26] S. M. Powsner and E. R. Tufte, "Graphical summary of patient status," Lancet, vol. 344, no. 8919, pp. 386-389, Aug. 1994, doi: 10.1016/S0140-6736(94)91406-0.

[27] J. S. Hirsch et al., "HARVEST, a longitudinal patient record summarizer," J. Am. Med. Informatics Assoc., vol. 22, no. 2, pp. 263-274, 2015, doi: 10.1136/amiajnl-2014-002945.

[28] "Video Conferencing, Web Conferencing, Webinars, Screen Sharing - Zoom." https://zoom.us/ (accessed Dec. 18, 2020).
[29] "GitHub Pages | Websites for you and your projects, hosted directly from your GitHub repository. Just edit, push, and your changes are live." https://pages.github.com/ (accessed Dec. 18, 2020).

[30] "Watson Text to Speech Demo." https://www.ibm.com/demos/live/tts-demo/self-service/home (accessed Dec. 18, 2020).

[31] "Create word clouds - WordItOut," 2020. https://worditout.com/word-cloud/create (accessed Dec. 13, 2020).

[32] J. Brooke, "SUS: A 'quick and dirty' usability scale," in Usability Evaluation in Industry, p. 189, 1996.

[33] A. Bangor, P. Kortum, and J. Miller, "Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale," 2009.

§ EXPLORING ALTERNATIVE METHODS OF VISUALIZING PATIENT DATA

Author 1* Author 2† Author 3 ‡

§ ABSTRACT

Patient data visualization can help healthcare providers gain an overview of their patient's condition and assist in decision-making about the next steps in management and communication. We explore the acceptance and opinion of five different visualizations that can be used to summarize patient data, including a Text Summary, a text- and frequency-based Word Cloud, a Bar Graph, a time-based Line Graph, and a newly developed Text Graph that combines text and time-based distribution.
A user study with 15 professional healthcare providers, 16 first- or second-year medical students, and 17 third-year or greater medical students was conducted to compare the preferences and opinions of the visualizations between the three groups. Most visualizations were found to be useful and received positive feedback from the users. However, the Text Summary and Text Graph were rated the most useful visualizations for extracting patient health information by all three groups.

Keywords: Healthcare data visualization, Health information, Patient management.

Index Terms: Human-centered computing - Visualization - Visualization techniques - Graph drawings.

§ 1 INTRODUCTION

Health care providers (HCPs) are health professionals whom a person sees when they are in need of medical care or advice. This may include general practitioners, medical specialists, nurses, etc. [1]. Part of the continuity of this care/advice is documenting the interactions with, and measurements of, the people they see, which becomes the patient's medical record. HCPs use the information found in a patient's medical record to support their decisions on patient care and management [2]. Patient medical records consist of clinical notes and patient data, including demographics, laboratory results, radiographic images, problem lists, medication lists, etc., that have been gathered via official requisitions and use approved, validated measurement techniques [3], [4]. A recent trend in medical record data collection is patient-generated data, where patients or their caregivers record or gather their own health data. The information collected may include health symptoms, lifestyle choices, biometric data, etc. [5]. Specialized technology that can either be provided by an HCP or is publicly available, such as a FitBit™, is often used for collecting patient-generated data.
However, these data are often considered to be informal and less trustworthy because of their periodic inaccuracies [6]. The quantity of patient data collected can be overwhelming for HCPs to process, as there can be many individual data items in different styles and formats from a large variety of sources [7]-[9]. The information overload increases even further for patients with chronic illnesses as patient data accumulate over time [8], [10]. In addition, sorting through and interpreting patient records can be time-consuming and intensive. HCPs are under constant time pressure due to the amount and complexity of patient cases they have under their care. Gathering information, interpreting it, and deciding the next step for their patient must be done as quickly as possible [11], [12]. There is a need to have patient data presented in a concise and summarized manner, allowing healthcare providers to efficiently access relevant data and use it to manage their patient care [9].

MyHealthMyRecord (MHMR) is designed to allow patients to self-produce brief audio-video recordings of their experiences in-between visits to their healthcare provider [13]. A feature of this system discussed in this paper presents and summarizes the patient-generated data in a variety of graphical and text formats: a Text Summary, a text-based Word Cloud, frequency- and time-based graphs, and a newly developed Text Graph that combines text and time-based distribution.

The research questions we aim to answer are: 1) How do healthcare providers interpret visual summaries of patient-generated data presented in the different MHMR visualization formats? and 2) What are the preferences and acceptability of these visualizations for managing patient care? In this paper, we will present the design and implementation of five MHMR visualizations: a Text Summary, Word Cloud, Bar Graph, Line Graph, and Text Graph.
We will then present and discuss our findings from a qualitative evaluation with 15 professional HCPs and 33 medical students. Because these individuals vary in their level of medical training, we want to investigate whether there is a difference in how they perceive the five visualizations and how useful they find them in extracting patient health information. The results and discussion cover the users' ratings of the different visualizations based on usefulness and their opinions of the visualizations.

§ 2 BACKGROUND

Data visualization is the use of graphics to illustrate information [14], and the best visualizations are those that "convey complex ideas in a concise and intuitive way" [15, p.2]. Rau et al. [16] explain that the purpose of data visualization is to explore, confirm, and present data. Exploring data enables one to efficiently find trends and outliers and understand what the data are suggesting. Using visualization to confirm means to accept or reject a hypothesis based on the visually depicted data. Finally, findings can be presented and communicated to others, enabling them to understand and make sense of the data.

§ 2.1 METHODS OF VISUALIZING DATA

One common method of visualizing data is using a graph that displays a relationship between two or more variables within a dataset [17]. One type of graph that is often used to depict continuous data is a line graph. A line graph connects data points displayed on a two-dimensional scale. An advantage of a line graph is that it can highlight trends [18], and multiple continuous datasets can be plotted on the same graph for comparison. However, when reading line graphs, individuals spend less time viewing the trends and more time relating different graphical features, such as axes, graph titles, or data point labels, to one another to make sense of the graph [19]. Another example of a common graph is a bar graph, created with the use of vertical or horizontal columns.
A bar graph compares a single variable (often the dependent variable) across several groups, with each column representing one group. The greater the length of the column, the greater the value with respect to the dependent variable [17].

Word clouds are another method of data visualization. They summarize a body of text by illustrating the words that occur most frequently [20]. A high-frequency word will be shown in the word cloud in a large font, and any words mentioned less frequently in comparison will be displayed in a smaller font or not included at all [21]. Text features and word placements are often used to create a word cloud [22]. Text features describe the font colours, font weight, and font size. Word placement then describes the layout of the word cloud, ranging from sorted (e.g., alphabetically), to semantically clustered (e.g., placing all nouns together), to spatially laid out (i.e., unordered placement of words) [22]. Word clouds are useful for four main activities: searching, browsing, impression forming, and recognizing or matching [22]. When an individual uses the word cloud to search, they look for cues such as font size or colour to get a sense of the organization and frequencies of words as proxies for underlying concepts. When users browse a word cloud, they get an overview of the text properties, forming impressions of which concepts are important and inferring information about the text data underlying the word cloud, and in some cases identifying themes that emerge within the dataset [22].

§ 2.2 HEALTHCARE DATA VISUALIZATION

Some examples of early work in healthcare data visualization include a one-page detailed graphical summary proposed by Powsner and Tufte [23], and Lifelines by Plaisant et al. [24].
The graphical summary could reveal patient condition status to physicians by plotting numerical patient data as a variation of a scatter plot, then adding doctor's notes and relevant medical images on the same page [23]. Lifelines displays a patient's history as a timeline where patient visitation dates are on the horizontal axis and information such as patient concerns, diagnoses, medications, etc. is presented on the vertical axis as dots or horizontal lines depending on their duration [24]. Both of these systems offer users only one type of visualization: the graphical summary by Powsner and Tufte displays the patient data as a variation of a scatter plot, and Lifelines has a single display mode, a timeline. The MHMR system provides five different visualizations so that users can choose what is best for them when viewing and understanding patient data.

In recent work, Sultanum et al. [25] developed the Doccurate system. It presents patient records on a timeline similar to the Lifelines system, but there is also a text panel showing associated clinical notes for each patient session. The text panel was added because the clinical text is the physician's primary source of information, as they use it to obtain a general understanding of the patient's condition, to recall information, and to answer patient-related questions [12]. Since text-based patient records and annotations are a dominant part of physicians' practice, they prefer text as it is a "familiar place to return to" [12, p. 10]. In Doccurate, a filtering system allows physicians to select particular medical condition terms. As more mentions of these filtering terms occur in the written notes, the terms appear in larger fonts on the timeline. A user study with one physician and five residents was conducted to evaluate the system.
The participants were asked to compare the patient information gathered with the system using a set of filter terms they generated themselves and using a set of predefined terms. They attempted to gather information using both sets of terms for two patients. The main findings from this study were that the participants were content with the system but generally had a low level of trust in the automation used in Doccurate, as it made classification errors [25]. The use of different font sizes to highlight words in Doccurate is similar to MHMR's Word Cloud and Text Graph features. The difference is that in Doccurate the font sizes draw attention to certain visitation dates, whereas in the Word Cloud and Text Graph they draw attention to the symptoms themselves. Like the graphical summaries [26] and Lifelines [24], Doccurate also has one method of visualization (a timeline), limiting the user's choice in how they want the patient data displayed.

The use of a timeline and different font sizes to emphasize instances is also seen in the Harvest system [27]. Harvest displays patient data as a longitudinal timeline, a problem cloud, and doctor's notes. The problem clouds (word clouds) show concepts extracted from the notes based on their frequency of mention. This system was based on the work of Reichert et al. [8], who asked physicians to create patient record summaries. The aim was to determine on which section of the patient record the physicians would spend most of their time when creating these summaries. Similar to the findings of Sultanum et al. [12], the notes section was used the most by the participants, indicating that perhaps reading text summaries is the easiest way for physicians to achieve an overview. Since users preferred text in that study, the researchers kept the essence of that in the form of the problem clouds and the ability to view notes.
The use of word clouds to display the concerns of patients is similar to the Word Cloud visualization in MHMR, which displays the most frequently mentioned words in the patient videos. Harvest, however, does not display the day-to-day variation of a particular symptom; the timeline simply allows users to switch between different appointment dates. The Line Graph and Text Graph in MHMR illustrate the distribution of a particular symptom over time.

The MHMR system is a mobile application that patients can use to audio-video record their experiences in-between HCP visits. A case study with one patient was conducted to evaluate the use of MHMR and explore the topics and issues that arose during the patient's journey. The patient, who was diagnosed with a chronic disease, used a tablet version of MHMR for three months and documented the frustrations and barriers they faced. This study found that the patient was willing and able to create videos about their experience and that there were readily identifiable themes related to health, pain, and accessibility issues. While the task of making video entries may be doable and worthwhile for patients, the information contained in the videos could also be useful to their HCP in understanding events, activities, and issues that arise between visits and that may affect the patient's ability to manage their health conditions. However, patients may generate a large number of videos. Asking HCPs to watch, analyze, and understand a large set of videos in the time usually allocated to an individual patient's visit is unrealistic. The next step in developing MHMR is, thus, determining how to organize the large quantity of data so that it can be useful to HCPs in managing their patients' care by incorporating the concerns, activities, and progress identified in the videos into consultations and decision-making [13].
+

§ 3 METHOD

A user study was designed to evaluate the usability and visualization preferences of three different groups of HCPs (first- or second-year medical students, third-year or greater medical students, and professional healthcare providers). The study was conducted online with a desktop computer using Zoom conferencing services [28]. A prototype web application for the visualization aspects of MHMR was developed and deployed to GitHub Pages [29] for the online study. In addition to the five visualizations, the prototype also had a set of six samples of 30-second personal health videos. These videos represented examples of a patient's perspective on pain over a period of time. This study was approved by the university's ethics board. +

§ 3.1 MHMR VISUALIZATIONS

Currently, there are five visualizations that are used in the data visualization study for MHMR: Word Cloud, Bar Graph, Line Graph, Text Graph, and Text Summary. These are made available in a mobile app that simulates the visualization functionality that could be used in the prototype MHMR application. To generate the visualizations, audio from the patient videos was first transcribed into text using IBM Watson's speech-to-text feature [30]. Common "stop" words such as "the, and, but, how, a, etc." are eliminated from the transcript using a natural language toolkit in a Python script. The remaining words and their frequency of occurrence in the videos are then used for creating the five visualizations. In this section, the examples show the occurrences of words from a sample patient video set. +

§ 3.1.1 WORD CLOUD

The Word Cloud used in this MHMR data visualization study was created using an online tool, WordItOut [31] (see Figure 1). The tool allowed for the selection of font colour and style that would match the other visualizations. All words extracted from the transcript were added to WordItOut.
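The preprocessing step described above (stop-word removal followed by frequency counting) can be sketched as follows. This is a minimal illustration, not the paper's script: a small hand-coded stop list stands in for the full stop-word corpus, and a plain string stands in for the transcribed video audio.

```python
import re
from collections import Counter

# Stand-in stop list; the paper uses a natural language toolkit's corpus.
STOP_WORDS = {"the", "and", "but", "how", "a", "i", "my", "it",
              "is", "was", "of", "in", "to", "on", "at"}

def word_frequencies(transcript: str) -> Counter:
    """Tokenize a transcript, drop stop words, and count the remaining words."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return Counter(t for t in tokens if t not in STOP_WORDS)

# The resulting counts feed the Word Cloud, Bar Graph, and other views.
freqs = word_frequencies("My ankle is swollen and the pain in my ankle was bad.")
# 'ankle' counts twice; stop words like 'the' and 'my' are dropped.
```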
Within the MHMR data visualization tool, the user was able to sort and organize the word cloud by selecting a minimum number of word occurrences, e.g., 5 or more instances, 10 or more instances, and more than 30 instances. The user was also able to filter between "All words" and "Physical health symptoms". The "All words" option displayed all words that were present in the transcript, providing a bird's-eye view of all of the experiences reported by the patient during the recording period. When the word cloud was limited to only "Physical health symptoms", HCPs could focus only on health-related issues. +

§ 3.1.2 BAR GRAPH

The Bar Graph was generated using the ten most frequently mentioned words in the videos and displayed them with five sorting and filtering options (see Figure 2). The x-axis represents the words, and the y-axis represents the frequency of the word in the videos. The filters allowed users to choose between the top ten words mentioned, physical health symptoms, or the top ten words with additional adjectives or nouns. For example, the word "swollen" from the "Top ten words mentioned" graph became "Ankle swollen" in the "More description" graph. The user was able to sort the words in the graph alphabetically or from highest to lowest mention, and to group them by positive or negative sentiment or by frequency range. Grouping by sentiment organized the bar graph based on the sentiment of the videos. Colour was used to indicate whether the word had a positive or negative sentiment (orange for negative and green for positive). For example, one of the words in the bar graph was "walking"; once a user applied the positive/negative sentiment filter, the bar became green, indicating that this word was associated with positive sentiments in the videos. The group-by-ranges filter grouped words together based on frequency size.
For example, all words between 10 and 29 mentions were grouped together, 30-49 in a second group, and finally 50 plus in a third. In Figure 2, a dotted rectangle appears surrounding the "Pain" bar. This rectangle represents a button that takes the user to the associated Text Graph of a certain word. Since there was only one Text Graph in the prototype and it represents pain, this Bar Graph had only one button. +

Figure 1: Word Cloud example used in the MHMR data visualization study. +

Figure 2: Bar Graph example used in the MHMR data visualization study. +

§ 3.1.3 LINE GRAPH

The Line Graph (see Figure 3) represents the data as the occurrences of words over specific time intervals. The example in Figure 3 shows the number of negative mentions of pain per day over a two-month recording period. The x-axis represents the time interval in days over the two-month period, and the y-axis represents the number of times the word pain is mentioned with negative sentiment in the videos for a particular day. The clear or black-filled circles plotted at zero indicate, respectively, days when no video was created or days whose videos contained no negative pain words (the legend below the graph indicates the meaning of the clear and black-filled circles). The two points surrounded by the dotted rectangle indicate a button that when clicked will take users to the videos created for that particular day. +

Figure 3: Line Graph example used in the MHMR data visualization study. +

§ 3.1.4 TEXT GRAPH

The newly developed Text Graph adds text markers to maxima and minima points on the time-based Line Graph. In the example shown in Figure 4, the maxima and minima of the graph are labelled either "More pain" or "Less pain", indicating whether the video(s) for that day mentioned pain in a positive or negative manner.
The font sizes of these labels vary with the frequency of mention, as in the Word Cloud. The graph also has filters for users to choose to show positive-only points, negative-only points, or both. In addition, users are able to toggle between a coloured and black/white representation. We wanted to determine whether colour could help people interpret a graph populated with the large variety of information displayed on the Text Graph (e.g., lines, labels/text, points, and button indicators). When there is a day where no video is created, or there is no mention of pain, the graph shows clear or black filled-in circles, respectively. Points on the graph shown in Figure 4 with a dotted rectangle are buttons that when selected lead users to the videos created for that day, offering users a drill-down option. +

§ 3.1.5 TEXT SUMMARY

The Text Summary (see Figure 5) is a general summary of the videos presented as text-based bullet points created from high-frequency words. It is created using the data from the transcripts. The purpose of the summary is to briefly describe the main patient experiences over the duration of the entire video set. For example, "aches" and "pain" were mentioned often in conjunction with the word "morning". The Text Summary then shows "The patient complained of pain and aches multiple times...On most occasions, complaints of pain and aches were mentioned with "morning"". The user is able to toggle between a general summary and a quantitative summary. The quantitative summary displays numerical data in the text. For example, "59/72 videos mentioned "pain" with a total count of 90 mentions". +

Figure 4: Text Graph example used in the MHMR data visualization study.
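The labelling rule behind the Text Graph can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: it assumes a simple local-extremum test over per-day mention counts, and the function name, font scaling factor, and return shape are ours.

```python
def label_extrema(daily_counts, base_size=1.0):
    """Attach "More pain"/"Less pain" labels to local maxima/minima of a
    day-indexed series of mention counts, scaling the label's font size
    with frequency as in the Word Cloud.
    Returns a list of (day_index, label, font_scale) tuples."""
    labels = []
    for i in range(1, len(daily_counts) - 1):
        prev, cur, nxt = daily_counts[i - 1], daily_counts[i], daily_counts[i + 1]
        if cur > prev and cur > nxt:      # local maximum: a worse-pain day
            labels.append((i, "More pain", base_size + cur / 10))
        elif cur < prev and cur < nxt:    # local minimum: a better day
            labels.append((i, "Less pain", base_size + cur / 10))
    return labels

# Day 1 is a peak (5 mentions); day 3 is a trough (0 mentions).
marks = label_extrema([2, 5, 1, 0, 3])
```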
+

Figure 5: Text Summary example used in the MHMR data visualization study (a "Date Range: 2 Months" header with a toggle between a General summary and a Quantitative summary, followed by bulleted summary statements).

§ 3.2 PARTICIPANTS

A total of 48 healthcare providers and individuals in medical school were recruited for the user study. The three groups were: students currently in their first or second year of medical school (16 in total), students in their third or greater year of medical school (17 in total), and finally healthcare providers with two or more years of work experience in the healthcare industry (15 in total). The healthcare providers were from a range of disciplines including registered nurses, graduate nursing students, medical residents, a general practitioner, a nutritionist, and a behaviour therapist. These three groups were chosen because they vary in their experience of medical training. Previous studies [8], [12] have shown that professional HCPs like to use clinical notes to support their decisions, but we wanted to see whether the amount of training plays a role in how HCPs like their patient data presented. The first group was just starting medical school and lacked exposure to traditional patient data summarizations such as clinical notes. The second group of students had slightly more practical medical training, perhaps including clinical rounds, so they had more experience in handling patient data than the first group. Finally, the third group consisted of individuals working in the field who had the most exposure to and experience in handling patient data.
Age, gender and years of experience were collected to ensure that there was a representative sample of the target populations, and years of experience were also used as a grouping variable. The distribution of gender, age, and years of experience can be seen in Figures 6-8. It is important to note that in the gender distribution (Figure 6), although there were 48 participants, one participant chose not to indicate their gender. All participants were given a small token of appreciation. +

Figure 6: Gender of participants. +

Figure 7: Age distribution of participants. +

Figure 8: Medical experience distribution of participants. +

§ 3.3 STUDY DESIGN

Each study lasted around 90 minutes and began with a pre-study questionnaire that gathered demographic data as well as comments on the healthcare provider's/medical student's current routine of practice. This was followed by a short training period where the user was introduced to the visualization system as well as a patient vignette and scenario. The vignette explained the patient's condition and their experiences to the user, whereas the scenario set the tone for the user study, explaining what exactly the user would be required to do. Participants were then invited to complete ten tasks while thinking aloud, followed by a short semi-structured interview that gathered their opinion of the visualizations. Three versions of the ten user tasks were created, and each participant was randomly assigned a version. Each version had the same user tasks but in a different order to eliminate sequence bias. The tasks were designed so that the user had to use the application and the visualizations to answer the questions. For example, one task was "On which day did the patient complain about pain the most?" Notes were also taken on which visualizations the participants used the most for the study tasks, and which ones they struggled to understand.
The study ended with a post-study questionnaire that allowed participants to rate the system usability, reflect on their experiences, discuss what they liked/disliked about the system, and make recommendations. In this paper, the results of the post-study questionnaire and observational notes are reported. +

§ 3.4 DATA COLLECTION

The post-study questionnaires consisted of ten System Usability Scale [32] questions, two questions that allowed participants to choose which visualization(s) they liked the most and which they liked the least, and one 4-point rating question on the perceived usefulness of each visualization. Usefulness was rated with four possible responses: Very useful, Useful, Somewhat Useful, Not useful at all. There were also five open-ended questions that allowed participants to use freeform text to write about their opinion and interest in working with the system and its visualizations. +

§ 3.5 DATA ANALYSIS

Because the data were categorical and ordinal, the questionnaire responses were analyzed using non-parametric statistical methods. A Pearson's chi-squared test was used to determine the significance of ratings provided by the questions on the usefulness of each visualization (see Table 1). Then, a Kruskal-Wallis non-parametric analysis of variance was used to determine whether there were significant differences between the three participant groups for usefulness and ratings of each visualization. Finally, the strength of association between the ratings of different visualizations was assessed using Kendall's tau. +

§ 4 RESULTS

The mean SUS score was 74.90 (SD = 13.80). According to Bangor et al. [33], a score above 68 is above the average usability score obtained across a range of studies. In addition, 60.4% of the participants were very likely (rating of 5 on a 1-5 scale) to recommend this system to a friend or colleague.
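The SUS scores cited above follow the standard scoring rule for the 10-item questionnaire; a minimal sketch (the function name is ours):

```python
def sus_score(responses):
    """Score a 10-item System Usability Scale questionnaire (items rated 1-5).
    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response), and the total is scaled by 2.5 to a 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based, so even i = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# A uniformly neutral questionnaire (all 3s) scores 50, below the
# 68 average reported across studies by Bangor et al.
neutral = sus_score([3] * 10)
```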
+

§ 4.1 STATISTICAL ANALYSIS

A Pearson chi-square test was used to determine the significance of usefulness ratings for each visualization. There was a significant difference in usefulness ratings between the visualizations ($p < 0.05$; see Table 1 for chi-square values). A Kruskal-Wallis test was then used to determine whether there was a significant difference in usefulness ratings for each visualization between the participant groups. There was no significant difference between groups ($p > 0.05$). In addition, a contingency table analysis was performed between each pair of visualizations, but the results were not statistically significant at the .05 level. +

§ 4.2 FREQUENCY RESPONSES

Figures 9 and 10 show which visualizations were preferred or disliked for the different participant groups. Overall, the most preferred visualizations were Text Summary and Text Graph. The most disliked graph was the Line Graph. Figure 11 illustrates the frequency of usefulness ratings for each visualization. Text Summary and Text Graph were also rated the most useful visualizations (rating of very useful, or 4/4), and the Word Cloud and Bar Graph were rated as useful (3/4). The Line Graph was rated as somewhat useful (2/4) by 18 participants. +

Table 1: Pearson's chi-square analysis results for each visualization.

| | Word Cloud | Bar Graph | Line Graph | Text Graph | Text Summary |
|---|---|---|---|---|---|
| Pearson chi-square | $\chi^2(3) = 16.667$, $p = 0.01$ | $\chi^2(3) = 21.883$, $p < 0.05$ | $\chi^2(3) = 10.277$, $p = 0.016$ | $\chi^2(2) = 10.511$, $p = 0.005$ | $\chi^2(2) = 28.625$, $p < 0.05$ |

Figure 9: Presentation of most liked visualizations by each group. +

Figure 10: Presentation of most disliked visualizations by each group.
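The chi-square values in Table 1 come from a standard goodness-of-fit test over the rating counts. As an illustration of the statistic (not the paper's analysis code; in practice a statistics package would also return the p-value), with uniform expected counts assumed when none are supplied:

```python
def chi_square_stat(observed, expected=None):
    """Pearson chi-square goodness-of-fit statistic over category counts.
    With no expected counts supplied, a uniform distribution is assumed.
    The statistic is compared against a chi-square distribution with
    (number of categories - 1) degrees of freedom to obtain a p-value."""
    n = sum(observed)
    if expected is None:
        expected = [n / len(observed)] * len(observed)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Perfectly uniform counts over the four usefulness categories give a
# statistic of 0 (no evidence of a preference among the categories).
uniform = chi_square_stat([10, 10, 10, 10])
skewed = chi_square_stat([20, 10, 5, 5])
```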
+

§ 4.3 WRITTEN RESPONSES

All 48 participants commented on their experience and opinion of the visualizations and the application. Most participants had positive reviews of MHMR and its visualization techniques. Participants mentioned that the Text Graph or Text Summary were the most useful visualizations to work with, e.g., "Text Graph, Text Summary [are the most useful] because they give a better and a quick picture [answering] my questions" (P18). Some participants mentioned that what they liked least about the system was the Line Graph, e.g., P6 wrote "Line graphs, I think it is a lot more time consuming and comparatively less helpful than the other techniques." P46 also mentioned that "I think some of the visualizations were redundant (the text graph was a better version of the line graph)". Regarding design and layout, some participants commented negatively on the x-axis labels on the Line or Text Graph because they found them "unclear" (P28) or did not understand the difference between the open and closed circle (P17). +

Participants stated that they would be willing to use this system in their practice, especially when monitoring a patient's condition over time or prescribing medication. For example, P13 wrote, "I do weekly check-ins with my clients, it would help me to see their progress as well as help me to pinpoint where changes in their nutritional and exercise plans need to be made." P31 stated, "As a nurse, you can understand at what time of the day the [patient] experiences more pain, and you can advocate for the [patient] to get pain meds prescribed at certain times of the day." However, there were concerns about how compliant patients would be with using MHMR, and how they would be encouraged to record their symptoms as often as possible (P45). +

Figure 11: Presentation of usefulness rating for each visualization.
+

§ 5 DISCUSSION

This study evaluated opinions about, and acceptance of, the data visualizations presented in the MHMR application. In the ratings of preference, usefulness (Figures 9-11) and written responses, the Text Summary and Text Graph visualizations tended to be preferred. +

In their research, Sultanum et al. [12] concluded that text is a familiar method of generating and consuming information about patients for doctors (or HCPs). It is not surprising, then, that the Text Summary was preferred, as it resembles the type of information provided in clinical notes. +

Given that word clouds are used for searching, browsing, impression forming and recognizing or matching [22], it was anticipated that participants would find them useful in finding patterns for matching specific health conditions. For some participants this was the case, and they found the Word Cloud to be useful for this purpose, either alone or along with the Text Graph or Text Summary (e.g., P4, P14, P24, and P34). However, for others there was too much disorganized information contained in the word cloud, e.g., one participant (P12) stated: "[Word Cloud] appeared scattered and packed", and thus they did not find it as useful as the Text Summary and Text Graph in gaining information about the patient's condition. +

The Bar Graph was the only visualization that had a high usefulness rating (Figure 11) and no negative comments associated with it. This could be due to familiarity with its style and the information it conveyed. For example, P37 liked the Bar Graph because it displayed "the most words said and how many times the patient actually said them." In addition, the Bar Graph has a filter that allows participants to sort from highest to lowest frequency words. The usage of this feature was repeatedly observed in the user studies; for example, P11 liked the Bar Graph because "it gives you [the] highest symptoms experienced vs. lowest".
+

The Line Graph was collectively the least favourite visualization among the participants. This could be because it requires attention to understand the trend and to investigate each point, or because participants had less experience with interpreting line graphs. For example, P16 said "I think there should be more detail to the line graph because I couldn't understand that graph much", and P30 said "I think the line graph was a bit difficult to read". +

The Text Graph provided more or less the same information as the Line Graph but combined text labels with the graphical, time-based representation of the data. Yet, the usefulness of the Text Graph was rated much higher than the Line Graph. Eleven participants who rated the Text Graph 4/4 (Very useful) rated the Line Graph 2/4 (Somewhat useful). Carpenter and Shah [19] found that, when viewing line graphs, individuals spend more time relating the different graphical features (axes and data point labels) to make sense of the data and less time viewing the pattern or trends. The Text Graph may have helped participants make sense of the data because it highlights the important information for them, so they can focus more on understanding the overall pattern. In addition, the Text Graph allowed users to drill down to extract more detailed information by viewing specific videos related to those data points. This may have provided the additional detail suggested by P16, or it may have added a sufficient amount of text to take advantage of the familiarity of text favoured in the Text Summary. For example, P32 said "in my opinion, the most useful technique is text-graph [because] it shows day to day variation of patient symptoms...it will help me get a better understanding of my patient [to] evaluate necessary management". +

Researchers have found that healthcare providers prefer the notes section in a patient record [8], [12].
The Text Summary in MHMR was similar to clinical notes, so it was expected that most participants would show a preference for the Text Summary and would find it useful. Nonetheless, participants also saw benefits of other visualizations, particularly the Text Graph, and formed mainly positive opinions of them. The Text Graph was newly developed for this research and was new to all participants. It was designed to exploit the preference for notes and the benefits of visually representing patterns over time as a Line Graph. The Text Graph and Text Summary had very similar ratings as well. Sixteen participants rated both visualizations as 4/4 (Very useful) and twelve rated both as 3/4 (Useful), indicating that both visualizations were useful in extracting information about patients' status and conditions. The Word Cloud was also text-based, but it was mainly rated "useful" (3/4) rather than "very useful" (4/4) like the Text Summary and Text Graph. This could be because the word cloud was new to the participants: they were more comfortable with the Text Summary and Text Graph but were open to trying the word cloud as well. +

§ 5.1 LIMITATIONS

This study evaluated the acceptance and opinion of data visualizations presented in the MHMR application. The statistical analysis of post-study questionnaires showed no significant differences between the groups (HCPs, first- or second-year medical students, third-year or greater medical students). One reason for this could be insufficient data: although data from 48 participants were analyzed, there were only around 16 participants in each group. +

§ 5.1.1 DEMOGRAPHICS

There were no differences in the usefulness ratings between groups, and we suggest that a larger sample may elicit differences. The HCPs were mostly nurses, who may have different experiences than doctors or other types of HCP.
Future studies should incorporate a more diverse set of participants varying in roles. The students recruited were mainly from the same geographic location; thus diverse geographic samples, and the impact of different training regimes between jurisdictions, should also be studied. +

§ 5.1.2 ONLINE STUDY

One technical limitation was that this was an online study. Technical difficulties such as Internet issues with several participants slowed the process of viewing and interpreting visualizations, causing frustration and impatience among participants. This may have impacted their views, and they may have been distracted by the technical issues. Another limitation of the online study was that MHMR was intended as a mobile application but was evaluated using a web application. The user interface resembled the look of a mobile application to mimic what the user may see in a mobile app. Because users were using screens with different aspect ratios, the user interface could appear stretched or disproportionate depending on the size of the screen, and sometimes the participants were not able to see the entire application at once and had to scroll up or down. This may have had an impact on the participants' view of the application and the visualizations, as some information may have been hidden or not clearly visible on their screen. Future studies need a consistent screen display for all participants to eliminate this limitation. +

§ 5.1.3 VISUALIZATIONS

Another limitation of the study was the number of visualizations presented. Our study presented five visualizations; however, there are a number of other ways to present data. Other common graphs include scatter plots, pie charts, histograms, etc. In addition, there are a number of ways to add or remove details from graphs to create variation.
The Text Graph added text labels on top of a time-based line graph for more information, but even simply removing data labels from the Bar Graph can potentially create a difference in understanding of the graph. Future studies should incorporate different types of visualization techniques and assess how different graphical features play a role in the understanding of the data. +

§ 6 CONCLUSION

The design and implementation of five visualizations depicting patient-generated data were presented in this paper. Participant data from three different groups representing a spectrum of healthcare providers in terms of their education and experience were evaluated for comparison and correlation. Quantitatively, there was no difference between the groups in their preference and opinion of the visualizations. The results, however, did show positive attitudes towards the visualizations, particularly the Text Graph and Text Summary. The Text Summary was similar to the notes section in a patient record, so, as anticipated, it was the most preferred and was rated to be the most useful by the users. However, the Text Graph, despite being something the users had not seen before, was as useful as the Text Summary. Many participants were also interested in using this application in their future clinical practice. Future work needs to incorporate a larger sample size and a diverse group of participants. The visualizations also need to be automated and tested for their accuracy in depicting the correct words spoken by the patient and associating the correct sentiment with those words. +

§ ACKNOWLEDGMENT

Details omitted for anonymous reviewing.
\ No newline at end of file
diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/tufnZGWrWjR/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/tufnZGWrWjR/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..6fbac28b4ebfad90d2abab1ded31d1e27185297f
--- /dev/null
+++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/tufnZGWrWjR/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,339 @@
+# Embodied Third-Person Locomotion using a Single Depth Camera

## Abstract

Third-person is a popular perspective for video games, but virtual reality (VR) is primarily experienced from a first-person view. While a first-person POV generally offers the highest presence, a third-person POV allows users to see their avatar, which might allow for a better bond, while the higher vantage point generally improves spatial awareness and navigation. Third-person locomotion is generally implemented using a controller or keyboard, with users often sitting down, an approach that is considered to offer low presence and embodiment. We present a novel third-person locomotion method that enables a high avatar embodiment by integrating skeletal tracking with head-tilt based input to enable omnidirectional navigation beyond the confines of available tracking space. By interpreting movement relative to the avatar, the user always faces the camera, which optimizes skeletal tracking and requires only a single depth camera. A user study compares the performance, usability, VR sickness and avatar embodiment of our method to using a controller for a navigation task that involves interacting with objects. Though a controller offers higher performance and usability, our locomotion method offered a significantly higher avatar embodiment.
Because VR sickness incidence for both methods was low, it suggests that perspective might play a role in VR sickness. +

Index Terms: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Virtual Reality

## 1 INTRODUCTION

One unique feature of virtual reality (VR) is that it can let you experience being a person of a different race, gender or age. Embodiment illusion research explores creating illusions of ownership over a virtual body, which is a promising intervention technique to reduce biases. For example, a seminal study [36] demonstrated that when light-skinned participants experienced being a dark-skinned avatar (for example: Figure 1), this reduced implicit racial bias against dark-skinned people. A more recent study [47] explored swapping genders and found this to reduce gender-stereotypical beliefs. These findings demonstrate the vast potential of VR as a tool that could address many current-day social issues regarding race, age and gender. Because VR is predominantly experienced from a first-person perspective (1PP), to establish the embodiment illusion a virtual mirror is required [17], which allows subjects to fully see themselves. Requiring a stationary mirror to maintain the embodiment illusion limits what kinds of scenarios can be explored. It might be interesting to explore non-stationary scenarios (i.e., are you treated differently based on skin color, gender, or age when walking through a busy street?). But that would require the ability for the user to move around in a virtual environment while still being able to see themselves. +

Presence, i.e., a sense of being in VR [28], and embodiment, i.e., a sense of being your avatar [25], are important yet closely intertwined qualities of VR that are to a large extent defined by the graphical perspective used [18].
With 1PP, users see the world through the eyes of their avatar, which provides the highest sense that they are their avatar [38, 45] (i.e., embodiment). Because the user and avatar are collocated, this view is most natural to us, and most optimal for motor accuracy tasks [31, 43]. +

![01963e8a-07ed-73f5-9230-a01532176802_0_1069_465_437_464_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_0_1069_465_437_464_0.jpg)

Figure 1: Our 3PP locomotion method integrates full-body skeletal tracking using a single depth camera (Azure Kinect) with head-tilt based input using inertial sensing to enable omnidirectional navigation beyond the confines of available tracking space. With the depth camera at the location of the avatar and interpreting movement relative to the avatar, the user always faces the camera, which optimizes full-body tracking and keeps instrumentation to a minimum. +

A third-person perspective (3PP) allows users to see their avatar from an over-the-shoulder view. Though a 3PP offers a lower embodiment than 1PP [11], when users can see their avatar it creates a stronger bond than when using 1PP, and the ability to customize the avatar can create a strong identity which reinforces body ownership [12]. The higher vantage point improves spatial awareness [40, 44], though avatar occlusion can interfere with precise interaction like aiming [43]. Letting users switch between 1PP and 3PP offers the benefits of each perspective [42]. Because the user and their avatar are not collocated, 3PP generally offers a lower embodiment than 1PP [45]. This can be increased through agency, i.e., when the user and virtual avatar are linked to move synchronously [26]. +

Locomotion is an essential part of VR interaction [?] and defines presence [8].
Full-body tracking enables visuo-motor synchronicity between the user and the avatar, and when combined with real walking it achieves a high presence and sense of embodiment [18, 31]. Though real walking offers the highest presence [52], it is bounded by available tracking and physical space [7]. To implement omnidirectional 3PP locomotion, full-body tracking requires multiple cameras [31] or extensive user instrumentation [18]. Current consumer VR systems do not feature full-body tracking, and positional tracking is limited to the head-mounted display (HMD) and both controllers. For 1PP, limited tracking is not an issue as users only see their hands, but for 3PP, tracking only three joints is not sufficient for animating an avatar that offers high embodiment [18]. + +Because users need to be able to navigate beyond the confines of available walking space, 3PP locomotion is typically implemented with a controller. A controller offers low presence and embodiment, which is further exacerbated by the fact that users will often sit down when positional tracking is not fully used [58]. Currently, few VR experiences use 3PP, though it is a popular perspective for non-VR games (e.g., Fortnite). Though some have suggested that 3PP is not a suitable perspective for VR [31], there are benefits of using 3PP that have not yet been fully investigated. Locomotion facilitated using a controller (i.e., joystick or touchpad) is more likely to induce VR sickness [22, 52], a major barrier to the success of VR. Sudden movements in particular, e.g., jumping, falling, or taking damage without corresponding head motions, can exacerbate visual-vestibular conflict, a major cause of VR sickness [33]. One remedy is to use a rest frame [39], i.e., a part of the screen with no optical flow, for example, a virtual nose [55]. Visual discomfort can also result from a vergence-accommodation conflict (VAC) [21].
VR headsets use a flat screen to simulate depth, which creates a disparity between the distance at which the eyes converge on objects in the virtual world (vergence) and the fixed focal distance of the physical screen (accommodation). + +For 3PP, because the camera follows the avatar from behind, it can largely dampen sudden motions [48], like a Steadicam. Because the avatar is likely the focus of the user's gaze during locomotion, it serves as a rest frame. When using 1PP, users look at objects across their entire field of view, but the higher vantage point of 3PP largely confines objects to the lower half of the user's view, which might alleviate VAC [49]. Developing a better understanding of how perspective affects visual discomfort and VR sickness is important for VR [5]. + +We present a hybrid 3PP locomotion method that offers a high sense of embodiment by integrating real walking, implemented using full-body tracking, with tilt-based omnidirectional locomotion. Our goal was to bring a popular gaming perspective to the domain of VR while preserving a high embodiment. Our interface could enable embodiment illusion research [36] from a 3PP, which would not require users to look at themselves in a mirror using 1PP. It advances over existing 3PP methods [18, 31] as locomotion is not confined by the available tracking space, and it removes the need to hold a non-immersive controller [13]. Due to the unique implementation of steering, the user always faces the camera, and our approach is minimal in terms of required instrumentation. In addition to understanding how perspective affects embodiment and motor accuracy, we investigate VR sickness. + +## 2 RELATED WORK + +A number of studies have investigated how perspective affects presence, motor accuracy and embodiment [25] for HMD-based VR. Factors that can affect embodiment include location of the body; body ownership; agency and motor control; and external appearance [16]. + +Salamin et al.
[42] conducted one of the first studies to investigate perspective for HMD-based VR and found preliminary evidence that 3PP is better for navigation, while motor actions like opening a door or putting a ball in a cup were performed better in 1PP. A follow-up study by the same authors [41] found no difference in error rate between perspectives for a stationary ball catching task, though 3PP offered better distance estimation. Slater et al. [45] evaluated how perspective affects the body transfer illusion and found that 1PP offered the highest embodiment. Debarba et al. [10] conducted an extensive study investigating the effect of perspective and synchronous/asynchronous avatar rendering on performance for a target reaching task. Full-body motion capture was implemented using an optical tracking system with wearable markers. Synchronous rendering of an avatar offered the highest performance and embodiment in terms of body ownership and self-location, with no difference between perspectives. A follow-up study by the same authors [15] evaluated perspective for a similar target reaching task and found that giving users the option to switch between 1PP and 3PP offers a strong sense of embodiment, though subjective body ownership was strongest in 1PP. Gorisse et al. [18] evaluated the effect of perspective on performance, presence and embodiment for an object perception and deflection task as well as a navigation and interaction task. No difference between perspectives on presence or agency was found, though 1PP enabled more accurate interactions with objects while 3PP provided better spatial awareness. Medeiros et al. [31] performed an extensive study that investigated the effect of perspective and avatar realism on navigation performance and embodiment. Navigation tasks using real walking included avoiding objects and going through a tunnel. Full-body motion capture was implemented using an array of 3D depth cameras (Kinect).
A 3PP can offer the same sense of embodiment, spatial awareness and navigation performance as a 1PP when using a realistic avatar representation, but performed worse without realism. + +Focusing on 3PP locomotion methods that enable navigation at scale, the following approaches are closely related. Hamalainen et al. [20] present Kick Ass Kung-Fu, a martial arts installation that captures the user with a regular camera, embeds their image in the game, and translates their movements to an avatar in a 2D fighting game. Oshita [34] presents a motion capture framework for 3PP locomotion for large-screen VR. Full-body motion capture is implemented using an optical tracking system, and a combination of walking-in-place and arm swinging allows for navigating beyond the confines of the tracking space. Omnidirectional navigation is not supported due to the requirement to keep facing the screen. No user study results were reported. Work by the same author [35] explored using hand gestures to control an avatar in 3PP on a large screen. Locomotion is achieved using walking-in-place with the fingers of the user's right hand. A user study demonstrated the feasibility of this approach but revealed issues with locomotion at scale. + +Cmentowski et al. [9] present a VR locomotion method called Outstanding that lets users switch between 1PP and 3PP. Other than physical walking using positional tracking there is limited travel in 1PP, and users switch to 3PP to travel beyond the limits of the available tracking space. The camera remains stationary to avoid generating optical flow that could lead to VR sickness. When in 3PP, users navigate their avatar by pointing at a destination using a raycast with their motion sensing controller. A user study comparing Outstanding to regular teleport found a significant increase in spatial orientation, with no VR sickness or difference in presence. + +A very similar approach was presented by Griffin et al.
[19], published at the same time, called Out-of-body locomotion. A difference from Outstanding is that in 3PP the user can steer their avatar using the touchpad on their controller, which offers more precise navigation. If the user breaks line of sight with their avatar, it automatically switches back to 1PP. A user study compared Out-of-body locomotion to regular teleportation using an obstacle navigation task and found that it required significantly fewer viewpoint transitions with no difference in performance or VR sickness incidence. + +3PP-R [13] brings 3PP to VR by rendering a miniature world that orbits with the user's viewpoint and shows a miniature avatar. The avatar's ability to mimic the user's motions is limited because only the hands and HMD are tracked. Users navigate their avatar primarily using a controller, and though positional tracking is supported, it is bounded by the tracking space boundaries. In a user study, the authors found 3PP-R to lead to less VR sickness than 1PP. + +## 3 DESIGN OF EMBODIED 3PP LOCOMOTION + +Prior studies [18, 31] show that a 3PP can offer high embodiment, but it requires visuo-motor synchronicity between the user and avatar. This is generally facilitated using full-body motion capture, which requires extensive user instrumentation [10, 15, 18] and/or an array of (depth) cameras [31]. For locomotion, full-body tracking facilitates real walking, which offers high presence [18], but this is generally bounded by available tracking space (i.e., when using external cameras) or available physical space (i.e., when using wearable sensors). + +On most consumer VR systems, users rely on a combination of real walking and an alternative locomotion technique (ALT) like teleportation.
Though teleportation allows for navigating at scale, it offers a low presence [8], which is a problem when using it for 3PP locomotion, as it can be assumed that this would offer a low embodiment. Other hybrid locomotion techniques have been proposed that aim to offer high presence by combining real walking with an ALT, e.g., walking-in-place (WIP) [6], arm swinging [30] or head-tilt [50], which offer high presence because they generate some of the proprioceptive/vestibular cues that are generated during real walking. + +To facilitate 3PP locomotion with a high sense of embodiment, we propose a hybrid locomotion method that combines full-body-tracking-based real walking with head-tilt input. Head-tilt is a subset of leaning input (i.e., whole-body tilt), a type of input that has been popularized by hoverboards [50]. Leaning has been explored for virtual locomotion, where it has been found to offer high presence [29, 54]. The choice of head-tilt as opposed to other high-presence ALTs like WIP or arm swinging was motivated by positive results from earlier studies. For an obstacle navigation task, head-tilt outperformed both WIP and a controller, while there was no significant difference in presence compared to WIP. There was also no significant difference in VR sickness compared to a controller [50]. For a bimanual target acquisition/deflection task, head-tilt offered a significantly higher presence than using a controller or teleportation, though its performance was lower than using teleport [?]. No significant difference in VR sickness was found between head-tilt, WIP and using a controller. An earlier study also found no significant difference in VR sickness incidence between leaning input and a controller [29]. We have not found any research showing head-tilt input to increase VR sickness, and there is evidence that head-tilt can reduce car sickness [53], which is closely related to VR sickness.
From an implementation perspective, head-tilt can be implemented with high accuracy using the inertial measurement unit (IMU) that is present in the HMD. Though WIP can be implemented with an IMU [51], arm swinging requires either controllers [30] or skeletal tracking, which is likely to be less accurate. Head-tilt also allows the user to retain independent control of their hands. This is useful, for example, when grabbing or punching an object or enemy while running. A limitation of head-tilt is that it impedes the user's ability to freely look around while locomoting [51]. + +WIP and arm swinging generally do not support omnidirectional navigation (e.g., moving laterally or backwards). For WIP this can be achieved in a hands-free way by combining WIP with head-tilt [50]. A closely related 3PP implementation [34] integrates WIP with positional tracking, but this setup does not allow for omnidirectional navigation and requires users to keep facing the camera. Full-body motion tracking generally requires expensive cameras or extensive user instrumentation. Consumer depth cameras are low cost and require no user instrumentation, but to allow for occlusion-free skeletal tracking from all directions, multiple cameras from different viewpoints [31] must be integrated. Some rudimentary skeletal tracking is possible using wearable sensors (i.e., VIVE trackers), but since they only track the torso and feet, this does not allow for accurately animating a 3D avatar (e.g., elbow and knee joints are missing). + +Given these hardware and tracking considerations, we decided to implement our 3PP locomotion method to require only a single depth camera. To enable this, we were inspired by a now abandoned 3PP control scheme popularly known as tank control, in which the user controls their avatar's movement relative to the coordinate system of the avatar.
Up/down input moves the avatar forwards/backwards in the direction it is currently facing, while left/right input rotates the avatar (clockwise/counterclockwise). This differs from current 3PP control schemes, where movement is defined relative to the virtual camera's look direction. Early 3D games used 3D objects in combination with 2D pre-rendered backgrounds, which looked better but required using multiple predefined fixed in-game camera perspectives. This introduced a usability problem: if the user was navigating their avatar in a particular direction, cutting to a different camera perspective could make the game interpret user input differently from what was intended. Tank control did not have this problem as it interprets movement relative to the avatar, independent of a camera. When 3D rendering capabilities improved, this control scheme was largely abandoned [37]. + +![01963e8a-07ed-73f5-9230-a01532176802_2_1026_152_470_419_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_2_1026_152_470_419_0.jpg) + +Figure 2: Head-tilt implemented using orientations acquired from an IMU. + +In a 3PP, the virtual camera is placed behind the avatar facing the avatar's back. Movement is issued relative to the virtual camera's look direction. Whereas in non-VR 3PP experiences the camera is typically controlled using a mouse or joystick, we, similar to most VR experiences, pair the virtual camera to the position and orientation of the user's head to reduce visual-vestibular conflict. We then implement tank controls based on head-tilt for locomotion for the following reason. For optimal full-body tracking, users must keep facing the camera to avoid occlusion. Tilt-based tank control ensures the user will always face the camera during locomotion, because when users steer they rotate the virtual world relative to their avatar. When a user turns their head, the avatar will move out of the user's field of view, as the avatar's location does not change.
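The distinction between tank control and camera-relative control can be illustrated with a small sketch. Python is used here as illustrative pseudocode (our system is implemented in Unity); the function and variable names are ours, not from the actual implementation:

```python
import math

def tank_control(avatar_pos, avatar_heading, up_down, left_right, dt,
                 speed=1.0, turn_rate=1.0):
    """Tank control: input is interpreted in the avatar's own frame.
    up_down advances along the avatar's current heading; left_right steers it."""
    avatar_heading -= left_right * turn_rate * dt          # steer: rotate the avatar
    dx = math.cos(avatar_heading) * up_down * speed * dt   # advance along heading
    dy = math.sin(avatar_heading) * up_down * speed * dt
    return (avatar_pos[0] + dx, avatar_pos[1] + dy), avatar_heading

def camera_relative_control(avatar_pos, camera_heading, up_down, left_right, dt,
                            speed=1.0):
    """Modern 3PP control: input is interpreted in the camera's frame, so a
    camera cut changes what 'up' means mid-movement."""
    dx = (math.cos(camera_heading) * up_down
          - math.sin(camera_heading) * left_right) * speed * dt
    dy = (math.sin(camera_heading) * up_down
          + math.cos(camera_heading) * left_right) * speed * dt
    return (avatar_pos[0] + dx, avatar_pos[1] + dy)
```

With tank control the same "up" input keeps moving the avatar along its own heading regardless of where the camera is, which is why a camera cut cannot invert the controls.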
Our tank control scheme further ensures users will always be facing the camera by imposing the constraint that the user must be looking at their avatar to engage in locomotion. + +If we used camera-relative movement, when the avatar rotates we would have to rotate the camera as well to maintain an over-the-shoulder 3PP view. The optical flow from the camera orbiting around the avatar would be higher than when using tank controls and might induce VR sickness. + +We combine tracking data from three different sources to implement our technique: + +- Positional tracking data from the VR HMD is used to place and orient the virtual camera. The user can freely move around in the available tracking space, and a grid is shown to indicate to the user that they are approaching the tracking boundary. + +- A depth camera estimates full-body joint positions, which are used to animate the virtual avatar in real time. If the user walks around in the available tracking space of the depth camera, the virtual avatar will do the same. For this to work in conjunction with the positional tracking input of the VR system, the two tracked spaces must overlap. The user also must adhere to the tracking space constraints, but since they are tethered to their avatar there is a risk of pushing or dragging the avatar outside the tracking space. We prevent this from happening by not letting the avatar cross the tracking boundary, which acts as a warning system to the user. For example, if a user is walking forward towards the camera, the avatar will stop when it reaches the tracking boundary and the user must take care to move back inside. + +![01963e8a-07ed-73f5-9230-a01532176802_3_199_145_620_569_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_3_199_145_620_569_0.jpg) + +Figure 3: A finite state diagram of the tilt locomotion. w_min, s_min and t_min represent the walking, strafing and turning thresholds respectively, with the assumption that the user is facing the virtual avatar.
+ +- Inertial sensing, acquired using the HMD's IMU, is used to enable head-tilt locomotion. Head-tilt is calculated from the three possible degrees of rotation of the HMD: pitch, roll and yaw (see Figure 2). The pitch of the head dictates forward or backward movement, while the roll of the head is used for strafing when the user is not moving. We combine pitch and roll to implement tank controls when the user is moving forward. To allow users to look around freely without engaging in movement, a dead zone has been defined, and roll or pitch must exceed a certain threshold to activate movement. Roll and pitch can further be coupled to the avatar's locomotion speed to support variable locomotion speeds. A known limitation of head-tilt based locomotion [50] is that it limits the user's ability to freely look around, as this changes the direction of locomotion. Users can freely look around when standing still while not looking at their avatar. During locomotion users can still look around with their eyes without moving their head, though this is constrained by the limited field of view of VR HMDs. + +Figure 3 depicts a finite state machine diagram of the supported movement types. First, when standing still, as long as the user's head roll and pitch remain below their thresholds, the user can look around freely and even turn their head 180° to look behind them. If the head tilt goes beyond a threshold, the following can happen. If the user tilts their head forwards or backwards past the threshold, their avatar will walk forward or backward. If the avatar is standing still, tilting left or right will make the avatar strafe left or right. On the other hand, when the user tilts forward and then tilts left or right, they will steer their avatar and the virtual world will be rotated around the avatar. + +The tilt-based locomotion implementation seamlessly integrates with positional tracking using the depth camera.
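The dead zone and state transitions described above can be sketched as follows. This is an illustrative Python sketch (the actual implementation is in Unity); for simplicity the thresholds are symmetric and the names `classify_tilt` and `tilt_to_speed` are ours:

```python
def classify_tilt(pitch, roll, walking, w_min=14.0, s_min=20.0, t_min=20.0):
    """Map head pitch/roll (degrees, positive = forward/right) to a command.
    Inside the dead zone the user can look around without moving."""
    if abs(pitch) >= w_min:
        if walking and abs(roll) >= t_min:
            # Tank-style steering: rolling while walking rotates the world
            # around the avatar.
            return "turn_left" if roll < 0 else "turn_right"
        return "walk_forward" if pitch > 0 else "walk_backward"
    if abs(roll) >= s_min:                # stationary: roll strafes sideways
        return "strafe_left" if roll < 0 else "strafe_right"
    return "idle"                         # dead zone: free look

def tilt_to_speed(tilt, t_lo, t_hi, v_max=4.5):
    """Couple tilt magnitude to locomotion speed: inverse linear interpolation
    between a minimum and maximum tilt, clamped to [0, 1], scaled to a top
    speed of 4.5 m/s."""
    x = (abs(tilt) - t_lo) / (t_hi - t_lo)
    return max(0.0, min(1.0, x)) * v_max
```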
Taking tracking constraints into account, the avatar will be moved by any amount of observed skeletal displacement while a fixed distance between the camera and the avatar is maintained (except when approaching the tracking boundaries). Figure 4 provides an overview of all possible forms of locomotion and the corresponding head-tilt inputs required. + +Additionally, more complex types of motion can be supported, like jumping or crouching, which can be activated using a gesture. For example, a short hop detected using inertial sensing can be used to trigger a much higher jump. A study has shown that switching between 1PP and 3PP still enables a high embodiment [15], and we believe transitioning from real walking to using head-tilt for locomotion is a smaller change than a 1PP to 3PP transition and will preserve a high embodiment. + +## 4 USER STUDY + +The goal of the user study was to evaluate the performance, usability, sense of embodiment and VR sickness incidence of our embodied locomotion method and to compare it to using a controller. + +### 4.1 Instrumentation + +Full-body skeletal tracking was accomplished using a Microsoft Azure Kinect DK. For our experiment, it operated at a resolution of 640 × 576 pixels at 30 frames per second. Latency was measured at 35 ms, which we deemed acceptable [2]. We placed this camera at a height of 1 m on a tripod stand, which based on preliminary trials appeared optimal for skeletal tracking with a user located at around 2 m distance. From our experience, we found the Kinect sensor to prefer certain distances depending on the user's height and body type. Therefore, before conducting the experiment, we made sure that the Kinect sensor was able to properly track each user at the distance they were standing and made adjustments where needed.
+ +For our HMD, we used the Oculus Rift S, a popular PC VR platform that allows full inside-out tracking of the HMD and two controllers using multiple cameras housed in the headset. The Oculus Rift S was specifically chosen because of its inside-out tracking capability. Because the Kinect sensor projects infrared (IR) dots, VR systems that also rely on tracking using infrared light can suffer interference. Specifically, we tested the Vive Pro and it was not compatible with our setup. + +The Oculus Rift S offers a 1440 × 1280 per-eye resolution at 80 Hz and a variable field of view of around 110°. We used a high-end PC (Ryzen 7 1700X, 16 GB RAM, NVIDIA GTX 1080 Ti) to run our VR application. For our study, we configured our tracking space to have a size of 2.5 m × 2.5 m, which is an average-sized tracking space [3]. The Kinect camera was configured to operate within a depth range of approximately 3 m (0.5 to 3.86 m). The 75° field of view of the camera means that the width of the tracked space decreases closer to the camera. As a result, the tracked width was slightly narrower than that of the VR system near the camera. The two tracking spaces were aligned to have maximum overlap. SteamVR's chaperone system keeps the user within the available tracking space and thus also keeps them visible to the Kinect camera most of the time. + +For our study, since we are assessing locomotion performance, we compare our technique to using a Microsoft Xbox One wireless gamepad. Though trackpad or thumbstick input is available on VR motion sensing controllers, most commercially successful 3PP VR experiences (e.g., Lucky's Tale) are primarily experienced seated using a gamepad, and participants are also most likely to be more familiar with a gamepad. + +### 4.2 Virtual Environment + +For our navigation task, we designed a virtual environment with a path for the user's avatar to follow.
Path-based navigation tasks have been used in closely related studies on VR sickness [4,14]. We designed a winding path in an open environment that was demarcated by wooden boards (see Figure 5). The path contains a few sharp angles and turns requiring fast and precise control. A criticism [19] of existing studies is that most locomotion methods are evaluated in use cases that involve only navigation and no interaction with the environment, even though such interaction is common in many VR experiences like games. + +![01963e8a-07ed-73f5-9230-a01532176802_4_251_140_1293_585_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_4_251_140_1293_585_0.jpg) + +Figure 4: Examples of how particular head-tilt motions are interpreted into avatar locomotion. + +Since we were interested in evaluating the embodiment of our 3PP locomotion method, we designed an obstacle course that requires navigation but also interaction with objects. We made sure that it was long enough to take at least 7 to 10 minutes to run from start to finish. A study on VR sickness found that 2 minutes of optical flow exposure using a VR HMD [46] is already enough to elicit VR sickness symptoms in participants susceptible to it. + +We placed 22 obstacles in the form of log stacks on the path. Users were tasked with jumping over these obstacles. We also put 136 balloons along the path with at least 5 meters between each balloon. 68 balloons were placed to the left and 68 to the right of the center of the path to compensate for handedness. We asked the users to pop the balloons by hitting them with the avatar's hands, and balloons only pop when there is a collision with the hands. Figure 6 shows both tasks in the virtual environment. + +We developed the environment in Unity 2019.1.11f1. SteamVR plugin version v2.5 was used to implement the VR functionality.
A translation gain $\delta$ of 1.0 was used, so a 1.0 m displacement in the real world corresponded to a 1.0 m viewpoint translation in the virtual environment. We used the 3D avatar that came with an Azure Kinect example package for Unity [1]. To follow the avatar from a 3PP, we implemented a follow camera. A point at a height of 1.8 m and at a distance of 1.65 m behind the avatar was selected as the target for the follow camera based on preliminary trials. The camera continuously moves toward this target location, smoothed using a sigmoid function, which helps dampen motions caused by the user's avatar jumping. The camera also rotates smoothly with the goal of matching the avatar's forward direction. We implemented the two locomotion methods in 3PP. + +Embodied locomotion. As described in the design section, this method combines outputs from the HMD positional tracking, the IMU and the Azure Kinect sensor. Roll and pitch are interpreted to support navigation in any of the four egocentric directions, which is easy for the user to interpret as it maps to joystick controls. Navigation by means of head-tilt is enabled only when the user is facing the avatar. Whether the user is facing the avatar is detected by calculating the angle between the HMD's forward vector and the vector from the HMD's position towards the avatar. If this angle is below a threshold (which in our study was set to 15°), the user is considered to be facing the avatar. Also, we only engage in movement when roll or pitch exceeds a minimum threshold. This allows users greater freedom to look around. + +![01963e8a-07ed-73f5-9230-a01532176802_4_926_832_722_469_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_4_926_832_722_469_0.jpg) + +Figure 5: Virtual environment showing the path to be navigated. + +Figure 4 presents a simplified finite state diagram of our embodied locomotion system. The user starts in the idle state.
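The facing constraint described above (the angle between the HMD's forward vector and the direction toward the avatar, compared against the 15° threshold) can be sketched as follows. Python is used as illustrative pseudocode and the function name is ours; the actual check is implemented in Unity:

```python
import math

def is_facing_avatar(hmd_forward, hmd_pos, avatar_pos, threshold_deg=15.0):
    """Return True if the angle between the HMD forward vector and the
    HMD-to-avatar direction is below the threshold (15 degrees in our study)."""
    to_avatar = [a - p for a, p in zip(avatar_pos, hmd_pos)]
    norm_f = math.sqrt(sum(c * c for c in hmd_forward))
    norm_t = math.sqrt(sum(c * c for c in to_avatar))
    if norm_f == 0.0 or norm_t == 0.0:
        return False
    cos_angle = sum(f * t for f, t in zip(hmd_forward, to_avatar)) / (norm_f * norm_t)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle < threshold_deg
```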
Provided they are facing the avatar, users then have the option to make the avatar walk forward/backward or strafe sideways. Walking forward is enabled by a forward head tilt above a threshold (w_min = 14°). For walking backwards, w_min was set to −11°. While the avatar is walking, if the user rolls their head to the left or right, the avatar will turn in the left or right direction respectively (threshold t_min = 20°). On the other hand, if the avatar is standing still, a head roll will make the avatar strafe to the left or right (threshold s_min = 20°). Values for these thresholds were determined experimentally from a small number of preliminary trials. Each of the threshold values w_min, t_min and s_min is accompanied by a maximum value, w_max, t_max and s_max respectively (also determined experimentally). We use these minimum and maximum values with an inverse linear interpolation function to get our final input values in the range of 0 to 1. These input values are used to linearly interpolate between movement animations. We used the 'root motion' feature of our movement animations. This means that the avatar's locomotion speed is coupled to the particular animation being played and its speed. In our implementation the locomotion speed ranges between 0 and 4.5 m/s. + +We implement jumping by calculating the headset's speed in the global up direction and comparing it against a predefined threshold. We maintain a moving average (n = 4) of the speed to smooth out the data and avoid accidental jump commands. While the avatar is in the air, head tilt can be used to manipulate, to an extent, how far and in what direction the avatar jumps.
This is done by applying a physics force to the avatar that we scale using the tilt input. + +The Kinect sensor is used for body skeletal tracking. We map the joint orientation and position data to the avatar; thus the movements of the user and the avatar are coupled. The Kinect is capable of estimating 32 body joint positions. To implement body tracking functionality in Unity we used the Azure Kinect Example Project asset as the basis. This package, however, had to be modified to achieve our desired functionality, e.g., masking parts of the body to be controlled by body tracking while other parts are controlled by animation. The skeleton tracked by the Kinect can show jitter in the joints. To mitigate such anomalies, the example project comes with a 'smooth factor' option that linearly interpolates between previous joint positions to create a more stable skeleton. We set this to a value of 10 for our study. + +When the user is locomoting using head-tilt, we animate the legs using a default animation clip and only the upper body matches the motions made by the user. This breaks visuo-motor synchronicity, which could be detrimental to embodiment. However, with no animation it looked like the avatar was flying while dragging its feet, which in preliminary trials seemed to induce a lower sense of embodiment. Users can still move their avatar's arms while walking forward and interact with objects. When not locomoting, there is full-body visuo-motor synchronicity and users can walk around as long as they remain visible to the sensor. + +Controller-based locomotion. This uses a standard 3PP control scheme where the left analog stick of the controller is used for avatar movement. Instead of using the right analog stick for rotating the camera, as is common in non-VR 3D experiences, the user's HMD controls the camera, which minimizes visual-vestibular conflict. The left and right bumpers of the controller are used to activate the left-hand and right-hand punch respectively.
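In the embodied scheme, the inertial jump trigger with its moving-average smoothing (n = 4) described earlier can be sketched as follows. This is an illustrative Python sketch; the speed threshold value and the class name are ours, not from the actual Unity implementation:

```python
from collections import deque

class JumpDetector:
    """Trigger a jump when the smoothed vertical HMD speed exceeds a threshold.
    A moving average over the last n samples filters out accidental spikes."""

    def __init__(self, threshold=0.8, n=4):
        self.threshold = threshold       # m/s; illustrative value
        self.samples = deque(maxlen=n)   # most recent vertical speed samples

    def update(self, vertical_speed):
        """Feed one vertical speed sample; return True to trigger a jump."""
        self.samples.append(vertical_speed)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold
```

A short real hop produces a few high vertical-speed samples in a row, pushing the average past the threshold, while a single noisy sample does not.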
To activate jumping, we used one of the buttons (A) on the controller. In this control scheme, the avatar's locomotion speed and animations work the same way as in the embodied scheme. The difference is that instead of head tilt determining the input, we use the left analog stick's input. Pushing the stick left/right is analogous to head roll, while pushing it forward/backward is analogous to head pitch. + +### 4.3 Experiment Design + +The experiment was a one-factor design with locomotion method as the independent variable (two levels: embodied and controller). We inspect the effect of this factor on task performance, usability, embodiment and VR sickness. To account for order effects, half of the participants started with the embodied condition (Group A) while the remaining half started with the controller condition (Group B). Because the effects of VR sickness can linger for up to 24 hours, we conducted each session on a separate day with at least 24 hours of rest between sessions to minimize the transfer of VR sickness symptoms across sessions. + +### 4.4 Procedure and Data Collection + +The experiment was conducted in a user study space that was free of noise and physical obstacles. When participants arrived for the first session they were briefed on the goal of the study, the outline of the experiment, the risks involved, the data collected, and the details of the training and experiment sessions. The distance between the VR HMD's lenses was adjusted to match the participant's interpupillary distance (IPD). Participants were then asked to stand in the middle of the tracking space. We first made sure that the Kinect sensor could track each participant properly. Then the users were assisted with putting on the VR headset so that they could start the training session.
+ +![01963e8a-07ed-73f5-9230-a01532176802_5_925_148_717_467_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_5_925_148_717_467_0.jpg) + +Figure 6: Tasks that users were required to perform during the navigation task. Left: punching a balloon; right: jumping over an obstacle. + +The goal of the training session was to familiarize the participant with the controls of the traditional 3PP control scheme and the techniques used for the embodied 3PP locomotion method. Participants were given an opportunity to try out both locomotion techniques in a short task that was similar to the experiment task. + +Upon completing the training task, participants started the first experiment session, where they were instructed to follow the obstacle course at their own pace. During the experiment session we recorded the number of balloons popped, the number of obstacles jumped over, the total duration of the session, and the amount of time spent walking on the path. + +After completing each experiment session, participants were asked to fill out three questionnaires: 1) the Simulator Sickness Questionnaire (SSQ) [24], a standardized questionnaire used to measure the incidence of VR sickness; 2) a usability questionnaire, which allowed participants to provide qualitative feedback about the usability of the technique they had just experienced; and 3) a standardized avatar embodiment questionnaire [16] to measure embodiment of the avatar. As recommended by the authors of the SSQ [23], we use it only to assess post-exposure VR sickness symptoms. + +In this study, we use the standardized avatar embodiment questionnaire to address three aspects of virtual embodiment that are applicable to our experiment: body ownership, agency and motor control, and location of the body [16]. This questionnaire was developed for assessing embodiment in 1PP, whereas here we adapt it for avatar embodiment in 3PP, which differs.
Thus, following the recommendations of the standardized questionnaire, we only use the subset of questions (Q1 to Q14) needed to calculate the metrics for these three aspects, based on whether we judged the questions relevant to 3PP and the navigation task we had participants perform. The responses to the individual questions are combined into these three metrics based on the formulae provided in [16]. + +Finally, after completing both experiment sessions, participants were asked to fill out a post-study questionnaire that collected demographic information, including their age and sex, as well as their frequency of playing video games, familiarity with controller-based third-person navigation, frequency of using VR, and tendency to experience motion and/or VR sickness, using a five-point Likert scale. On average, the whole study took about 45 minutes to complete across the two sessions. All participants were compensated with a \$15 Amazon gift card for their time, and the user study was approved by an IRB. + +![01963e8a-07ed-73f5-9230-a01532176802_6_154_150_712_436_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_6_154_150_712_436_0.jpg) + +Figure 7: Summary of participants' ratings of their frequency of playing video games, familiarity with 3PP navigation using a controller, frequency of using VR, and their tendency of getting motion or VR sick, on a scale of 1 (never) to 5 (very frequently). Results are reported as percentage (count). + +### 4.5 Participants + +Recruitment of participants was significantly impeded by the COVID-19 pandemic and our University shutting down as a result of it. Nevertheless, before the shutdown we were able to recruit fifteen participants for our study. However, one participant could not complete the study because she could not be properly tracked by the Kinect sensor.
Fourteen participants (4 female, 10 male, average age = 24.9, SD = 4.6) completed both sessions and their data is analyzed in this study. + +## 5 RESULTS + +Participants were asked to rate their frequency of playing video games, familiarity with controller-based third-person navigation, frequency of using VR, and tendency to experience motion and/or VR sickness on a scale of 1 (never) to 5 (very frequently). The results are summarized in Figure 7. To measure task performance, we logged the position of the avatar in the virtual environment, time stamps, the number of balloons popped, the number of obstacles jumped, and the percentage of time participants spent on the path (i.e., if participants did not deviate from the path this number would be 100%). + +We analyzed these quantitative results using a one-way repeated measures MANOVA. For qualitative results, all participants answered an avatar embodiment questionnaire, an SSQ, and a usability questionnaire after each trial. The responses collected through the embodiment and usability questionnaires were analyzed using nonparametric methods (Wilcoxon signed-rank tests). + +### 5.1 Task Performance + +
| Locomotion type | Embodied (SD) | Controller (SD) |
| --- | --- | --- |
| Total time (s) | 519.47 (104.3) | 423.25 (6.6) |
| % targets hit | 89.86 (6.4) | 95.75 (3.7) |
| % obstacles jumped | 92.53 (1.1) | 99.35 (1.7) |
| % on track | 95.60 (3.8) | 97.59 (0.5) |
+ +Table 1: Quantitative results for each locomotion method. Standard deviations are listed in parentheses. + +Table 1 lists the task performance results for both methods. For our analysis we used (1) total time, (2) % of targets hit, (3) % of obstacles jumped, and (4) % of time spent on the track. A one-way repeated measures MANOVA found a statistically significant difference between locomotion techniques on the linear combination of the dependent variables ($F_{4,10} = 3.689$, $p = .043$, Wilk's $\lambda = .404$, partial $\eta^2 = .596$). Mauchly's test of sphericity indicated that the assumption of sphericity had been met. + +Follow-up univariate tests found statistically significant differences between locomotion methods for total time ($F_{1,13} = 12.710$, $p = .003$, partial $\eta^2 = .494$), targets hit ($F_{1,13} = 11.910$, $p = .004$, partial $\eta^2 = .478$), and obstacles jumped ($F_{1,13} = 5.571$, $p = .035$, partial $\eta^2 = .300$). However, there was no statistically significant difference between locomotion methods for time spent on track ($F_{1,13} = 3.828$, $p = .072$, partial $\eta^2 = .227$). + +![01963e8a-07ed-73f5-9230-a01532176802_6_945_480_697_637_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_6_945_480_697_637_0.jpg) + +Figure 8: Diverging stacked bar chart of the percentages of the Likert scores for the subjective usability rankings of each locomotion method. + +### 5.2 Usability + +After completing each session participants were asked to rate the locomotion method they had just tested in terms of accuracy, efficiency, learnability, and likeability using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The results are summarized in Figure 8.
A Wilcoxon signed-rank test was used to analyze differences in Likert scores. We found statistically significant differences for accuracy ($Z = -2.341$, $p = .019$), efficiency ($Z = -2.46$, $p = .014$), learnability ($Z = -2.124$, $p = .034$), and likeability ($Z = -2.077$, $p = .038$). + +### 5.3 Simulator Sickness + +We used the SSQ responses to calculate the SSQ subscores: nausea, oculomotor, disorientation, and total severity, as described in [24]. A one-way repeated measures MANOVA did not find a statistically significant difference between locomotion techniques on the linear combination of the SSQ subscores ($F_{3,11} = 0.185$, $p = .360$, Wilk's $\lambda = .756$, partial $\eta^2 = .244$). Mauchly's test of sphericity indicated that the assumption of sphericity had been met. + +### 5.4 Avatar Embodiment + +Analyzing the avatar embodiment questionnaire responses, we found that participants preferred the embodied locomotion method over the controller method. The results are summarized in Figure 10. Participants reported significantly higher ownership of the virtual avatar when using the embodied locomotion method ($Z = -2.482$, $p = .013$) than when using the controller-based locomotion method. Participants also reported significantly higher agency and motor control when using the embodied locomotion method ($Z = -2.485$, $p = .013$) compared to the controller-based locomotion method. Additionally, as measured by the location of the body metric, participants experienced a significantly stronger embodiment illusion with the embodied locomotion method ($Z = -2.131$, $p = .033$) compared to the controller-based locomotion method.
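For reference, the SSQ subscores reported in Section 5.3 are weighted sums of the raw symptom-item ratings, following the weighting in Kennedy et al.'s SSQ [24]: nausea × 9.54, oculomotor × 7.58, disorientation × 13.92, and total severity = (sum of the three raw subscale sums) × 3.74. A minimal sketch, with the item-to-subscale assignment abbreviated (the full item table is given in [24]):

```python
# Minimal sketch of the SSQ subscore computation following the weighting
# in Kennedy et al.'s SSQ. Each symptom item is rated 0-3; every subscale
# sums its designated items (some items count toward more than one
# subscale; see the full item table in the SSQ paper) and a fixed weight
# converts that raw sum into the reported score.

SUBSCALE_WEIGHTS = {
    "nausea": 9.54,
    "oculomotor": 7.58,
    "disorientation": 13.92,
}
TOTAL_WEIGHT = 3.74  # applied to the sum of the three raw subscale sums


def ssq_scores(raw_sums):
    """raw_sums maps each subscale name to its raw item sum."""
    scores = {name: raw_sums[name] * w for name, w in SUBSCALE_WEIGHTS.items()}
    scores["total"] = sum(raw_sums.values()) * TOTAL_WEIGHT
    return scores


# Example: a participant reporting only a few mild (1-point) symptoms.
mild = ssq_scores({"nausea": 1, "oculomotor": 2, "disorientation": 0})
```

With this weighting, raw sums of 1, 2, and 0 give a total severity of (1 + 2 + 0) × 3.74 = 11.22, illustrating why a handful of mild symptom ratings keeps scores near the bottom of the scale's roughly 0-235 range.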
+ +![01963e8a-07ed-73f5-9230-a01532176802_7_225_155_566_419_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_7_225_155_566_419_0.jpg) + +Figure 9: Summary (means) of the four SSQ subscores: (N)ausea, (O)culomotor discomfort, (D)isorientation, and the (T)otal (S)everity score. Error bars show standard error of the mean. + +![01963e8a-07ed-73f5-9230-a01532176802_7_224_759_565_537_0.jpg](images/01963e8a-07ed-73f5-9230-a01532176802_7_224_759_565_537_0.jpg) + +Figure 10: Embodiment scores for both methods measured by the metrics ownership of the body (Ownership), agency and motor control (Agency), and location of the body (Location). + +## 6 DISCUSSION AND FUTURE WORK + +Performance. Not surprisingly, the controller performed significantly better than our embodied locomotion method. Controllers require very little physical effort to use, and prior studies have repeatedly found that a controller is faster and easier to use, largely because most users are highly familiar with this type of input. The differences in performance seem quite reasonable, e.g., total time (22% slower), balloons hit (6% lower), obstacles jumped (7% lower), and time on track (2% lower). + +Usability. The controller was found to be more accurate, efficient, easier to use, and overall preferred to our embodied locomotion method. The significantly higher familiarity with using a controller (as evidenced by the 100% agree-or-higher score for learnability) largely explains why users found the controller more accurate, more efficient, and better liked than our embodied locomotion method. However, to contextualize these usability results it is important to distinguish locomotion from interaction (e.g., interacting with objects). To hit a balloon or jump over an obstacle, our embodied locomotion method required real physical movements (i.e., punching and jumping), which are slower and more error prone than pressing a button.
Though the Kinect tracking is fairly accurate, there was a small amount of latency (especially affecting jumping) that would sometimes cause participants to run into an obstacle and come to a standstill rather than jump over it. Hitting a balloon while running also required precise timing and was simply harder to perform with our embodied method. Some participants were observed to navigate backwards when they missed hitting a balloon so they could try hitting it again, which added to their time. Looking at locomotion efficiency, there was no significant difference in the percentage of time on track between the locomotion methods. Overall, these factors contributed to a worse-rated usability for our embodied method, which was further exacerbated by participants' high familiarity with using a controller. Though participants were given enough time to familiarize themselves with our embodied interface, rated performance and usability could increase over time with greater proficiency. + +Embodiment. Our study did find evidence that our embodied locomotion method offered a significantly higher avatar embodiment than using a controller, which was the main objective of our method. We compared our method to a controller largely for benchmarking purposes, with no reasonable expectation that our embodied locomotion method would outperform a controller in performance or usability (users were more familiar with a controller). For embodiment illusion research [36], locomotion performance may not be an important factor, but usability probably is. We did not explore using our 3PP locomotion method for embodiment illusion research, which is something we hope to explore in collaboration with experts in this area. + +Though our approach lets users see their avatar, existing embodiment illusion research uses 1PP with a virtual mirror, which allows for face-to-face interaction, an important factor [17].
Because our approach uses a fixed camera from behind, users only see the avatar's back; we aim to develop a hands-free control scheme that lets users rotate the camera to see their avatar, including its face, from different angles. Our study did not assess presence, as presence is determined by many factors including the VR experience itself, but we hope to investigate this in future work. + +VR Sickness. There was no significant difference in VR sickness incidence, as measured using the SSQ, between the locomotion methods. Head-tilt generates some of the vestibular cues that are present in walking, which are notably absent when using a controller, so there was the possibility this could alleviate visual-vestibular conflict. However, we did not find any differences, and this corroborates an earlier study that compared a controller to head-tilt input for locomotion (using 1PP), which also did not find a significant difference in VR sickness as measured using the SSQ [50]. An important finding was that VR sickness incidence was low. Six participants out of fourteen were asymptomatic, and the overall observed average total SSQ scores were very low (i.e., 33/36 out of a maximum of 235), which corresponds to very mild VR sickness. Though experimental conditions differ, many prior studies [14, 27, 32, 57] have found that a controller generally induces moderate to high levels of VR sickness. A notable difference is that those prior studies all used 1PP, whereas we used 3PP. The low VR sickness could be the result of looking at an avatar during locomotion, which could serve as a rest frame [39]. Because no studies have investigated how perspective affects VR sickness, this is something we aim to investigate in future work. + +Limitations. Our user study involved a low number of participants (n = 14), as our University stopped human subject research halfway through recruitment due to COVID-19.
Our embodied locomotion method only works with HMDs that feature non-IR inside-out tracking, as this does not interfere with the IR used by the depth camera. Another limitation, imposed by the specifications of the Azure Kinect camera we used, is that the tracking space must be defined within the depth range of the camera for the user to remain visible. Larger tracking spaces could possibly be supported using multiple Kinect cameras, and if the user is always visible from every angle, tank controls can be abandoned. A related issue is that the camera does not track a rectangular region, so a perfect match between the tracking space of the VR system and that of the Kinect was not possible. Multiple cameras could solve this issue as well, by covering a larger space than the VR system does. We have been able to integrate positionally tracked controllers into our method, which improves skeletal tracking and provides rotational information for the hand joints; this is useful, for example, when holding an object. Our study only evaluated navigation and limited interaction with objects. Our study did not require participants to navigate using positional tracking input (e.g., real walking), though that was certainly possible. However, this increases the likelihood of users stepping outside of the tracking space, where visuo-motor synchronicity between the user and avatar cannot be assured, which is likely detrimental to embodiment. + +## 7 CONCLUSION + +While 3PP is a popular perspective for non-VR games, most VR applications are experienced from 1PP. 3PP in VR is typically implemented using a controller, which offers low embodiment. Embodiment illusion research can reduce biases regarding race, gender, and age, but because it uses 1PP it requires a mirror. We present a novel embodied locomotion method that blends real walking using full-body skeletal tracking with head-tilt based locomotion.
In addition to letting users see their avatar, our locomotion method allows users to navigate beyond available tracking-space constraints and is minimal in terms of required sensors (a single depth camera). Not surprisingly, our user study found that controller input was better in terms of performance and usability, but we did not find a difference in VR sickness incidence. Our method did offer a significantly higher avatar embodiment than using a controller, which is an important finding for games as well as embodiment illusion applications. The low VR sickness scores for both methods suggest that perspective could play a role in VR sickness incidence. + +## REFERENCES + +[1] Azure Kinect examples. https://assetstore.unity.com/packages/tools/integration/azure-kinect-examples-for-unity-149700. + +[2] Azure Kinect latency. https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/816. + +[3] VR roomscale room size survey. https://www.reddit.com/r/Vive/comments/4fqq4a/vr_roomscale_room_size_survey_answers_analysis/. + +[4] I. B. Adhanom, N. N. Griffin, P. MacNeilage, and E. Folmer. The effect of a foveated field-of-view restrictor on VR sickness. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 645-652. IEEE, 2020. + +[5] M. Al Zayer, I. B. Adhanom, P. MacNeilage, and E. Folmer. The effect of field-of-view restriction on sex bias in VR sickness and spatial navigation performance. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019. + +[6] J. Bhandari, P. MacNeilage, and E. Folmer. Teleportation without spatial disorientation using optical flow cues. In Proceedings of Graphics Interface 2018, GI 2018, pp. 162-167. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2018. doi: 10.20380/GI2018.22 + +[7] J. Bhandari, S. Tregillus, and E. Folmer. Legomotion: Scalable walking-based virtual locomotion.
In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, VRST '17, pp. 18:1-18:8. ACM, New York, NY, USA, 2017. doi: 10.1145/3139131.3139133 + +[8] D. Bowman, D. Koller, and L. F. Hodges. Travel in immersive virtual environments: An evaluation of viewpoint motion control techniques. In Virtual Reality Annual International Symposium, 1997, pp. 45-52. IEEE, 1997. + +[9] S. Cmentowski, A. Krekhov, and J. Krüger. Outstanding: A multi-perspective travel approach for virtual reality games. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, pp. 287-299, 2019. + +[10] H. G. Debarba, E. Molla, B. Herbelin, and R. Boulic. Characterizing embodied interaction in first and third person perspective viewpoints. In 3D User Interfaces (3DUI), 2015 IEEE Symposium on, pp. 67-72. IEEE, 2015. + +[11] A. Denisova and P. Cairns. First person vs. third person perspective in digital games. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, 2015. doi: 10.1145/2702123.2702256 + +[12] N. Ducheneaut, M.-H. Wen, N. Yee, and G. Wadley. Body and mind: a study of avatar personalization in three virtual worlds. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1151-1160, 2009. + +[13] I. Evin, T. Pesola, M. D. Kaos, T. M. Takala, and P. Hämäläinen. 3pp-r: Enabling natural movement in 3rd person virtual reality. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, CHI PLAY '20, pp. 438-449. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3410404.3414239 + +[14] A. S. Fernandes and S. K. Feiner. Combating VR sickness through subtle dynamic field-of-view modification. In 2016 IEEE Symposium on 3D User Interfaces (3DUI), pp. 201-210. IEEE, 2016. + +[15] H. Galvan Debarba, S. Bovet, R. Salomon, O. Blanke, B. Herbelin, and R. Boulic.
Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality. PLOS ONE, 12(12):1-19, 2017. doi: 10.1371/journal.pone.0190109 + +[16] M. Gonzalez-Franco and T. C. Peck. Avatar embodiment. Towards a standardized questionnaire. Frontiers in Robotics and AI, 5:74, 2018. doi: 10.3389/frobt.2018.00074 + +[17] M. Gonzalez-Franco, D. Perez-Marcos, B. Spanlang, and M. Slater. The contribution of real-time mirror reflections of motor actions on virtual body ownership in an immersive virtual environment. In 2010 IEEE Virtual Reality Conference (VR), pp. 111-114. IEEE, 2010. + +[18] G. Gorisse, O. Christmann, E. A. Amato, and S. Richir. First- and third-person perspectives in immersive virtual environments: Presence and performance analysis of embodied users. Frontiers in Robotics and AI, 4:33, 2017. + +[19] N. N. Griffin and E. Folmer. Out-of-body locomotion: Vectionless navigation with a continuous avatar representation. In 25th ACM Symposium on Virtual Reality Software and Technology, pp. 1-8, 2019. + +[20] P. Hämäläinen, T. Ilmonen, J. Höysniemi, M. Lindholm, and A. Nykänen. Martial arts in artificial reality. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 781-790, 2005. + +[21] D. M. Hoffman, A. R. Girshick, K. Akeley, and M. S. Banks. Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. Journal of Vision, 8(3):33-33, 2008. + +[22] B. K. Jaeger and R. R. Mourant. Comparison of simulator sickness using static and dynamic walking simulators. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 45, pp. 1896-1900. SAGE Publications, 2001. + +[23] R. Kennedy, M. Lilienthal, K. Berbaum, D. Baltzley, and M. McCauley. Simulator sickness in US Navy flight simulators. Aviation, Space, and Environmental Medicine, 60(1):10-16, 1989. + +[24] R. S. Kennedy, N. E. Lane, K. S. Berbaum, and M. G. Lilienthal.
Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness. The International Journal of Aviation Psychology, 3(3):203-220, 1993. + +[25] K. Kilteni, R. Groten, and M. Slater. The sense of embodiment in virtual reality. Presence: Teleoperators and Virtual Environments, 21(4):373-387, 2012. + +[26] E. Kokkinara and M. Slater. Measuring the effects through time of the influence of visuomotor and visuotactile synchronous stimulation on a virtual body ownership illusion. Perception, 43(1):43-58, 2014. + +[27] J. Lee, M. Kim, and J. Kim. A study on immersion and VR sickness in walking interaction for immersive virtual reality applications. Symmetry, 9(5):78, 2017. + +[28] K. M. Lee. Presence, explicated. Communication Theory, 14(1):27-50, 2004. + +[29] M. Marchal, J. Pettré, and A. Lécuyer. Joyman: A human-scale joystick for navigating in virtual worlds. In 3D User Interfaces (3DUI), 2011 IEEE Symposium on, pp. 19-26. IEEE, 2011. + +[30] M. McCullough, H. Xu, J. Michelson, M. Jackoski, W. Pease, W. Cobb, W. Kalescky, J. Ladd, and B. Williams. Myo arm: swinging to explore a VE. In Proceedings of the ACM SIGGRAPH Symposium on Applied Perception, pp. 107-113, 2015. + +[31] D. Medeiros, R. K. dos Anjos, D. Mendes, J. M. Pereira, A. Raposo, and J. Jorge. Keep my head on my shoulders! Why third-person is bad for navigation in VR. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, pp. 1-10, 2018. + +[32] J. Munafo, M. Diedrick, and T. A. Stoffregen. The virtual reality head-mounted display Oculus Rift induces motion sickness and is sexist in its effects. Experimental Brain Research, pp. 1-13, 2016. + +[33] C. M. Oman. Motion sickness: a synthesis and evaluation of the sensory conflict theory. Canadian Journal of Physiology and Pharmacology, 68(2):294-303, 1990. + +[34] M. Oshita. Motion-capture-based avatar control framework in third-person view virtual environments.
In Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, pp. 2-es, 2006. + +[35] M. Oshita, Y. Senju, and S. Morishige. Character motion control interface with hand manipulation inspired by puppet mechanism. In Proceedings of the 12th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry, pp. 131-138, 2013. + +[36] T. C. Peck, S. Seinfeld, S. M. Aglioti, and M. Slater. Putting yourself in the skin of a black avatar reduces implicit racial bias. Consciousness and Cognition, 22(3):779-787, 2013. + +[37] M. Perez. PC Gamer: A eulogy for tank controls. https://www.pcgamer.com/a-eulogy-for-tank-controls/. + +[38] V. I. Petkova, M. Khoshnevis, and H. H. Ehrsson. The perspective matters! Multisensory integration in ego-centric reference frames determines full-body ownership. Frontiers in Psychology, 2:35, 2011. + +[39] J. D. Prothero, M. H. Draper, D. Parker, M. Wells, et al. The use of an independent visual background to reduce simulator side-effects. Aviation, Space, and Environmental Medicine, 70(3 Pt 1):277-283, 1999. + +[40] R. Rouse. What's your perspective? ACM SIGGRAPH Computer Graphics, 33(3):9-12, Aug 1999. doi: 10.1145/330572.330575 + +[41] P. Salamin, T. Tadi, O. Blanke, F. Vexo, and D. Thalmann. Quantifying effects of exposure to the third and first-person perspectives in virtual-reality-based training. IEEE Transactions on Learning Technologies, 3(3):272-276, 2010. + +[42] P. Salamin, D. Thalmann, and F. Vexo. The benefits of third-person perspective in virtual and augmented reality? In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, pp. 27-30, 2006. + +[43] P. Salamin, D. Thalmann, and F. Vexo. Improved third-person perspective: a solution reducing occlusion of the 3PP? In Proceedings of the 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry, pp. 1-6, 2008. + +[44] E. L.
Schuurink and A. Toet. Effects of third person perspective on affective appraisal and engagement: Findings from Second Life. Simulation & Gaming, 41(5):724-742, 2010. + +[45] M. Slater, B. Spanlang, M. V. Sanchez-Vives, and O. Blanke. First person experience of body transfer in virtual reality. PLoS ONE, 5(5):e10564, 2010. + +[46] A. Somrak, I. Humar, M. S. Hossain, M. F. Alhamid, M. A. Hossain, and J. Guna. Estimating VR sickness and user experience using different HMD technologies: An evaluation study. Future Generation Computer Systems, 94:302-316, 2019. + +[47] P. Tacikowski, J. Fust, and H. H. Ehrsson. Fluidity of gender identity induced by illusory body-sex change. bioRxiv, 2020. + +[48] S. Thompson. Third person VR - first person god. https://medium.com/kids-digital/third-person-vr-948aaa@c3ac6. + +[49] J. A. Thomson. Is continuous visual monitoring necessary in visually guided locomotion? Journal of Experimental Psychology: Human Perception and Performance, 9(3):427, 1983. + +[50] S. Tregillus, M. Al Zayer, and E. Folmer. Handsfree omnidirectional VR navigation using head tilt. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 4063-4068. ACM, 2017. + +[51] S. Tregillus and E. Folmer. VR-STEP: Walking-in-place using inertial sensing for hands free navigation in mobile VR environments. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 1250-1255. ACM, 2016. + +[52] M. Usoh, K. Arthur, M. C. Whitton, R. Bastos, A. Steed, M. Slater, and F. P. Brooks Jr. Walking > walking-in-place > flying, in virtual environments. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pp. 359-364, 1999. + +[53] T. Wada, H. Konno, S. Fujisawa, and S. Doi. Can passengers' active head tilt decrease the severity of carsickness? Effect of head tilt on severity of motion sickness in a lateral acceleration environment. Human Factors, 54(2):226-234, 2012. + +[54] J. Wang and R. W. Lindeman.
Comparing isometric and elastic surfboard interfaces for leaning-based travel in 3D virtual environments. In 3D User Interfaces (3DUI), 2012 IEEE Symposium on, pp. 31-38. IEEE, 2012. + +[55] D. M. Whittinghill, B. Ziegler, T. Case, and B. Moore. Nasum virtualis: A simple technique for reducing simulator sickness. In Games Developers Conference (GDC), p. 74, 2015. + +[56] B. G. Witmer and M. J. Singer. Measuring presence in virtual environments: A presence questionnaire. Presence: Teleoperators and Virtual Environments, 7(3):225-240, 1998. + +[57] D. Zielasko, S. Horn, S. Freitag, B. Weyers, and T. W. Kuhlen. Evaluation of hands-free HMD-based navigation techniques for immersive data analysis. In 3D User Interfaces (3DUI), 2016 IEEE Symposium on, pp. 113-119. IEEE, 2016. + +[58] D. Zielasko and B. E. Riecke. Either give me a reason to stand or an opportunity to sit in VR. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 283-284. IEEE, 2020. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/tufnZGWrWjR/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/tufnZGWrWjR/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..9a50d9d104e858067d991ca4c19f181ae7476182 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/tufnZGWrWjR/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,227 @@ +§ EMBODIED THIRD-PERSON LOCOMOTION USING A SINGLE DEPTH CAMERA + +§ ABSTRACT + +Third-person is a popular perspective for video games, but virtual reality (VR) is primarily experienced from a first-person view.
While a first-person POV generally offers the highest presence, a third-person POV allows users to see their avatar, which might foster a stronger bond, while the higher vantage point generally increases spatial awareness and aids navigation. Third-person locomotion is generally implemented using a controller or keyboard, with users often sitting down, an approach considered to offer low presence and embodiment. We present a novel third-person locomotion method that enables high avatar embodiment by integrating skeletal tracking with head-tilt based input to enable omnidirectional navigation beyond the confines of available tracking space. By interpreting movement relative to the avatar, the user always faces the camera, which optimizes skeletal tracking and requires only a single depth camera. A user study compares the performance, usability, VR sickness, and avatar embodiment of our method against using a controller for a navigation task that involves interacting with objects. Though a controller offers higher performance and usability, our locomotion method offered a significantly higher avatar embodiment. Because VR sickness incidence for both methods was low, it suggests that perspective might play a role in VR sickness. + +Index Terms: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Virtual Reality + +§ 1 INTRODUCTION + +One unique feature of virtual reality (VR) is that it can let you experience being a person of a different race, gender, or age. Embodiment illusion research explores creating illusions of ownership over a virtual body, which is a promising intervention technique to reduce biases. For example, a seminal study [36] demonstrated that when light-skinned participants experienced being a dark-skinned avatar (for example: Figure 1) this reduced implicit racial bias against dark-skinned people. A more recent study [47] explored swapping genders and found this to reduce gender-stereotypical beliefs.
These findings demonstrate the potential of VR as a tool to address current social issues regarding race, age and gender. Because VR is predominantly experienced from a first-person perspective (1PP), establishing the embodiment illusion requires a virtual mirror [17] that allows subjects to fully see themselves. Requiring a stationary mirror to maintain the embodiment illusion limits what kind of scenarios can be explored. It might be interesting to explore non-stationary scenarios (i.e., are you treated differently based on skin color, gender, or age when walking through a busy street?), but that would require the ability for the user to move around in a virtual environment while still being able to see themselves.

Presence, i.e., a sense of being in VR [28], and embodiment, i.e., a sense of being your avatar [25], are important yet closely intertwined qualities for VR that are to a large extent defined by the graphical perspective used [18]. With 1PP, users see the world through the eyes of their avatar, which provides the strongest sense that they are their avatar [38,45] (i.e., embodiment). Because the user and avatar are collocated, this view is most natural to us and most suited to motor accuracy tasks [31,43].

Figure 1: Our 3PP locomotion method integrates full-body skeletal tracking using a single depth camera (Azure Kinect) with head-tilt based input using inertial sensing to enable omnidirectional navigation beyond the confines of the available tracking space. With the depth camera at the location of the avatar and movement interpreted relative to the avatar, the user always faces the camera, which optimizes full-body tracking and keeps instrumentation to a minimum.

A third-person perspective (3PP) allows users to see their avatar from an over-the-shoulder view.
Though a 3PP offers a lower embodiment than 1PP [11], when users can see their avatar it creates a stronger bond than when using 1PP, and the ability to customize the avatar can create a strong identity which reinforces body ownership [12]. The higher vantage point improves spatial awareness [40,44], though avatar occlusion can interfere with precise interaction like aiming [43]. Letting users switch between 1PP and 3PP offers the benefits of each perspective [42]. Because the user and their avatar are not collocated, 3PP generally offers a lower embodiment than 1PP [45]. This can be increased through agency, i.e., when the user and virtual avatar are linked to move synchronously [26].

Locomotion is an essential part of VR interaction [?] and defines presence [8]. Full-body tracking enables visuo-motor synchronicity between the user and the avatar, and when combined with real walking it achieves a high presence and sense of embodiment [18,31]. Though real walking offers the highest presence [52], it is bounded by the available tracking and physical space [7]. To implement omnidirectional 3PP locomotion, full-body tracking requires multiple cameras [31] or extensive user instrumentation [18]. Current consumer VR systems do not feature full-body tracking; positional tracking is limited to the head-mounted display (HMD) and the two controllers. For 1PP, such limited tracking is not an issue as users only see their hands, but for 3PP, tracking only three joints is not sufficient for animating an avatar that offers high embodiment [18].

Because users need to be able to navigate beyond the confines of the available walking space, 3PP locomotion is typically implemented with a controller. A controller offers low presence and embodiment, which is further exacerbated by the fact that users will often sit down when positional tracking is not fully used [58].
Currently few VR experiences use 3PP, though it is a popular perspective for non-VR games (e.g., Fortnite). While some have suggested that 3PP is not a suitable perspective for VR [31], there are benefits of using 3PP that have not been fully investigated yet. Locomotion facilitated using a controller (i.e., joystick or touchpad) is more likely to induce VR sickness [22,52], a major barrier to the success of VR. Especially sudden movements, e.g., jumping, falling or the user taking damage without corresponding head motions, can exacerbate visual-vestibular conflict, a major cause of VR sickness [33]. One remedy is to use a rest frame [39], i.e., a part of the screen with no optical flow, for example, a virtual nose [55]. Visual discomfort can also result from a vergence-accommodation conflict (VAC) [21]. VR headsets use a flat screen to simulate depth of field, which creates a disparity between the focal point of objects in the virtual world (vergence) and the actual physical surface of the screen (accommodation).

For 3PP, because the camera follows the avatar from behind, it can largely dampen out sudden motions [48], like a Steadicam. Because the avatar is likely the focus of a user's gaze during locomotion, it serves as a rest frame. With 1PP users look at objects across their entire field of view, but the higher vantage point of 3PP largely confines objects to a plane in the lower half of the user's view, which might alleviate VAC [49]. Developing a better understanding of how perspective affects visual discomfort and VR sickness is important for VR [5].

We present a hybrid 3PP locomotion method that offers a high sense of embodiment by integrating real walking, implemented using full-body tracking, with tilt-based omnidirectional locomotion. Our goal was to bring a popular gaming perspective to the domain of VR while preserving a high embodiment.
Our interface could enable embodiment illusion research [36] from a 3PP, which would not require users to look at themselves in a mirror using 1PP. It advances over existing 3PP methods [18,31] as locomotion is not confined by the available tracking space, and it removes the need to hold a non-immersive controller [13]. Due to the unique implementation of steering, the user always faces the camera, and our approach is minimal in terms of required instrumentation. In addition to understanding how perspective affects embodiment and motor accuracy, we investigate VR sickness.

§ 2 RELATED WORK

A number of studies have investigated how perspective affects presence, motor accuracy and embodiment [25] for HMD-based VR. Factors that can affect embodiment include: location of the body; body ownership, agency and motor control; and external appearance [16].

Salamin et al. [42] conducted one of the first studies to investigate perspective for HMD-based VR and found preliminary evidence that 3PP is better for navigation, while motor actions like opening a door or putting a ball in a cup were performed better in 1PP. A follow-up study by the same authors [41] found no difference in error rate between perspectives for a stationary ball-catching task, though 3PP offered better distance estimation. Slater et al. [45] evaluated how perspective affects the body transfer illusion and found that 1PP offered the highest embodiment. Debarba et al. [10] conducted an extensive study investigating the effect of perspective and synchronous/asynchronous avatar rendering on the performance of a target reaching task. Full-body motion capture was implemented using an optical tracking system with wearable markers. Synchronous rendering of an avatar offered the highest performance and embodiment in terms of body ownership and self-location, with no difference between perspectives.
A follow-up study by the same authors [15] evaluated perspective for a similar target reaching task and found that giving users the option to switch between 1PP and 3PP offers a strong sense of embodiment, though subjective body ownership was strongest in 1PP. Gorisse et al. [18] evaluated the effect of perspective on performance, presence and embodiment for an object perception and deflection task as well as a navigation and interaction task. No difference between perspectives on presence or agency was found, though 1PP enables more accurate interactions with objects while 3PP provides better spatial awareness. Medeiros et al. [31] performed an extensive study that investigated the effect of perspective and avatar realism on navigation performance and embodiment. Navigation tasks using real walking included avoiding objects and going through a tunnel. Full-body motion capture was implemented using an array of 3D depth cameras (Kinect). A 3PP can offer the same sense of embodiment, spatial awareness and navigation performance as a 1PP when using a realistic avatar representation, but performed worse without realism.

Focusing on 3PP locomotion methods that enable navigation at scale, the following approaches are closely related. Hamalainen et al. [20] present Kick Ass Kung-Fu, a martial arts installation that captures the user with a regular camera, embeds their image in a 2D fighting game, and translates their movements to an avatar. Oshita [34] presents a motion capture framework for 3PP locomotion for large-screen VR. Full-body motion capture is implemented using an optical tracking system, and a combination of walking-in-place and arm swinging allows for navigating beyond the confines of the tracking space. Omnidirectional navigation is not supported due to the requirement to keep facing the screen. No user study results were reported. Work by the same author [35] explored using hand gestures to control an avatar in 3PP on a large screen.
Locomotion is achieved using a walking-in-place gesture performed with the fingers of the user's right hand. A user study demonstrated the feasibility of this approach but revealed issues with locomotion at scale.

Cmentowski et al. [9] present a VR locomotion method called Outstanding that lets users switch between 1PP and 3PP. Other than physical walking using positional tracking, travel in 1PP is limited, and users switch to 3PP to travel beyond the limits of the available tracking space. The camera remains stationary to avoid generating optical flow that could lead to VR sickness. When in 3PP, users navigate their avatar by pointing at a destination with a raycast from their motion-sensing controller. A user study comparing Outstanding to regular teleportation found a significant increase in spatial orientation, with no VR sickness and no difference in presence.

A very similar approach, called out-of-body locomotion, was presented by Griffin et al. [19] and published around the same time. A difference from Outstanding is that in 3PP the user can steer their avatar using the touchpad on their controller, which offers more precise navigation. If the user breaks line of sight with their avatar, it automatically switches back to 1PP. A user study compared out-of-body locomotion to regular teleportation using an obstacle navigation task and found that it required significantly fewer viewpoint transitions, with no difference in performance or VR sickness incidence.

3PP-R [13] brings 3PP to VR by rendering a miniature world that orbits with the user's viewpoint and shows a miniature avatar. The avatar's ability to mimic the user's motions is limited because only the hands and HMD are tracked. Users navigate their avatar primarily using a controller, and though positional tracking is supported, it is bounded by the tracking space boundaries. In a user study, the authors found 3PP-R to lead to less VR sickness than 1PP.
§ 3 DESIGN OF EMBODIED 3PP LOCOMOTION

Prior studies [18,31] show that a 3PP can offer high embodiment, but it requires visuo-motor synchronicity between the user and avatar. This is generally facilitated using full-body motion capture, which requires extensive user instrumentation [10,15,18] and/or an array of (depth) cameras [31]. For locomotion, full-body tracking facilitates real walking, which offers high presence [18], but this is generally bounded by the available tracking space (i.e., when using external cameras) or the available physical space (i.e., when using wearable sensors).

On most consumer VR systems, users rely on a combination of real walking and an alternative locomotion technique (ALT) like teleportation. Though teleportation allows for navigating at scale, it offers a low presence [8], which is a problem when using it for 3PP locomotion, as it can be assumed to offer a low embodiment as well. Other hybrid locomotion techniques have been proposed that aim to offer high presence by combining real walking with an ALT, e.g., walking-in-place (WIP) [6], arm swinging [30] or head-tilt [50]; these offer high presence because they generate some of the proprioceptive/vestibular cues that are generated during real walking.

To facilitate 3PP locomotion with a high sense of embodiment, we propose a hybrid locomotion method that combines full-body-tracking based real walking with head-tilt input. Head-tilt is a subset of leaning input (i.e., whole-body tilt), a type of input that has been popularized by hoverboards [50]. Leaning has been explored for virtual locomotion, where it has been found to offer high presence [29,54]. The choice of head-tilt as opposed to other high-presence ALTs like WIP or arm swinging was motivated by positive results from earlier studies.
For an obstacle navigation task, head-tilt outperformed both WIP and a controller, while there was no significant difference in presence compared to WIP. There was also no significant difference in VR sickness compared to a controller [50]. For a bimanual target acquisition/deflection task, head-tilt offered a significantly higher presence than using a controller or teleportation, though its performance was lower than teleportation [?]. No significant difference in VR sickness was found between head-tilt, WIP and using a controller. An earlier study also found no significant difference in VR sickness incidence between leaning input and a controller [29]. We have not found any research showing head-tilt input to increase VR sickness, and there is evidence that head-tilt can reduce car sickness [53], which is closely related to VR sickness. From an implementation perspective, head-tilt can be implemented with high accuracy using the inertial measurement unit (IMU) that is present in the HMD. Though WIP can be implemented with an IMU [51], arm swinging requires either controllers [30] or skeletal tracking, which is likely to be less accurate. Head-tilt also allows the user to retain independent control of their hands. This is useful, for example, when grabbing or punching an object or enemy while running. A limitation of head-tilt is that it impedes the user's ability to freely look around while locomoting [51].

WIP and arm swinging generally do not support omnidirectional navigation (e.g., moving laterally or backwards). For WIP this can be achieved in a hands-free way by combining WIP with head-tilt [50]. A closely related 3PP implementation [34] integrates WIP with positional tracking, but this setup does not allow for omnidirectional navigation and requires users to keep facing the camera. Full-body motion tracking generally requires using expensive cameras or extensive user instrumentation.
Consumer depth cameras are low cost and require no user instrumentation, but to allow for occlusion-free skeletal tracking from all directions, multiple cameras with different viewpoints [31] must be integrated. Some rudimentary skeletal tracking is possible using wearable sensors (i.e., VIVE trackers), but since they only track the torso and feet, this does not allow for accurately animating a 3D avatar (e.g., elbow and knee joints are missing).

Given these hardware and tracking considerations, we decided to implement our 3PP locomotion method to require only a single depth camera. To enable this, we were inspired by a now abandoned 3PP control scheme popularly known as tank control, in which the user controls their avatar's movement relative to the coordinate system of the avatar. Up/down input moves the avatar forwards/backwards in the direction it is currently facing, while left/right input rotates the avatar (clockwise/counterclockwise). This differs from current 3PP control schemes, where movement is defined relative to the virtual camera's look direction. Early 3D games used 3D objects in combination with 2D pre-rendered backgrounds, which looked better but required using multiple predefined fixed in-game camera perspectives. This introduced a usability problem: if the user was navigating their avatar in a particular direction, cutting to a different camera perspective could make the game interpret user input differently from what was intended. Tank control does not have this problem, as it interprets movement relative to the avatar, independent of the camera. When 3D rendering capabilities improved, this control scheme was largely abandoned [37].

Figure 2: Head-tilt implemented using orientations acquired from an IMU.

In a 3PP, the virtual camera is placed behind the avatar, facing the avatar's back. Movement is issued relative to the virtual camera's look direction.
Whereas in non-VR 3PP experiences the camera is typically controlled using a mouse or joystick, we, like most VR experiences, pair the virtual camera to the position and orientation of the user's head to reduce visual-vestibular conflict. We then implement tank controls based on head-tilt for locomotion for the following reason. For optimal full-body tracking, users must keep facing the camera to avoid occlusion. Tilt-based tank control ensures the user always faces the camera during locomotion, as steering rotates the virtual world relative to the avatar. When a user turns their head, the avatar moves out of the user's field of view, as the avatar's location does not change. Our tank control scheme further ensures users always face the camera by imposing the constraint that the user must be looking at their avatar to engage in locomotion.

If we used camera-based movement, when the avatar rotates we would have to rotate the camera as well to maintain an over-the-shoulder 3PP view. The optical flow from the camera orbiting around the avatar would be higher than when using tank controls and might induce VR sickness.

We combine tracking data from three different sources to implement our technique:

 * Positional tracking data from the VR HMD is used to place and orient the virtual camera. The user can freely move around in the available tracking space, and a grid is shown to indicate to the user that they are approaching the tracking boundary.

 * A depth camera estimates full-body joint positions, which are used to animate the virtual avatar in real time. If the user walks around in the available tracking space of the depth camera, the virtual avatar does the same. For this to work in conjunction with the positional tracking input of the VR system, the two tracked spaces must overlap. The user must also adhere to the tracking space constraints.
But since users are tethered to their avatar, there is a risk of pushing or dragging the avatar outside the tracking space. We prevent this by not letting the avatar cross the tracking boundary, which acts as a warning system to the user. For example, if a user is walking forward towards the camera, the avatar will stop when it reaches the tracking boundary, and the user must take care to return inside it.

Figure 3: A finite state diagram of the tilt locomotion. w_min, s_min and t_min represent the walking, strafing and turning thresholds respectively, with the assumption that the user is facing the virtual avatar.

 * Inertial sensing, acquired using the HMD's IMU, is used to enable head-tilt locomotion. Head-tilt is calculated from the three degrees of rotation of the HMD: pitch, roll and yaw (see Figure 2). The pitch of the head dictates forward or backward movement, while the roll of the head is used for strafing when the user is not moving. We combine pitch and roll to implement tank controls when the user is moving forward. To allow users to look around freely without engaging in movement, a dead zone has been defined, and roll or pitch must exceed a certain threshold to activate movement. Roll and pitch can further be coupled to the avatar's locomotion speed to support variable locomotion speeds. A known limitation of head-tilt based locomotion [50] is that it limits the user's ability to freely look around, as this changes the direction of locomotion. Users can freely look around when standing still while not looking at their avatar. During locomotion, users can still look around with their eyes without moving their head, though this is constrained by the limited field of view of VR HMDs.

Figure 3 depicts a finite state machine diagram of the supported movement types.
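This state logic can be sketched compactly. The following is illustrative Python rather than our Unity implementation; the function and state names are our own, and the threshold values are those reported later in the paper.

```python
# Illustrative mapping of head pitch/roll (in degrees) to a locomotion state,
# following the finite state diagram in Figure 3. Names are illustrative;
# thresholds match those reported later in the paper.
W_MIN_FWD = 14.0    # forward walking threshold (pitch)
W_MIN_BWD = -11.0   # backward walking threshold (pitch)
S_MIN = 20.0        # strafing threshold (roll), while standing still
T_MIN = 20.0        # turning threshold (roll), while walking

def tilt_state(pitch, roll, facing_avatar):
    """Return the avatar's locomotion state for the current head tilt."""
    if not facing_avatar:
        return "idle"  # locomotion is only engaged while facing the avatar
    walking = pitch > W_MIN_FWD or pitch < W_MIN_BWD
    if walking and abs(roll) > T_MIN:
        # tank-control steering: the world rotates around the avatar
        return "turn_left" if roll < 0 else "turn_right"
    if walking:
        return "walk_forward" if pitch > W_MIN_FWD else "walk_backward"
    if abs(roll) > S_MIN:
        return "strafe_left" if roll < 0 else "strafe_right"
    return "idle"  # inside the dead zone: the user can look around freely
```

In the actual system the same decisions are made per frame from the IMU orientation, with the tilt magnitude additionally driving locomotion speed.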
First, when standing still, as long as the user's head roll and pitch remain below their thresholds, the user can look around freely and even turn their head 180° to look behind them. If the head tilt goes beyond a threshold, the following can happen. If the user tilts their head forwards or backwards past the threshold, their avatar walks forward or backward. If the avatar is standing still, tilting left or right makes the avatar strafe left or right. On the other hand, when the user tilts forward and then tilts left or right, they steer their avatar and the virtual world is rotated around the avatar.

The tilt-based locomotion implementation integrates seamlessly with positional tracking using the depth camera. Taking tracking constraints into account, the avatar is moved by any amount of observed skeletal displacement, while a fixed distance between the camera and the avatar is maintained (except when approaching the tracking boundaries). Figure 4 provides an overview of all possible forms of locomotion and the corresponding head-tilt inputs.

Additionally, complex types of motion can be supported, like jumping or crouching, which can be activated using a gesture. For example, a short hop detected using inertial sensing can be used to trigger a much higher jump. A study has shown that switching between 1PP and 3PP still enables a high embodiment [15], and we believe transitioning from real walking to head-tilt locomotion is a smaller change than a 1PP to 3PP transition and will preserve a high embodiment.

§ 4 USER STUDY

The goal of the user study was to evaluate the performance, usability, sense of embodiment and VR sickness incidence of our embodied locomotion method and to compare it to using a controller.

§ 4.1 INSTRUMENTATION

Full-body skeletal tracking was accomplished using a Microsoft Azure Kinect DK.
For our experiment, it operated at a resolution of 640 × 576 pixels at 30 frames per second. Latency was measured at 35 ms, which we deemed acceptable [2]. We placed the camera at a height of 1 m on a tripod stand, which based on preliminary trials seemed optimal for skeletal tracking with a user located at around 2 m distance. From our experience, the Kinect sensor prefers certain distances depending on the user's height and body type. Therefore, before conducting the experiment, we made sure that the Kinect sensor was able to properly track each user at the distance they were standing, and made adjustments if needed.

For our HMD, we used the Oculus Rift S, a popular PC VR platform that allows full inside-out tracking of the HMD and two controllers using multiple cameras housed in the headset. The Oculus Rift S was specifically chosen for its inside-out tracking capability. Because the Kinect sensor projects infrared (IR) dots, VR systems that also rely on infrared tracking can suffer interference. Specifically, we tested the Vive Pro and it was not compatible with our setup.

The Oculus Rift S offers a 1440 × 1280 per-eye resolution at 80 Hz and a field of view of around 110°. We used a high-end PC (Ryzen 7 1700X, 16GB RAM, NVIDIA GTX 1080Ti) to run our VR application. For our study, we configured our tracking space to have a size of 2.5 m × 2.5 m, which is an average-sized tracking space [3]. The Kinect camera was configured to operate within a depth range of approximately 3 m (0.5-3.86 m). The 75° field of view of the camera means that the width of the tracked space decreases closer to the camera. As a result, near the camera the tracked width was slightly smaller than that of the VR system. The two tracking spaces were aligned to have maximum overlap.
SteamVR's chaperone system keeps the user within the available tracking space and thus also keeps them visible to the Kinect camera most of the time.

For our study, since we are assessing locomotion performance, we compare our technique to using a Microsoft Xbox One wireless gamepad. Though trackpad or thumbstick input is available on VR motion-sensing controllers, most commercially successful 3PP VR experiences (e.g., Lucky's Tale) are primarily experienced seated using a gamepad, and participants are also most likely to be familiar with a gamepad.

§ 4.2 VIRTUAL ENVIRONMENT

For our navigation task, we designed a virtual environment with a path for the user's avatar to follow. Path-based navigation tasks have been used in closely related studies on VR sickness [4,14]. We designed a winding path in an open environment, demarcated by wooden boards (see Figure 5). The path contains a few sharp angles and turns requiring fast and precise control. A criticism [19] of existing studies is that most locomotion methods are evaluated in use cases that involve only navigation and no interaction with the environment, even though interaction is a common use case for many VR experiences like games.

Figure 4: Examples of how particular head-tilt motions are interpreted into avatar locomotion.

Since we were interested in evaluating the embodiment of our 3PP locomotion method, we designed an obstacle course that requires navigation as well as interaction with objects. We made sure it was long enough to take at least 7-10 minutes to traverse from start to finish. A study on VR sickness found that 2 minutes of optical flow exposure using a VR HMD [46] is already enough to elicit VR sickness symptoms in susceptible participants.

We placed 22 obstacles in the form of log stacks on the path. Users were tasked with jumping over these obstacles.
We also placed 136 balloons along the path, with at least 5 meters between balloons. 68 balloons were placed to the left and 68 to the right of the center of the path to compensate for handedness. We asked the users to pop the balloons by hitting them with the avatar's hands; balloons only pop when there is a collision with the hands. Figure 6 shows both tasks in the virtual environment.

We developed the environment in Unity 2019.1.11f1. SteamVR plugin version v2.5 was used to implement the VR functionality. A translation gain $\delta$ of 1.0 was used, so a 1.0 m displacement in the real world corresponded to a 1.0 m viewpoint translation in the virtual environment. We used the 3D avatar that came with an Azure Kinect example package for Unity [1]. To follow the avatar from a 3PP, we implemented a follow camera. A point at a height of 1.8 m and at a distance of 1.65 m behind the avatar was selected as the target for the follow camera, based on preliminary trials. The camera smoothly moves towards this target location using a sigmoid function, which helps dampen motions caused by the avatar jumping, and it also rotates smoothly to match the avatar's forward direction. We implemented the two locomotion methods in 3PP.

Embodied locomotion. As described in the design section, this method combines outputs from the HMD's positional tracking and IMU and the Azure Kinect sensor. Roll and pitch are interpreted to support navigation in any of the four egocentric directions, which is easy for the user to understand as it maps to joystick controls. Navigation by means of head tilt is enabled only when the user is facing the avatar. Whether the user is facing the avatar is detected by calculating the angle between the HMD's forward vector and the vector from the HMD's position towards the avatar.
If this angle is below a threshold (which in our study was set to 15°), the user is considered to be facing the avatar. Also, we only engage movement when roll or pitch exceeds a minimum threshold. This gives users greater freedom to look around.

Figure 5: Virtual environment showing the path to be navigated.

Figure 3 presents a simplified finite state diagram of our embodied locomotion system. The user starts in the idle state. Given that they are facing the avatar, the user then has the option to make the avatar walk forward/backward or strafe sideways. Walking forward is enabled by a forward head tilt above a threshold (w_min = 14°). For walking backwards, w_min was set to -11°. While the avatar is walking, if the user rolls their head to the left or right, the avatar turns in that direction (t_min = 20°). On the other hand, if the avatar is standing still, a head roll makes the avatar strafe to the left or right (s_min = 20°). Values for these thresholds were determined experimentally from a small number of preliminary trials. Each of the thresholds w_min, t_min and s_min is accompanied by a maximum value (w_max, t_max and s_max respectively, also determined experimentally). We use these minimum and maximum values with an inverse linear interpolation function to get our final input values in the range of 0 to 1. These input values are used to linearly interpolate between movement animations. We used the 'root motion' feature of our movement animations, which means that the avatar's locomotion speed is coupled to the particular animation being played and its speed.
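The facing test and the inverse linear interpolation step described above can be sketched as follows. This is illustrative Python rather than our Unity implementation; the function names are our own, and the upper bound of 40° merely stands in for the experimentally determined maximum values.

```python
import math

def facing_avatar(hmd_forward, hmd_pos, avatar_pos, max_angle_deg=15.0):
    """True if the angle between the HMD's forward vector and the vector
    towards the avatar is below the facing threshold (15 degrees here)."""
    to_avatar = [a - b for a, b in zip(avatar_pos, hmd_pos)]
    dot = sum(f * t for f, t in zip(hmd_forward, to_avatar))
    norm = math.hypot(*hmd_forward) * math.hypot(*to_avatar)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= max_angle_deg

def tilt_input(angle_deg, t_min=14.0, t_max=40.0):
    """Inverse linear interpolation of a head-tilt angle between its minimum
    threshold and maximum value, clamped to [0, 1]. The result drives the
    blend between movement animations. t_max = 40 is a made-up example."""
    return min(1.0, max(0.0, (angle_deg - t_min) / (t_max - t_min)))
```

A tilt exactly at the threshold yields an input of 0 (no movement), and tilts at or beyond the maximum saturate at 1 (full animation speed).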
In our implementation the locomotion speed ranges between 0 and 4.5 m/s.

We implement jumping by calculating the headset's speed in the global up direction and comparing it against a predefined threshold. We maintain a moving average (n = 4) of the speed to smooth out the data and avoid accidental jump commands. While the avatar is in the air, head tilt can be used to manipulate, to an extent, how far and in what direction the avatar jumps. This is done by applying a physics force to the avatar, scaled by the tilt input.

The Kinect sensor is used for skeletal body tracking. We map the joint orientation and position data to the avatar, so the movements of the user and the avatar are coupled. The Kinect is capable of estimating 32 body joint positions. To implement the body tracking functionality in Unity, we used the Azure Kinect Example Project asset as the basis. This package, however, had to be modified to achieve our desired functionality, e.g., masking parts of the body to be controlled by body tracking while other parts are driven by animation. The skeleton tracked by the Kinect can show jitter in the joints. To mitigate such anomalies, the example project comes with a 'smooth factor' option that linearly interpolates between previous joint positions to create a more stable skeleton. We set this to a value of 10 for our study.

When the user is locomoting using head-tilt, we animate the legs using a default animation clip and only the upper body matches the motions made by the user. This breaks visuo-motor synchronicity, which could be detrimental to embodiment. However, with no animation it looked like the avatar was flying while dragging its feet, which in preliminary trials seemed to induce a lower sense of embodiment. Users can still move their avatar's arms while walking forward and interact with objects.
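The jump trigger described above (the headset's upward speed smoothed by a moving average over n = 4 samples and compared against a threshold) can be sketched as follows. This is illustrative Python rather than our Unity code; the 0.8 m/s threshold is a made-up example value, whereas the window size comes from our implementation.

```python
from collections import deque

class JumpDetector:
    """Illustrative jump trigger: smooth the HMD's vertical speed with a
    moving average (n = 4) and fire when the average exceeds a threshold.
    The 0.8 m/s threshold is a made-up example value."""

    def __init__(self, n=4, threshold=0.8):
        self.window = deque(maxlen=n)  # last n vertical speed samples
        self.threshold = threshold

    def update(self, up_speed):
        """Feed the latest vertical HMD speed (m/s); return True on a jump."""
        self.window.append(up_speed)
        avg = sum(self.window) / len(self.window)
        # only trigger once the window is full, to avoid spurious startup jumps
        return len(self.window) == self.window.maxlen and avg > self.threshold
```

Averaging over a short window filters out single-frame tracking spikes, so a brief upward jitter of the headset does not launch the avatar.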
When not locomoting there is full body visuo-motor synchronicity and users can walk around as long as they remain visible to the sensor. + +Controller based locomotion. This uses a standard 3PP control scheme where the left analog stick of the controller is used for avatar movement. Instead of using the right analog stick for rotating the camera, as is common in non-VR 3D experiences, the user's HMD controls the camera, which minimizes visual-vestibular conflict. The left and right bumpers of the controller are used to activate the left hand and right hand punch, respectively. To activate jumping, we used the controller's A button. In this control scheme, the avatar locomotion speed and animations work the same way as in the embodied scheme. The difference is that instead of head tilt determining the input, we use the left analog stick's input: pushing the stick left-right is analogous to head roll, while pushing it forward-backward is comparable to head tilt. + +### 4.3 EXPERIMENT DESIGN + +The experiment was a $1 \times 2$ design with locomotion method as the independent variable (two levels: embodied and controller). We inspect the effect of this factor on task performance, usability, embodiment and VR sickness. To account for order and learning effects, half of the participants started with the embodied condition (Group A) while the remaining half started with the controller condition (Group B). Because the effects of VR sickness can linger for up to 24 hours, we conducted each session on a separate day with at least 24 hours of rest between sessions to minimize the transfer of VR sickness symptoms across sessions. + +### 4.4 PROCEDURE AND DATA COLLECTION + +The experiment was conducted in a user study space that was free of noise and physical obstacles.
When participants arrived for the first session they were briefed on the goal of the study, the outline of the experiment, the risks involved, the data collected, and the details of the training and experiment sessions. The distance between the VR HMD's lenses was adjusted to match the participant's interpupillary distance (IPD). Participants were then asked to stand in the middle of the tracking space. We first made sure that the Kinect sensor could track each participant properly. Then the users were assisted with putting on the VR headset so that they could start the training session. + +Figure 6: Tasks that users were required to perform during the navigation task. Left: punching a balloon; right: jumping over an obstacle. + +The goal of the training session was to familiarize the participant with the controls used for the traditional 3PP control scheme and the techniques used for the embodied 3PP locomotion method. Participants were given an opportunity to try out both locomotion techniques in a short task that was similar to the experiment task. + +Upon completing the training task, participants started the first experiment session, where they were instructed to follow the obstacle course at their own pace. During the experiment session we recorded the number of balloons popped, the number of obstacles jumped over, the total duration of the experiment session, and the amount of time spent walking on the path. + +After completing each experiment session, participants were asked to fill out three questionnaires: 1) the Simulator Sickness Questionnaire (SSQ) [24], a standardized questionnaire used to measure the incidence of VR sickness, 2) a usability questionnaire, which allowed participants to provide qualitative feedback about the usability of the technique they just experienced, and 3) a standardized avatar embodiment questionnaire [16] to measure the embodiment of the avatar.
As recommended by the authors of the SSQ [23], we use it only to assess post-exposure VR sickness symptoms. + +In this study, we use the standardized avatar embodiment questionnaire to address three aspects of virtual embodiment that are applicable to our experiment: body ownership, agency and motor control, and location of the body [16]. This questionnaire was developed for assessing embodiment in 1PP, while here we adapt it for avatar embodiment in 3PP, which is different. Thus, following the recommendations of the standardized questionnaire, we only use a subset of the questions (i.e., Q1 to Q14) that are needed to calculate the metrics for these three aspects, based on whether we considered the questions relevant to 3PP and to the navigation task we had participants perform. The responses to the individual questions are combined into these three metrics based on the formulae provided in [16]. + +Finally, after completing both experiment sessions, participants were asked to fill out a post-study questionnaire which was used to collect demographic information that included their age and sex, as well as their frequency of playing video games, familiarity with controller based third person navigation, frequency of using VR, and tendency to get motion and/or VR sick, using a five-point Likert scale. On average, the whole study took about 45 minutes to complete across the two sessions. All participants were compensated with a \$15 Amazon gift card for their time, and the user study was approved by an IRB. + +Figure 7: Summary of participants' ratings of their frequency of playing video games, familiarity with 3P navigation using a controller, frequency of using VR, and their tendency of getting motion or VR sick on a scale of 1 (never) to 5 (very frequently). The results are reported in the form of percentage (count).
+ +### 4.5 PARTICIPANTS + +Recruitment of participants was significantly impeded by the COVID-19 pandemic and our University shutting down as a result of it. Nevertheless, before the shutdown we were able to recruit fifteen participants for our study. However, one participant could not complete the study because she could not be properly tracked by the Kinect sensor. Fourteen participants (4 females, 10 males, average age = 24.9, SD = 4.6) were able to complete both sessions and their data are analyzed in this study. + +## 5 RESULTS + +Participants were asked to rate their frequency of playing video games, familiarity with controller based third person navigation, frequency of using VR, and tendency to get motion and/or VR sick on a scale of 1 (never) to 5 (very frequently). The results are summarized in Figure 7. To measure task performance, we logged the position of the avatar in the virtual environment, time stamps, the number of balloons popped, the number of obstacles jumped, and the percentage of time participants spent on the path (i.e., if participants didn't deviate from the path this number would be 100%). + +We analyzed these quantitative results using a one-way repeated measures MANOVA. For qualitative results, all participants answered an avatar embodiment questionnaire, an SSQ, and a usability questionnaire after each trial. The responses collected through the embodiment and usability questionnaires were analyzed using nonparametric methods (Wilcoxon signed-rank paired test). + +### 5.1 TASK PERFORMANCE + +| Locomotion type | Embodied (SD) | Controller (SD) |
|---|---|---|
| Total time (s) | 519.47 (104.3) | 423.25 (6.6) |
| % targets hit | 89.86 (6.4) | 95.75 (3.7) |
| % obstacles jumped | 92.53 (1.1) | 99.35 (1.7) |
| % on track | 95.60 (3.8) | 97.59 (0.5) |

Table 1: Quantitative results for each locomotion method. Standard deviations are listed in parentheses. + +Table 1 lists the task performance results for both methods.
For our analysis we used (1) total time, (2) % of targets hit, (3) % of obstacles jumped, and (4) % of time spent on the track. A one-way repeated measures MANOVA found a statistically significant difference between locomotion techniques on the linear combination of the dependent variables ($F_{4,10} = 3.689$, $p = .043$, Wilks' $\lambda = .404$, partial $\eta^2 = .596$). Mauchly's test of sphericity indicated that the assumption of sphericity had been met. + +Follow-up univariate tests found statistically significant differences between locomotion methods for total time ($F_{1,13} = 12.710$, $p = .003$, partial $\eta^2 = .494$), targets hit ($F_{1,13} = 11.910$, $p = .004$, partial $\eta^2 = .478$), and obstacles jumped ($F_{1,13} = 5.571$, $p = .035$, partial $\eta^2 = .300$). However, there was no statistically significant difference between locomotion methods for time spent on track ($F_{1,13} = 3.828$, $p = .072$, partial $\eta^2 = .227$). + +Figure 8: Diverging stacked bar chart of the percentages of the Likert scores for the subjective usability rankings of each locomotion method. + +### 5.2 USABILITY + +After completing each session participants were asked to rate the locomotion method they just tested in terms of accuracy, efficiency, learnability and likeability using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). The results are summarized in Figure 8. A Wilcoxon signed-rank test was used to test for differences in Likert scores.
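A paired signed-rank Z of this kind can be computed as in the following simplified pure-Python sketch (zero differences are dropped and the variance carries no tie correction; the data in the example is illustrative, not the study's responses):

```python
import math

def wilcoxon_z(xs, ys):
    """Normal-approximation Z for a paired Wilcoxon signed-rank test.
    Simplified sketch: assumes a non-empty set of nonzero differences."""
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks for ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1           # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w = min(w_plus, n * (n + 1) / 2 - w_plus)  # smaller of the two rank sums
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w - mu) / sigma

print(round(wilcoxon_z([2, 3, 4, 5, 6], [1, 1, 1, 1, 1]), 2))  # -2.02
```

In practice a statistics package (e.g., SciPy's `scipy.stats.wilcoxon`) would be used; the sketch only shows where the reported Z values come from.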
We found statistically significant differences for accuracy ($Z = -2.341$, $p = .019$), efficiency ($Z = -2.46$, $p = .014$), learnability ($Z = -2.124$, $p = .034$), and likeability ($Z = -2.077$, $p = .038$). + +### 5.3 SIMULATOR SICKNESS + +We used the SSQ results to calculate the SSQ subscores: nausea, oculomotor discomfort, disorientation, and the total score, as described in [56]. A one-way repeated measures MANOVA did not find a statistically significant difference between locomotion techniques on the linear combination of the SSQ subscores ($F_{3,11} = 1.185$, $p = .360$, Wilks' $\lambda = .756$, partial $\eta^2 = .244$). Mauchly's test of sphericity indicated that the assumption of sphericity had been met. + +### 5.4 AVATAR EMBODIMENT + +Analyzing the avatar embodiment questionnaire responses, we found that participants preferred the embodied locomotion method over the controller method. The results are summarized in Figure 10. Participants reported a significantly higher ownership of the virtual avatar when using the embodied locomotion method ($Z = -2.482$, $p = .013$) than when using the controller based locomotion method. Participants also reported significantly higher scores for agency and motor control when using the embodied locomotion method ($Z = -2.485$, $p = .013$) compared to the controller based locomotion method. Additionally, as measured by the location of the body metric, participants felt the embodied locomotion method provided a significantly higher embodiment illusion ($Z = -2.131$, $p = .033$) compared to the controller based locomotion method. + +Figure 9: Summary (means) of the four subscores of the SSQ: (N)ausea, (O)culomotor discomfort, (D)isorientation, and the (T)otal (S)everity score. Error bars show the standard error of the mean.
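The SSQ subscore computation referenced in Sect. 5.3 scales raw symptom-cluster sums by fixed weights. The weights below are the standard ones from the original SSQ formulation by Kennedy et al.; we assume them here since the paper defers the details to [56], and the item-to-cluster assignment is presumed to have been done beforehand:

```python
# Standard SSQ weights (Kennedy et al., assumed here): each raw
# symptom-cluster sum is scaled by a fixed factor, and the total
# severity score scales the sum of all three raw cluster sums.
SSQ_WEIGHTS = {"nausea": 9.54, "oculomotor": 7.58, "disorientation": 13.92}
TOTAL_WEIGHT = 3.74

def ssq_scores(raw_n, raw_o, raw_d):
    """Compute the four SSQ subscores from raw symptom-cluster sums."""
    return {
        "nausea": raw_n * SSQ_WEIGHTS["nausea"],
        "oculomotor": raw_o * SSQ_WEIGHTS["oculomotor"],
        "disorientation": raw_d * SSQ_WEIGHTS["disorientation"],
        "total": (raw_n + raw_o + raw_d) * TOTAL_WEIGHT,
    }

print(ssq_scores(1, 2, 1))
```

For the mild symptom profile in the example (cluster sums 1, 2, 1), the total severity score is $(1+2+1) \times 3.74 = 14.96$, well below the moderate-sickness range.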
+ +Figure 10: Embodiment scores for both methods measured by the metrics ownership of the body (Ownership), agency and motor control (Agency), and location of the body (Location). + +## 6 DISCUSSION AND FUTURE WORK + +Performance. Not surprisingly, in terms of performance the controller was significantly better than our embodied locomotion method. Controllers require very little physical effort to use, and prior studies have repeatedly found that a controller is faster and easier to use, mostly because most users are highly familiar with this type of input. The differences in performance seem quite reasonable, e.g., total time (22% slower), balloons hit (6% lower), obstacles jumped (7% lower) and time on track (2% lower). + +Usability. The controller was found to be more accurate, efficient, easier to use, and overall preferred to our embodied locomotion method. The significantly higher familiarity with using a controller (as evidenced by the 100% agree-or-higher scores for learnability) largely explains why users found the controller more accurate, efficient, and better liked than our embodied locomotion method. However, to contextualize these usability results it is important to distinguish locomotion from interaction (e.g., interacting with objects). To hit a balloon and jump over an obstacle our embodied locomotion method required real physical movements (i.e., punching and jumping), which is slower and more error prone than pressing a button. Though the Kinect tracking is quite accurate, there was a small amount of latency (especially affecting jumping) which would sometimes cause participants to run into an obstacle and come to a standstill, rather than jumping over it. Hitting a balloon while running also required precise timing and was simply harder to perform with our embodied method.
Some participants were observed to navigate backwards when they missed hitting a balloon so that they could try hitting it again, which added to their time. Looking at locomotion efficiency, there was no significant difference in the percentage of time on track between the locomotion methods. Overall these factors contributed to a worse rated usability for our embodied method, which was further exacerbated by participants' high familiarity with using a controller. Though participants were given enough time to familiarize themselves with our embodied interface, with greater proficiency over time the rated performance and usability could improve. + +Embodiment. Our study did find evidence that our embodied locomotion method offered a significantly higher avatar embodiment than using a controller, which was the main objective of our method. The comparison of our method to a controller was made largely for benchmark purposes, with no reasonable expectation that our embodied locomotion method would outperform a controller in performance or usability (users were more familiar with a controller). For embodiment illusion research [36], locomotion performance may not be an important factor, but usability probably is. We did not explore using our 3PP locomotion method for embodiment illusion research, which is something we hope to explore in collaboration with experts in this area. + +Though our approach lets users see their avatar, existing embodiment illusion research uses 1PP with a virtual mirror, which allows for face-to-face interaction, which is important [17]. Because our approach uses a fixed camera behind the avatar, users only see the avatar's back; we aim to develop a hands-free control scheme that lets users rotate the camera so they can see their avatar, including its face, from different angles.
Our study did not assess presence, as this is determined by many factors including the VR experience itself, but we hope to investigate this in future work. + +VR Sickness. There was no significant difference in VR sickness incidence between the two locomotion methods as measured using the SSQ. Head tilt generates some of the vestibular cues that are present in walking, which are notably absent when using a controller, so there was the possibility that this could alleviate visual-vestibular conflict. However, we did not find any differences, and this corroborates an earlier study that compared a controller to head-tilt input for locomotion (using a 1PP) and also did not find a significant difference in VR sickness as measured using the SSQ [50]. An important finding was that VR sickness incidence was low. Six of the fourteen participants were asymptomatic, and the overall observed average total SSQ scores were very low (33 and 36 for the two methods, out of a maximum of 235), which corresponds to very mild VR sickness. Though experimental conditions differ, many prior studies [14, 27, 32, 57] have found that a controller generally induces moderate to high levels of VR sickness. A notable difference is that the prior studies all used a 1PP where we used 3PP. The low VR sickness could be the result of looking at an avatar during locomotion, which could serve as a rest frame [39]. Because no studies have investigated how perspective affects VR sickness, this is something we certainly aim to investigate in future work. + +Limitations. Our user study involved a low number of participants ($n = 14$) as our University stopped human subject research halfway through recruitment due to COVID-19. Our embodied locomotion method will only work with HMDs that feature non-IR inside-out tracking, as this does not interfere with the IR used by the depth camera.
Another limitation, imposed by the specifications of the Azure Kinect camera that we used, is that for the user to always be visible, the tracking space must be defined within the depth range of the camera. Larger tracking spaces could possibly be supported using multiple Kinect cameras, and if the user is always visible from every angle, tank controls can be abandoned. A related issue is that the camera does not track a rectangular region, so a perfect match between the tracking space of the VR system and that of the Kinect was not possible. Multiple cameras can solve this issue as well by covering a larger space than the VR system does. We have been able to integrate positionally tracked controllers into our method, which improves skeletal tracking and provides rotational information for the hand joints, which is useful, for example, when holding an object. Our study only evaluated navigation and limited interaction with objects. Our study did not require participants to navigate using positional tracking input (e.g., real walking), though that certainly was possible. However, this increases the likelihood of users stepping outside of the tracking space, where visuo-motor synchronicity between the user and avatar cannot be assured, which is likely detrimental to embodiment. + +## 7 CONCLUSION + +While 3PP is a popular perspective for non-VR games, most VR experiences are experienced from 1PP. 3PP for VR is typically implemented using a controller, which offers low embodiment. Embodiment illusion research can reduce biases regarding race, gender and age, but because it uses 1PP it requires using a mirror. We present a novel embodied locomotion method that blends real walking using full body skeletal tracking with head-tilt based locomotion. In addition to letting users see their avatar, our locomotion method allows users to navigate beyond available tracking space constraints and is minimal in terms of required sensors (e.g., a single depth camera).
Not surprisingly, our user study found that controller input was better in terms of performance and usability, but we did not find a difference in VR sickness incidence. Our method did offer a significantly higher avatar embodiment than using a controller, which is an important finding for games as well as embodiment illusion applications. The low VR sickness scores for both methods suggest that perspective could play a role in VR sickness incidence. \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/uYX0tEWUmTO/Initial_manuscript_md/Initial_manuscript.md b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/uYX0tEWUmTO/Initial_manuscript_md/Initial_manuscript.md new file mode 100644 index 0000000000000000000000000000000000000000..8ea7948252c1fbdb02c97de98a6bef982cca1552 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/uYX0tEWUmTO/Initial_manuscript_md/Initial_manuscript.md @@ -0,0 +1,347 @@ +# Comparing Selection Techniques for Tightly Packed 3D Objects + +Category: Research + +## Abstract + +We report on a controlled user study in which we investigated and compared three selection techniques for discovering and traversing 3D objects in densely packed environments. We apply this to cell division history marking as required by plant biologists who study the development of embryos, for whom existing selection techniques do not work due to the occlusion and tight packing of the cells to be selected. We specifically compared a list-based technique with an additional 3D view, a 3D selection technique that relies on an exploded view, and a combination of both techniques. Our results indicate that the combination was most preferred. List selection has advantages for traversing cells, while we did not find differences for surface cells.
Our participants appreciated the combination because it supports discovering 3D objects with the 3D explosion technique, while using the lists to traverse 3D cells. + +Index Terms: H.5.2 [Information Interfaces and Presentation]: User Interfaces—Interaction Styles + +## 1 INTRODUCTION + +Selection as an interaction technique is fundamental for data analysis and visualization [49]. In 3D space, selection requires users to find and point out one or more 3D objects (or subspaces), and a sizable amount of research has been carried out on different 3D selection techniques [1, 2, 5, 8, 20]. Among them, ray-casting [1, 35, 41] and ray-pointing [1, 4, 38] for object selection as well as lasso techniques [50, 51] for point clouds or volumetric data are common techniques. These existing techniques come to a limit, however, when data objects are tightly packed and no space whatsoever exists between adjacent data objects, so that internal structures are inaccessible. + +Such selection problems in dense environments arise in many scientific domains where researchers deal with data that originates from sampling properties in 3D space. We are motivated, in particular, by botany, where cells are densely packed in captured data, virtually without any room between them and with half or more of them being enclosed [20], such as in a confocal microscopy dataset of a plant embryo's cellular structure (Fig. 1). With such data, botanists explore the development of plant embryos based on their cellular structure. Using a segmented dataset, they reconstruct the history of the embryo's cellular development [37].
This process requires them to select each cell, one by one, examine its immediate neighborhood, select each potential candidate in the neighborhood to check the shared surface and relative position, and then decide on a likely sister cell that originated from the same parent as the target cell. This process is continued for all cells, and previous assignments are potentially revised if needed. The cells are naturally tightly packed, so we ask the question of how to effectively select 3D objects in such spaces, in particular for realistic datasets with 200 cells or more. + +Currently, botanists use several tools to study cell division, but none of them provides efficient selection techniques for 3D objects in densely packed environments; they are unable, e. g., to filter cells in a view for easier selection or to support marking based on 3D data rather than just 2D (TIFF) images. Researchers currently mark the cells manually, starting by targeting cells for which it is easiest to find the respective sisters. From the set of 2D images, they then identify all neighbors and examine their shapes and that of the surface the two cells share. Based on their past experience, they then decide on the most likely sister for the target cell. + +We thus worked with them to understand their needs and support them with a new approach for interactively deriving the cell division tree. To better investigate the effectiveness of the needed selection techniques in this specific densely packed data scenario, we divided the cell selection into two parts: discovery and traversal. Discovery means to find a specific cell to assign within the whole embryo, while traversal refers to picking a specific range of cells in order. With this definition, we can describe the cell division process as repeatedly discovering target cells and traversing all their neighbors.
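The discover-and-traverse loop just described is performed manually by the botanists; purely as an illustration of its structure, it can be formalized as the following sketch, in which the neighborhood data and the likelihood function (here a hypothetical shared-surface score) stand in for the real segmentation data and the expert's judgment:

```python
def assign_sisters(cells, neighbors, likelihood):
    """For every cell (discovery), traverse its neighbors and pick the
    candidate with the highest likelihood of being its sister cell."""
    sisters = {}
    for cell in cells:                        # discovery: visit each target cell
        best, best_score = None, float("-inf")
        for cand in neighbors[cell]:          # traversal: inspect each neighbor
            score = likelihood(cell, cand)    # e.g., shared surface, position
            if score > best_score:
                best, best_score = cand, score
        sisters[cell] = best
    return sisters

# Toy data: shared-surface areas between neighboring cells (hypothetical).
surface = {("a", "b"): 5.0, ("a", "c"): 2.0, ("b", "c"): 0.5}
nbrs = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
score = lambda x, y: surface[tuple(sorted((x, y)))]
print(assign_sisters(["a", "b", "c"], nbrs, score))  # {'a': 'b', 'b': 'a', 'c': 'a'}
```

In the actual workflow, of course, the "likelihood" is the biologist's visual assessment, and assignments may be revised; the sketch only captures why every neighbor must be traversed before deciding.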
We then explored three selection techniques: list selection (List), explosion selection (Explosion), and a combination of both (Combination). List provides traditional lists to indirectly select cells, while Explosion displays an explosion view of the embryo and allows users to directly select cells. Combination supports both techniques in one interface. We were also interested in how efficient these techniques are when selecting cells in different positions (on the surface and enclosed). We thus designed a user study to compare the techniques and two cell positions. We measured task completion times, assignment accuracy, and clicking ratios (the number of clicks for each neighbor). We also gathered subjective feedback from participants such as their interaction strategies and preference. + +![01963eb1-f60e-7126-aa51-fd5120a89155_0_928_371_715_188_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_0_928_371_715_188_0.jpg) + +Figure 1: Plant embryo dataset with 201 cells (87 "occluded" cells): (a) a segmented cross section from confocal microscopy, (b) the 3D model, and (c) a part of the desired cell lineage tree—the botanists' goal to be able to study the embryo's development. + +Our results show most participants favored the Combination technique: they preferred to control the cell distance, often discovering targets in the 3D view, and then using the lists to traverse the neighbors. List performed better than Explosion when assigning occluded cells, while there was no clear performance difference between these two techniques for cells on the surface. With our results on the techniques' performance and people's feedback about interaction, we derived suggestions for future 3D selection technique design and discuss current limitations.
In summary, we contribute: + +- a controlled experiment to study selection in dense 3D datasets with traditional input devices, whose results shed light on the performance of three selection techniques for two cell positions (on the surface or occluded), + +- an analysis of participants' preferred strategies for List, Explosion, and Combination as well as the two involved steps (discovery and traversal) of cell selection, and + +- a discussion of selection techniques for dense 3D environments. + +## 2 RELATED WORK + +The actual tasks we employed in our work on selection techniques focus on object discovery and traversal, rather than simple picking. Below we thus first review related work about discovery and access techniques for 3D objects. We then discuss general interaction techniques besides selection for dense datasets, especially for desktop-based interaction. We end this section with a small survey of cell visualization applications, our application domain. + +### 2.1 Discovery and access techniques + +3D discovery is essential for finding the target cells among numerous cells. It needs to be able to deal with occlusion, yet should maintain the spatial relationship of an object and its context [20]. Elmqvist and Tsigas [20] summarized a range of techniques to discover objects in dense datasets in virtual environments. They identified five design patterns: multiple viewports, virtual X-ray tools, tour planners, volumetric probes, and projection distorters. One of our approaches (explosion selection) falls into the last of these categories, while our list selection seems to be a separate category as it uses an abstract representation of the elements. + +Though there are ways of dealing with the occlusion problem, the direct interactions including discovery remain limited, and to completely resolve occlusion, usually multiple techniques must be combined [2].
To ease discovery, researchers have also used object highlighting or dimming of the remainder of the objects. In the past, space distortion [21-23] and distinguishing the objects in a region [45] have been extensively studied for object highlighting, while object deaccentuation has been achieved with transparency [16, 18, 21] and selective object hiding [21]. These techniques, however, have not been fully tested for discovering a large number of objects such as in our case, because such datasets have high orientation demands and an extreme lack of visual cues. Here, our application has an advantage: it is guaranteed that the sister cell, at any hierarchy level, is next to its sibling. + +Multiple techniques have also been studied for precise access [20], and the spatial occlusion cases are most relevant for us. In 3D environments and, especially, VR, researchers have investigated using dedicated 3D selection tools to address the occlusion issue [2]. The most common techniques are ray-casting [30, 34, 35], ray-pointing [38], the bubble cursor [12, 34], sphere-casting refined by QUAD-menu (SQUAD) [28], and the virtual hand [39, 40]. Among these, ray-casting and SQUAD were claimed to be suitable for dense objects [10], and numerous studies have explored ways to improve these two techniques. For example, JDCAD [33] allowed people to use cone selection to freely create the selection volume, which avoided ray-casting's drawback of using an additional 1D input to select 3D objects. Grossman et al. [24] proposed a ray cursor that provided all the intersected targets and allowed users to choose among them. Later, Baloup et al. [4] developed RayCursor to automatically highlight the closest target and support manually switching the selection of intersected objects. As for SQUAD, to offset the cumbersome steps in accessing dense objects, Cashion et al. [10] added a dimension called Expand to enable the sphere to zoom.
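The ray-cursor idea of collecting all targets a ray intersects, ordered by depth, so the user can choose among them can be sketched with a simple ray-sphere intersection test (spheres stand in for selectable objects here; this is an illustration of the general idea, not any cited system's implementation):

```python
import math

def ray_sphere_hits(origin, direction, spheres):
    """Return all spheres the ray intersects, nearest first -- the core
    of ray-cursor techniques that let users pick among all hits.
    Assumes a normalized direction and a camera outside the spheres."""
    hits = []
    for center, radius in spheres:
        oc = [o - c for o, c in zip(origin, center)]
        b = 2 * sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - 4 * c                 # discriminant (a = 1 for unit ray)
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2   # nearest intersection distance
            if t >= 0:
                hits.append((t, (center, radius)))
    return [s for _, s in sorted(hits, key=lambda h: h[0])]

spheres = [((0.0, 0.0, 5.0), 1.0), ((0.0, 0.0, 3.0), 1.0)]
print(ray_sphere_hits((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), spheres))
```

For tightly packed cells, however, such a ray only ever reaches the outermost objects, which is exactly why the techniques above do not suffice for our data.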
Furthermore, to help users accurately select an object they see, researchers have explored advanced access techniques that can calculate which object users likely intend to select. For example, Haan et al.'s [13] IntenSelect technique dynamically calculated a score for objects inside a set volume and allowed people to choose from the objects with the highest scores. Similarly, Smart Ray [24] continuously calculated and updated object weights to help users determine which object to select when multiple targets were intersected. All these techniques are efficient in discovering and accessing objects in sparse datasets, yet are not suitable for highly dense environments with no space between possible selection targets. Moreover, in practical scenarios people are typically aware of which target to select, while in our cell division application the biologists make the decision by referring to the shared surface between the two cells and thus have to traverse a number of potential targets to assess their suitability. Also, the learning cost of new techniques could be high. + +### 2.2 Interaction techniques for dense datasets + +In virtual 3D cell manipulations, biologists need to precisely select objects from dense sets, without knowing in advance which objects may need to be selected. Previous studies [36] have demonstrated that users tend to stick with familiar mouse interaction. In addition, past work [6, 48] has shown that low-DoF input devices such as mouse and keyboard can easily achieve such tasks with high accuracy. These findings supported our decision to study cell division with familiar input devices. Nonetheless, in virtual 3D environments, and especially in VR, discovering an enclosed object can consume more time [2], even though selecting is easier due to better depth perception in stereoscopy.
In our dense embryo cell scenario we thus relied on a traditional projected-3D environment with mouse and keyboard input to accommodate our domain's need for high selection accuracy. +

Researchers have also explored various methods for manipulating objects with mouse and keyboard input. For example, Houde [26] raised the idea of creating a handle box around the 3D object and, similarly, modern 3D modeling applications such as Blender and Rhino allow users to individually transform 3D objects with mouse and keyboard. Applications also provide layers for organizing objects and for selecting multiple items from a list. Even though in some controlled environments the object layout can be rearranged to avoid occlusion [44], in our case the cells' spatial relationships must not be changed, to provide our users with a faithful representation. +

Past work on selection in dense datasets has focused on structure-aware approaches (e.g., [14, 15, 19, 50, 51]). Unlike particle or volumetric data, which contains vast numbers of points or a sampled data grid without explicit borders, our embryo cell data has dedicated cells that could be picked, yet they are so tightly packed that many are not accessible for traditional picking. Lasso-based selection is also not appropriate because we do not need to enclose regions but need to match two dedicated objects as sister cells. We thus require interaction techniques that preserve the respective positioning at least locally and allow us to access all cells in an efficient and effective way. +

### 2.3 Cell visualization

Cell data visualization has been found to be useful in helping biologists gain knowledge about cell development. Various academic tools (e.g., OsiriX [42], Fiji ImageJ [43], OpenWorm [46], and Icy [11]) and commercial software (e. 
g., Avizo, Imaris) provide advanced live-imaging techniques and computational approaches that allow users to clearly observe and interact with their data. The interaction in these tools, however, remains simple: mouse-clicking the cells on the surface of an embryo gives users access to specific variables and actions. For example, MorphoNet [31] uses Unity to visualize diverse types of cell data on a website, allowing users to visually explore cells. They left-click to target a cell, and can rotate and zoom using specific keyboard combinations. This interaction process is smooth for a few cells, but it gets slow and tedious for large datasets (i.e., with > 100 cells). Though the software can hide and show cells, it only provides access to the current outside of the embryo. No single tool among the mentioned software is applicable to cell division annotation, so we worked to develop and study dedicated selection techniques for the entire embryo. +

## 3 STUDY DESIGN

To understand how people can best select objects in densely packed 3D settings (in our application domain, to discover target cells and traverse their neighbors) and, ultimately, to process large datasets using these interaction techniques, we designed the experiment we describe below. We pre-registered this study (osf.io/yze5n/?view_only=19925d8cfed240f9bd11c24e5bf98995) and it was approved by our institution's ethical review board. +

### 3.1 Interaction techniques

We chose all techniques based on previous related work and on implementations biologists are using now. Given our decision to focus on desktop settings, an obvious interaction technique for selecting from a set of segmented objects is a list widget (Fig. 2(a)). Participants could discover the target cells from the list only. It has the advantage of mapping the objects distributed in 3D space onto a single dimension, for a given order in the set. 
Naturally, there is no such mapping that preserves the objects' original 3D locations, but in our use case researchers need to eventually access all cells from the set. Moreover, this interaction also lends itself easily to the task of marking the cell division history, as we can algorithmically extract the potential sister cells of a selected target from the segmented dataset and show them in another list widget. For each item in the list, we only show a name because, in the real scenario, biologists refer to such names. In addition, we did not include further data because biologists evaluate the shapes and neighborhoods of cells in the 3D view rather than making decisions based on numeric cell property values such as a shared surface area. +

![01963eb1-f60e-7126-aa51-fd5120a89155_2_155_166_1493_271_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_2_155_166_1493_271_0.jpg) +

Figure 2: Three main interaction targets for the techniques compared in the study: (a) List, (b) 3D Explosion, and (c) Combination selection. Target cells are marked in orange and selected cells are red. In all three cases the 3D view was visible to the participants. +

![01963eb1-f60e-7126-aa51-fd5120a89155_2_209_529_604_260_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_2_209_529_604_260_0.jpg) +

Figure 3: The focused view of a target cell and the associated number shown near the neighbor cell's surface (the red cell is the target cell and the yellow cell is the neighbor cell with its associated number). +

Nonetheless, the 3D location and 3D shape of the respective cells do play a role, both for the initial target selection (as researchers tend to solve the easy cases first) and for the decision on the sister cell (made by inspecting the geometry of the shared surface). We were thus also interested in the performance of selection techniques directly in the projected 3D view. We addressed the inherent object density and occlusion issues by employing 3D explosion techniques [32, 47]. 
Using this approach we created additional space between the cell objects, both for the initial selection of a target cell in the embryo (e.g., Fig. 2(b)) and for the examination and, ultimately, selection of the sister cells for this target (e.g., Fig. 3). +

Another fundamental approach to exploring the inside of 3D objects or volumetric datasets in visualization is the use of cutting planes (e.g., [25]). We also explored this technique as a basis for exploration and selection, as it conceptually relates to the slices of the confocal microscopy approach in our application domain. With this technique, researchers would be able to move and orient a cutting plane freely in 3D space, and we would then show the intersected cells in an unprojected slice view where they could be clicked for selection. Pilot tests showed, however, that this approach was not promising: it was difficult to reason from the intersected cells to their correct 3D shape, and correct selections took a long time, so we did not pursue this technique further in our experiment. +

Instead, we merged the first two techniques into a Combination technique in which participants had the choice between using List and Explosion selection. Moreover, in all techniques, including the List selection, we showed the 3D projection of the embryo or a target cell's direct environment, as our collaborating biologists always decide which two cells are sisters based on the shape and size of their interface (i.e., the shared surface between the two cells). We thus also used an explosion representation for the List selection technique, to guarantee that our participants could observe the shared surface. In the Explosion and Combination techniques, however, we allowed users to freely adjust the explosion degree and thus to control the amount of space they need for navigating in 3D space. 
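The explosion displacement itself is simple to express. We do not spell out the exact scheme here, so the following is a minimal sketch under the assumption of a radial explosion away from the embryo centroid, with `degree` (a hypothetical parameter name) driven by the slider widget:

```python
def explode(cell_centers, degree):
    """Compute per-cell offsets that push each cell radially away from the
    embryo centroid; degree is the slider-controlled explosion factor
    (0 keeps the original, packed layout)."""
    n = len(cell_centers)
    centroid = [sum(c[i] for c in cell_centers) / n for i in range(3)]
    return [[(c[i] - centroid[i]) * degree for i in range(3)]
            for c in cell_centers]

# Example: three cells; each offset points away from their common centroid.
offsets = explode([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]], 1.0)
```

Applying such offsets to the cell meshes only translates them, so the cells' relative directions are preserved, which is what allows users to still reason about neighborhoods in the exploded view.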
+

### 3.2 Tasks

With these interaction techniques we aimed to support the practical task of deriving the cell lineage for an entire embryo. We thus modeled the tasks in our experiment on the approach our collaborating experts (three plant biologists, all with more than 20 years of professional experience) take to derive the cell division history, using the tools described in Sect. 2. We followed the same process in our experiment: participants were first asked to select a non-marked target cell from the embryo. We then showed them this cell's immediate neighborhood in the focused view (Fig. 3; both as a 3D view and, in the case of the List and Combination techniques, as a list), and then asked them to select the correct cell based on which cell is most likely the sister of the target. +

This approach would naturally limit us to participants with years of experience in plant biology cell lineage analysis and to the cell division scenario only. To circumvent these restrictions, we implemented a proxy for the biologists' experience: as we show a target cell's neighborhood, we asked participants to select each potential neighbor, after which we showed a pre-defined "likelihood" (an integer $\in [1, 99]$) of it being the correct sister cell. We chose this number randomly and independently of the specific situation because we were interested in general feedback on selection in dense environments with non-expert participants. We displayed this number in the 3D environment hidden from the current view to force participants to use 3D navigation (i.e., rotation) to reveal the number; this interaction mimics the 3D evaluation of the interface between two cells that the biologists would do. Participants would then need to find the cell with the highest number to make a correct selection. In addition, this highest number was not necessarily 99, so that participants had to examine each potential neighbor at least once. 
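The likelihood proxy above can be generated as in the following sketch; the function name and the sampling scheme are our illustrative assumptions, matching only the stated constraints that every value lies in [1, 99] and that the maximum is not pinned to 99:

```python
import random

def assign_likelihoods(neighbor_ids, seed=None):
    """Pre-define a 'likelihood' in [1, 99] for each potential sister.
    Sampling without replacement guarantees a unique maximum, so exactly
    one neighbor is the correct answer -- and that maximum is whatever
    happened to be drawn, not necessarily 99."""
    rng = random.Random(seed)
    values = rng.sample(range(1, 100), len(neighbor_ids))
    return dict(zip(neighbor_ids, values))

likelihoods = assign_likelihoods(["n1", "n2", "n3", "n4"], seed=42)
correct_sister = max(likelihoods, key=likelihoods.get)
```

Because the maximum is unique, the "correct" sister is well defined for every trial, while participants still have to reveal each neighbor's number to be sure they found it.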
+

### 3.3 Datasets

We used a real embryo dataset provided by our collaborators, which contained 201 cells. We chose this dataset due to its realistic size. Experimental time limits, however, meant that participants could not assign sisters for all cells; we thus created three sets of target cells for them to mark, each with 10 cells. We were interested in the influence of the cell position (surface vs. occluded), so we created all three sets with 5 cells on the embryo's surface and 5 cells that were enclosed by other cells. To reduce learning effects, the three sets shared no cells, nor did they share any of the respective neighbors. Each set plus its 1-neighborhood (i.e., direct neighbors) was thus completely distinct from the other sets plus their respective 1-neighborhoods, which guaranteed that any past assignment (even if done incorrectly) would not affect any future marking. Otherwise, if two target cells had shared a potential neighbor, a participant marking this neighbor as the sister of either target would mean that the other target loses a sister candidate. +

![01963eb1-f60e-7126-aa51-fd5120a89155_3_152_148_716_443_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_3_152_148_716_443_0.jpg) +

Figure 4: Study interface (combination selection shown). +

### 3.4 Interface

In all three conditions, the interface contained three main parts: instruction panel, 3D view, and operation panel (see Fig. 4). The operation panel in all techniques contained two buttons. One could be used to automatically relocate the whole embryo to the center of the 3D view, in case participants got lost, and the other enabled participants to jump to the next task. In List and Combination, this panel included a global list of all cells in the left list view and a focused neighbors list, showing only the direct neighbors of a selected target cell. 
We scaled the interface to completely fill the screen of each participant's computer, with the ratio of each part's size to the overall interface size being fixed. In the instruction panel, we displayed the study progress state and a brief introduction to the interaction in the task. We placed the 3D view on the left, while we showed the operation panel on the right. We designed the relative sizes to indicate that the 3D view was the main reference, and such that it was approximately square. Below the 3D view, we placed a horizontal bar widget to allow participants to control the explosion distance between the cells. We placed the button to mark two cells as sisters at the top and in the center, roughly in the middle between 3D view and operation panel, such that the distances to travel to the button from the 3D view or the lists were about the same. We also allowed participants to assign cells by pressing the space bar on the keyboard to further reduce the impact of the actual marking action on completion time. +

To indicate cells from the sets to be marked, we highlighted them in the list via orange icons for List and rendered the cells' 3D shapes in orange in the 3D view for Explosion. In Combination mode, we used both forms of highlighting. When participants clicked on a cell either in the 3D view or in the lists, the cell would be shown in red (for target cells) or yellow (for neighbor cells) in the 3D view and the corresponding item would be highlighted in the lists, as shown in Fig. 4. Finally, we modeled the interaction in the 3D view after commercial 3D modeling software such as Rhinoceros or Blender. Participants could hold the right mouse button to rotate, scroll the wheel to scale, and hold the wheel to pan. The left mouse button in the 3D view was reserved for clicking and double-clicking cells, which distinguished selection from rotation. 
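The 3D-view input mapping described above can be summarized as a small event-to-action table; the event and action names below are illustrative assumptions, not identifiers from our actual implementation:

```python
# Conventional 3D-modeling-style mapping (after Rhinoceros/Blender):
# right-drag rotates, the wheel zooms, wheel-drag pans, and the left
# button is reserved for (double-)clicking cells, so selection never
# conflicts with camera rotation.
ACTIONS = {
    ("drag", "right"): "rotate_camera",
    ("drag", "wheel"): "pan_camera",
    ("scroll", "wheel"): "zoom_camera",
    ("click", "left"): "select_cell",
    ("double_click", "left"): "focus_cell",
}

def dispatch(event, button):
    """Return the action bound to a mouse event, or None if unbound."""
    return ACTIONS.get((event, button))
```

Keeping rotation and selection on different buttons is the design choice that lets users freely orbit the embryo without accidentally changing their selection.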
+

### 3.5 Measures

We assigned a unique participant number to every participant and recorded all data based on this number to guarantee participant anonymity. For all trials, we recorded total completion times, accuracy, and every action participants took, and we tracked the real-time position of the camera. We started the timer when the program had loaded the visualization for each trial and stopped it once the participant triggered the signal for assigning the cell sister (button click or keyboard press). We asked participants to activate the assignment as soon as they had found the sister. After a participant chose the sister for a target, these two cells would disappear in the 3D view and the corresponding items in the lists would be disabled. We then instructed participants to continue with the next assignment and we restarted the timer. We measured the total trial completion time, and we measured accuracy as the ratio of correct assignments among all assignments. Aside from completion time and accuracy, we also recorded the cell selection ratio (click count divided by the neighbor count) to better understand the efficiency of the different techniques. A more efficient selection technique is likely to have a lower clicking ratio, one that is closer to 1. After participants finished all tasks, the examiner conducted a post-study semi-structured interview, focusing specifically on the following questions: Q1: Sort the three techniques by preference; Q2: What strategies did you use in doing the three tasks?; and Q3: Do you have any other comments on the interaction? +

### 3.6 Participants

As our goal was to generally understand object selection in dense datasets and to provide recommendations also for non-botany scenarios, we targeted non-expert participants. We recruited 24 people in total via social networking and our local university's mailing list (8 females, 16 males; 24-31 years old, with a mean age of 26.96 years). 
All participants had at least a master's degree, were right-handed, and were well trained in the use of mouse and keyboard interaction. None of them was color-vision deficient. Twelve of them had previous experience in 3D manipulation, including playing 3D video games, and none of them had prior knowledge about cell division. The latter aspect is important as it suggests that all participants made their assignments based only on the number we showed, rather than on previous knowledge of cell division patterns. +

### 3.7 Procedure

We conducted the experiment via remote video calls due to the limitations that arose from the Covid-19 pandemic for our research environment and for the participants. We minimized the effects of remoteness by checking in advance whether every participant could smoothly conduct the experiment with their preferred devices. We first explained to participants the purpose of our study, asked them to fill in basic demographic information, and had them sign a consent form if they agreed to participate. Because we conducted the study online, we asked those participants who preferred not to install our experimental software themselves to use a dedicated remote interaction tool that let them remotely control the experimenter's computer. The others downloaded and installed the software in advance and shared their screen while they communicated with the researcher via video conferencing. +

We divided the experiment into three blocks, one for each technique. Each block began with a non-timed training session in which the experimenter first explained the task using written instructions in the interface and a study script, and then asked participants to try their best to traverse all the neighbors of a target cell and to find the correct answer as quickly as possible. Before transitioning to the main task, the experimenter ensured that participants understood the task and were able to conduct the tasks correctly and independently. 
After finishing all tasks, we conducted the mentioned post-study interview to explore participants' strategies and individual experiences. +

Our first objective with the experiment was to compare the List and Explosion techniques. We thus only presented these two techniques in the first two study blocks. We counter-balanced the order of both techniques to reduce order effects. Our second objective was to assess how participants would interact when given the choice of using the Combination technique, after having experienced the List and Explosion techniques separately. In the third block we thus always presented the Combination technique to participants. In addition, we were interested in the effect of occluded vs. surface cells, so we alternated between these types and also counter-balanced the type a participant would see first. We did not expect an effect of the specific order of cells in the list view, so we always used the same order (by name) for all participants. In the List and Explosion tasks, we showed the next target cell in orange after participants had finished the previous assignment, while we marked all target cells at the start of a Combination task to explore in which sequence participants would assign them. The order of the specific cell subsets may play a role, so we counter-balanced the order of the three subsets. In total, we thus had a 2 techniques $\times$ 2 cell types $\times$ 3 data subsets design, resulting in 12 combinations, and each possible combination was experienced by two participants. We used 10 trials per technique and the resulting experiment lasted about one hour per participant. +

![01963eb1-f60e-7126-aa51-fd5120a89155_4_149_159_1499_459_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_4_149_159_1499_459_0.jpg) +

Figure 5: Completion time (absolute mean time) for different numbers of cell neighbors in seconds: (a) overall time, (b) List selection, (c) Explosion selection, and (d) Combination selection. 
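The fully counter-balanced design from Sect. 3.7 can be enumerated programmatically; in this sketch the subset labels and the use of cyclic rotations for the subset order are assumptions for illustration:

```python
from itertools import product

# Two technique orders for the first two blocks (Combination always came
# last), two options for which cell type a participant saw first, and
# three orders of the target-cell subsets (assumed cyclic rotations).
technique_orders = [("List", "Explosion"), ("Explosion", "List")]
first_cell_type = ["occluded", "surface"]
subset_orders = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

# 2 x 2 x 3 = 12 combinations; with 24 participants each is seen twice.
conditions = list(product(technique_orders, first_cell_type, subset_orders))
assignment = {p: conditions[p % len(conditions)] for p in range(24)}
```

Assigning participant p the condition with index p mod 12 is one simple way to guarantee that every combination is experienced by exactly two participants.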
+

![01963eb1-f60e-7126-aa51-fd5120a89155_4_148_710_1500_224_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_4_148_710_1500_224_0.jpg) +

Figure 6: Completion time (absolute mean time) in seconds (List in yellow and Explosion in red): (a) the overall results, (b) selection of occluded cells, and (c) selection of surface cells. +

## 4 RESULTS

We now present our experimental results on completion time, accuracy, and clicking ratio for the two selection techniques List and Explosion. We then examine the use of Combination individually, as we cannot analyze it together with the other techniques due to potential order effects. We also compare the performance of the different techniques in assigning cells from the two positions (on the surface or occluded). Cells on the surface (surface cells) typically have fewer neighbors and clearer layers, while enclosed cells (occluded cells) are hidden entirely from an outside view. We also discuss our participants' strategies and subjective feedback. +

We gathered 720 trials in total (24 participants $\times$ 3 tasks $\times$ 10 trials). Recent recommendations from the statistics community led us to analyze the results using estimation techniques with confidence intervals (CIs) and effect sizes [7, 17, 29], instead of a traditional analysis based on $p$-values [3], to avoid dichotomous decisions. We did not find all measurements to be normally distributed, so we used bootstrapped CIs [27] to analyze completion time, accuracy, and clicking ratio. We visualized our output distributions to increase the transparency of our reporting. +

### 4.1 Completion Time

We can naturally assume an impact of neighbor count on completion time, and we indeed observed an approximately linear relationship, both globally for all tasks (Fig. 5(a)) and for the individual tasks (Fig. 5(b)-(d)). The mean neighbor count per dataset, however, was approximately similar (10.4 vs. 10.1 vs. 10.8). 
Moreover, each combination of task with dataset was seen by the same number of participants (fully counter-balanced), so this relationship does not play a role in our remaining global analysis of completion times. +

Techniques. In Fig. 6 we present the absolute mean completion times in seconds for each technique. With List, the average time is 63.81s (CI [56.25s, 74.82s]), while with Explosion, the average time for one target cell is 69.75s (CI [60.64s, 80.26s]). Since the CIs overlap considerably, to better demonstrate the difference in completion time we checked the pair-wise ratio for these two techniques (see Fig. 7). The ratio for List/Explosion is 0.91 (CI [0.86, 1.01]). The upper bound of the List/Explosion CI is 1.01, close to but above 1, so there is some evidence that List selection took less time than Explosion. The absolute difference, however, is only small, as evident in the similar completion times. We also investigated the completion time differences between these two techniques in two task parts: discovery and traversal. For the discovery part (i.e., the accumulated times from the start of a trial to the selection of the target cells), the mean times are 7.57s (CI [6.79s, 8.52s]) with List and 5.23s (CI [4.31s, 6.36s]) with Explosion (see Fig. 8(a)). Since the upper bound of the CI for Explosion is smaller than the lower bound of the CI for List, Explosion is evidently faster for discovering target cells than List. We also checked the pair-wise ratio of List/Explosion, which is 1.45 (CI [1.27, 1.69]) and confirms that List selection needed more time than Explosion for object discovery (see Fig. 9(a)). As for traversal (i.e., the accumulated times for checking all neighbors of a cell), the average time for List is 54.84s (CI [47.98s, 65.12s]), while for Explosion it is 62.26s (CI [54.37s, 71.49s]) (see Fig. 8(b)). 
Because the CIs overlap considerably, we examined the pair-wise ratio to better analyze the difference. As Fig. 9(b) shows, the ratio for List/Explosion is 0.88 (CI [0.82, 0.98]), so there is some evidence that List selection is faster for traversal than Explosion. +

![01963eb1-f60e-7126-aa51-fd5120a89155_5_144_160_1501_145_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_5_144_160_1501_145_0.jpg) +

Figure 7: Pair-wise differences for completion time: (a) the ratio overall, (b) the ratio for occluded cells, and (c) the ratio for surface cells. +

![01963eb1-f60e-7126-aa51-fd5120a89155_5_154_387_714_166_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_5_154_387_714_166_0.jpg) +

Figure 8: Completion time (absolute mean time) in seconds in the two steps (List in yellow and Explosion in red): (a) target cell discovery and (b) neighborhood traversal. +

![01963eb1-f60e-7126-aa51-fd5120a89155_5_174_668_657_367_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_5_174_668_657_367_0.jpg) +

Figure 9: Pair-wise differences for completion time in the two steps: the ratios for (a) discovery and (b) traversal. +

Positions. We were also interested in the possible influence of the cell position on performance. We investigated the average completion time for occluded cells (Fig. 6(b)), which was 79.42s (CI [69.83s, 93.52s]) with List and 88.58s (CI [77.43s, 102.33s]) with Explosion. Because this difference of mean completion times is small and the CIs overlap, we again checked the pair-wise ratio, which is 0.90 (CI [0.84, 0.97]). The upper bound of the CI is again close to 1.0, so there is some evidence that participants could finish the task more quickly with List than with Explosion when dealing with occluded cells. We did the same analysis for surface cells. 
Here, the average times are 51.62s (List; CI [45.05s, 61.23s]) and 54.92s (Explosion; CI [46.87s, 63.27s]), and the pair-wise ratio for List/Explosion is 0.94 (CI [0.86, 1.06]). We thus cannot find much evidence that List selection would be faster than Explosion for assigning surface cells. +

### 4.2 Accuracy

We measured the accuracy of the assignments for the two techniques (List and Explosion) and the two positions. We calculated the accuracy by dividing the count of correct assignments by the total trial count. +

Techniques. We report the absolute mean values of correctness for the two techniques in Fig. 10 and the pair-wise ratio for comparison in Fig. 11. The accuracy was high for both techniques, so we keep three decimals for a better comparison. For List, the absolute mean accuracy is 0.987 (CI [0.963, 0.996]), while for Explosion, the value is 0.933 (CI [0.892, 0.958]). From Fig. 10(a) we can see that all participants found at least 8 correct sisters (as every participant used each technique to make assignments for 10 cells). In addition, the fact that the CIs do not overlap provides evidence that List resulted in more accurate assignments than Explosion. We also analyzed the pair-wise ratio (List/Explosion) to better understand the difference, which was 1.06 (CI [1.03, 1.10]). This result provides evidence that List works more accurately than Explosion, although the mean accuracy values are similar and both high. +

Positions. We also present the absolute mean values of accuracy for the two positions in the two techniques in Fig. 10 and the pair-wise ratios between them in Fig. 11. For occluded cells, the absolute mean values for List and Explosion are 1.000 (CI [NA, NA]) and 0.933 (CI [0.858, 0.967]), respectively (Fig. 10(b)). 
Using the List technique, all participants thus assigned all occluded cells correctly, and we can say that the List technique achieved more correct assignments than Explosion. The pair-wise ratio (List/Explosion), which turned out to be 1.10 (CI [1.03, 1.20]), confirms this finding, yet its lower bound being close to 1 makes this result only weak evidence. For the surface cells, the absolute mean values for the two selection techniques (List and Explosion) are 0.975 (CI [0.925, 0.992]) and 0.933 (CI [0.883, 0.958]). The largely overlapping CIs provide limited information about the differences. The pair-wise ratio is 1.05 (CI [1.01, 1.09]), which also provides only weak evidence that List performed more accurately than Explosion for surface cells. +

### 4.3 Clicking Ratio

We also counted the click events both in the lists and in the 3D view. For both techniques, we excluded the clicks needed for rotation in the 3D view, as these were right clicks, in contrast to the left clicks in the lists or in the 3D view for selection. Thus, we only counted clicks used to access specific cells. We define the clicking ratio as the average number of times participants clicked on each neighbor to get the right answer, i.e., the click count divided by the number of neighbors. Ideally, participants click each neighbor only once to find the right sister, yielding a clicking ratio of 1. In practice, however, participants usually clicked the same cell multiple times. We chose this variable as a factor to evaluate the efficiency of the selection techniques: the more this number deviates from 1, the worse the efficiency. +

Techniques. We report the absolute mean values of the clicking ratio for the two techniques in Fig. 12(a). List had the smaller absolute mean value with 1.37 (CI [1.32, 1.45]), while the value for Explosion was 1.70 (CI [1.58, 1.86]). 
Though the CIs are non-overlapping, which already provides evidence that List has a lower clicking ratio than Explosion, we also calculated the pair-wise ratio of List/Explosion to further explore the differences (Fig. 13(a)). The ratio turned out to be 0.84 (CI [0.77, 0.90]), which provides good evidence that List required fewer clicks than Explosion. +

Positions. We also examined the absolute mean values of the clicking ratio for the two positions. The absolute mean values for occluded cells are 1.31 (List; CI [1.26, 1.38]) and 1.71 (Explosion; CI [1.56, 1.88]), respectively. The upper bound of the List CI being much smaller than the lower bound of the Explosion CI provides evidence that List required fewer clicks than Explosion. The pair-wise ratio (List/Explosion) of 0.81 (CI [0.73, 0.89]) confirms this assessment. For the surface cells, the mean values are 1.45 (List; CI [1.37, 1.56]) and 1.69 (Explosion; CI [1.58, 1.87]) as shown in Fig. 12(c). The confidence intervals are close, so we further checked the pair-wise ratio (List/Explosion), which is 0.88 (CI [0.82, 0.94]). This evidence supports that using List required fewer clicks than Explosion also for surface cells. +

### 4.4 Techniques used in Combination

We analyzed the Combination technique individually because we always presented this technique to participants last: participants first had to learn the individual techniques. In Combination, participants were able to complete the task freely, with both List and +

![01963eb1-f60e-7126-aa51-fd5120a89155_6_134_154_1519_1122_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_6_134_154_1519_1122_0.jpg) +

Figure 13: Pair-wise differences for clicking ratio: (a) the ratio overall, (b) the ratio for occluded cells, and (c) the ratio for surface cells. 
+

![01963eb1-f60e-7126-aa51-fd5120a89155_6_151_1338_723_230_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_6_151_1338_723_230_0.jpg) +

Figure 14: Clicking proportions of List/(List + Explosion) in the Combination task: (a) overall and (b) by neighbor count (for discovery + traversal; $x$ represents the number of cell neighbors and $y$ represents the clicking proportions). +

Explosion available to them. We were interested in how participants would combine them and whether the neighbor count would influence their choice. We thus calculated the proportions of their click counts in the list widgets (over List plus Explosion clicks together) to characterize the strategy, which we show in Fig. 14(a) (top bar; the Explosion click proportion is the complement of the List proportion). The absolute mean value of the list proportion is 0.87 (CI [0.85, 0.90]), meaning that participants clicked more frequently in the list widgets than in the 3D view (for discovery or traversal). We also calculated the proportions for discovery and traversal separately, which are 0.50 (CI [0.37, 0.63]) and 0.79 (CI [0.75, 0.83]). We also analyzed the list clicking proportion individually by cell neighbor count (Fig. 14(b)). As we had noted, however, the number of neighbors varied depending on the dataset, and some neighbor counts received only a few trials. We thus only analyzed those counts with more than 10 trials. In all cases, the average values of the proportion are higher than 0.5, which means participants clicked more often in the list widgets than in the 3D view. Although the differences are small, we observed that the List click proportion increases with a growing number of neighbors. While these numbers suggest a strong preference for list interaction, this observation is skewed by the fact that by far the most clicks naturally happened in the traversal phase (0.082% on average). 
Looking only at target cell discovery, however, in the post-study interview feedback 13/24 participants stated that, after trying and adjusting their strategies, they finally chose to examine the exploded embryo in the 3D view to find the target cells, while the other 11/24 participants checked the list by scrolling from the top to the bottom. We show this difference of strategies in the click proportions in the two lower bars in Fig. 14(a). We also investigated, for the Combination task, the order in which participants chose to assign the cells. According to our logs, 8 participants always stuck to the list order, without taking the cells' positions into consideration. Another two participants switched strategies and ultimately followed the list order. Others simply clicked on random orange cells they saw.

+

### 4.5 Task Strategies

+

We were also interested in our participants' approaches to finding target cells and traversing the neighbors, especially for Explosion, and in their choice of methods in the Combination condition. Here we report the strategies based on participants' statements in the post-study interview, combined with our observations of the participants as they interacted during the experiment. In the List condition, all participants scrolled up and down the cell list to find the orange item and then traversed the neighbors by going through the neighbor list. Participants memorized the largest associated number and either the cell name or its position in the list to complete the task.

+

Because we provided no lists in the Explosion condition, participants could not rely on the same strategies as with List. We thus specifically asked them about their detailed strategies in the 3D explosion condition, organized their ideas, and grouped similar points. To help with traversal, 8/24 (33.3%) participants stated that they mentally divided neighbors into different layers and zones based on their spatial placement.
For staying oriented, 7/24 (29.2%) participants rotated back to the original position every time they finished checking the associated number of a neighbor, while 4/24 (16.7%) tried to rotate the embryo around only one fixed axis. One participant kept the best candidate cell on top during traversal. Another participant observed the relative positions of the cells and mapped them onto a simple shape such as a sphere or triangle; he then traversed neighbors by referring to his chosen shape's corner cells. Other participants tried to memorize the cell shapes, their relative 3D positions, and the largest number seen so far during the trial.

+

During the Combination task, 10/24 (41.7%) participants used the same steps as they did in List because they were afraid of getting lost in the 3D interaction. One person exclusively used the Explosion interaction in the Combination task because she was bored of scrolling the long list. Another 10 participants discovered target cells with Explosion and traversed neighbors with the List technique. Only 3/24 (12.5%) participants chose the techniques based on the number of neighbors: when this number was small they used Explosion, and otherwise the List technique. Among them, two participants discovered target cells with direct interaction in the 3D view, while the third searched for the target cells in the list.

+

### 4.6 Subjective Feedback

+

In the post-study interview we asked about participants' preferences for the three techniques and their general thoughts on the interaction.

+

As Fig. 15 shows, more than half of the participants (16/24) liked the Combination selection most. Two participants considered Combination and List to be equally satisfying, while another one favored the Combination and Explosion techniques equally. The remaining 5/24 participants preferred the List technique. For this technique, participants appreciated its item order (e. g., "much easier to follow which have been clicked"). However, the interaction was troublesome (e. g., "was boring to scroll the list," "I had to fast move the mouse cursor between the lists on the right and 3D cells on the left"). Moreover, when the associated number happened to be similar to the cell name, it was easy to get confused (e. g., "I got messed up with the name and associated number. I forgot which one was the temporally best candidate cell."). Meanwhile, participants stated that they did not pay attention to information such as the shape and relative 3D position of a cell because they only looked at the associated number in the 3D view and otherwise focused on the list ("[I] only remembered the numbers and did not examine the shape"). In the Explosion condition, participants appreciated the convenience of quickly clicking on the cells (e. g., "all [are] the interactions in the 3D view") and the usefulness of being able to control the distance between two cells (e. g., "spreading out the cells is useful in targeting cells"), but they disliked the need to rotate the view because this led them to get lost and forget which cells they had already examined (e. g., "less useful in checking out neighbors," "it was easy to get lost when rotating the embryo ... I am not sure whether I have traversed all the cells or not"). For the Combination, participants liked the freedom to spread out cells and the convenience of the default order in the list ("supports both techniques and I could be quicker"). Nonetheless, some participants would just use the same technique they preferred in the previous two tasks and thought the combination was useless. Others reported confusion ("I struggled to choose the technique"). One participant also reported being bored and tired in doing the last task.

+

![01963eb1-f60e-7126-aa51-fd5120a89155_7_925_149_717_174_0.jpg](images/01963eb1-f60e-7126-aa51-fd5120a89155_7_925_149_717_174_0.jpg)

+

Figure 15: Accumulated participant preference ranks.
Note that we allowed participants to rank two techniques as their first choice and then counted none as the second, resulting in ranks 1, 1, and 3.

+

Commenting on the whole interaction, participants proposed some changes (e. g., "The interaction is good, and it will be better if there is a mark on the cells I have checked in all techniques," "[I] would like to have more context in the background of the 3D view to help orientation," "[you should] show the name of cells in 3D view so that I could have a name order to follow," and "hiding the least possible candidate cell manually would accelerate the process"). Some participants thought the two techniques should not be combined. One participant, e. g., stated that "List has an order and 3D view has another order (layer). These two orders do not have a similar logic or strategy and could not be combined. These two techniques in the same interface will disturb each other's use ... could present a 3D order based on the 3D position and link to 2D order in the list." Though most participants liked the explosion bar, one argued that horizontally moving the bar, for him, did not intuitively represent the conceptual increase of inter-cell distance.

+

## 5 DISCUSSION

+

### 5.1 Performance differences

+

We found evidence that List led to more efficient (faster, fewer clicks) and more precise input than Explosion overall. This indicates that traditional list-based selection was more familiar to participants, compared with 3D interaction, which was unfamiliar to many. Moreover, the List condition provided an order of the potential neighbors of a target, which supported participants in traversing every cell in the list without missing one as well as in remembering the cell with the highest associated number, regardless of potential view manipulations in the 3D view.
In contrast to the overall results and the results for occluded cells, we did not find clear differences in completion time and accuracy between the two techniques for surface cells. This finding may be due to the fact that surface cells usually have fewer neighbors and a clearer arrangement, such that participants had fewer problems when traversing them in the 3D view.

+

We also found that direct interaction in the 3D view has advantages. While the List condition enabled participants to traverse neighborhoods faster than with the Explosion technique, with the latter participants were faster in discovering the next target. This last point is probably due to the 3D view showing all remaining targets in a single view (with only some rotation necessary); in the lists, participants had to scroll to get to the next target. In the traversal, in contrast, the lists of potential neighbors had far fewer entries than the overall list of cells, so the participants did not need to scroll and thus their speed improved. Moreover, the need to rotate the 3D view to traverse all neighbors often led to participants losing orientation, such that they no longer remembered which cells they had looked at already.

+

While this is a problem that was apparent in our pool of participants, the situation may be very different in our envisioned application domain of plant biologists constructing lineage trees. Here, the experts will not look for numbers but instead investigate the potential sister cells based on each cell's overall shape as well as the size and shape of the shared surface between the cells, properties that are essential for making the lineage decision. This means that the plant biologists not only inherently have to focus much more on the 3D view, but they also do not necessarily traverse all neighbors because they can easily reject some candidates based on their shape.
Because we had to use a number associated with the cells as a proxy for the biologists' experience, our participants, in contrast, only focused on this abstract property and thus could more easily rely almost entirely on the list as their main reference point, which in turn likely led to the List condition's performance advantage.

+

### 5.2 Subjective ratings

+

These assumptions are also supported by our participants' qualitative feedback. In particular, they preferred the List technique because they felt it led to a lower mental load, requiring less memorization. Essentially, because they were not experts and did not need to examine the cells' shapes, they turned our envisioned spatial decision into an abstract task. They thus focused on and used the arbitrary order of cells in the List condition. Consequently, our participants also disliked that they had to move back and forth between list and 3D view in the List condition.

+

In the Explosion condition, in contrast, participants liked being able to explode the embryo, to freely explore it, and to have a complete view of and direct access to the cells. The downside of this aspect was the lack of a clear order of the elements that they could follow to traverse all neighbors. Moreover, the needed rotations made participants more likely to lose orientation in the 3D view, and consequently also to forget which of the already visited cells had the highest associated number. Participants had to memorize this intermediate result based on a cell's shape and 3D position, which was much harder for them than memorizing a position or a label in the 1D list. While these aspects made the task more mentally demanding for participants compared to the List condition, experts likely will not suffer from the same problems, as we noted above.
+

Another problem with the Explosion condition was that the discovery phase and the traversal phase needed different view configurations: in the former, participants needed to see all cells of the embryo, while in the latter they needed to focus on only the 1-neighborhood of a single cell. We had specifically ensured that the positions of the cells did not change when switching between the overall and the focused view to maintain spatial continuity; yet this meant that in the Explosion condition participants had to frequently manipulate the view (adjust the zoom factor). In the List condition, in contrast, we automatically centered the view on a newly selected target because people focused on the overall cell list when selecting targets, which led to much less need for view adjustments.

+

### 5.3 Implications

+

One of our main insights is that 3D interaction techniques work best for truly three-dimensional tasks that carry no additional informative tags. When we asked participants to perform a purely 3D action, e. g., to discover colored objects among a set of exploded cells of the embryo, the 3D Explosion technique performed well and our participants used it when they had the choice. In contrast, for tasks like the traversal, which our participants converted into an abstract search task as discussed, the List technique was faster, more accurate, and preferred. As we discussed in Sect. 5.1, for the realistic task in the biology domain the actual sister cell selection is likely much more of a 3D task than our proxy, so we hypothesize that the Explosion technique will be a strong competitor (but this will have to be verified in a separate experiment).

+

We also found that the use of explosion techniques as an interaction metaphor makes it possible to access objects in tightly packed 3D environments, such as for selection in our application.
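The core of such an explosion metaphor is simple to sketch: each cell is displaced away from the global center by a user-controlled factor, which opens gaps between tightly packed objects. The sketch below is a minimal illustration only; the cell centroids and the `spread` parameter are hypothetical and not taken from the study's implementation:

```python
import numpy as np

def explode(centroids, spread):
    """Displace each cell centroid away from the global center by a
    user-controlled spread factor (spread = 0 keeps the original layout)."""
    center = centroids.mean(axis=0)
    return centroids + spread * (centroids - center)

# Hypothetical cell centroids of a tiny "embryo".
cells = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

# Because the mapping is affine, all pairwise distances grow
# by exactly the factor (1 + spread).
exploded = explode(cells, spread=0.5)
```

Mapping a horizontal slider (like the explosion bar discussed above) to `spread` lets users continuously trade off overview (small spread, compact embryo) against access to occluded cells (large spread).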
For discovering target cells, our participants increased the distance between cells and zoomed out to get a clear overview of the embryo and the relative positions of the cells, while for traversal they tended to shorten the distance and zoomed in so that they could examine cells and find a structure to traverse. Our participants also reported that they would freely adjust the distance between cells to get a better overview or to check cell details.

+

Next, the Combination seems to combine the advantages of the individual techniques. While we always showed it last to participants and thus cannot rule out order effects for its performance, participants clearly preferred this type of interface over only the (1D) List or the (3D) Explosion interaction. It allows users to freely choose which technique works best for them, for a given task and dataset, and also allows them to transition to 3D interaction as they progress and as 3D aspects become more important. Nonetheless, even though with the Combination both individual interaction methods were available to participants, constantly switching between 3D view and lists is inconvenient. Participants who preferred to use List chose strategies such as manipulating the objects in the right part of the 3D view, which is placed close to the lists, while others tried to interact directly in the 3D view.

+

While we studied the specific scenario of cell division analysis in botany, our results apply to many other settings in which objects need to be selected in dense environments. For instance, machine assemblies [47] and datasets in brain connectomics [9] share similar properties. In such settings, experts similarly need to be able to select parts with virtually no space in-between, and have to be able to understand spatial and logical relationships between neighbors. We thus believe that our findings can inform work in such fields.

+

### 5.4 Limitations

+

Naturally, our work is not without limitations.
We already pointed out that, while we aimed to replicate the biologists' spatial analysis task as well as possible in our experimental setting, our proxy for "experience" allowed participants to turn the 3D spatial analysis task into an abstract search task, and we have explained the implications of this change in Sect. 5.1. While we plan an empirical validation with experts in the future, we think that our work still sheds valuable light on how we can realize selection and access tasks in tight 3D environments.

+

Beyond this point, the fact that we were required by our IRB to conduct our work via video conferencing may also have affected the outcome. Naturally, participants had different types of equipment (screen resolution and size, PC power, general environment, etc.). An on-site experiment may have resulted in a more controlled environment and procedure. Nonetheless, this spread of environments reflects real-world working conditions, so we do not see this point as a strong limitation. Next, our specific choice of application case and, consequently, study dataset is a unique setting: all cells in the dataset were of roughly the same size and were "well" distributed. Other datasets in other application domains, even if they are densely packed, may have different properties and may thus lead to slightly different selection performance. Yet we believe that our general conclusions still hold. Finally, we only tested manual selection techniques. In the future, however, we foresee the use of machine learning (ML) approaches to support the biologists in establishing the cell lineage and, thus, the interaction requirements will change from manual selection to ML supervision and verification.

+

## 6 CONCLUSION

+

We have advanced our understanding of interaction techniques for the selection of objects in dense 3D environments.
We saw that a list-based selection has advantages when the number of elements is large and when the needed information can be represented in (or "projected" to) lists. We also saw, however, that if the relevant criteria are three-dimensional properties, then an explosion-based selection can have advantages, in particular when the target audience is familiar with orienting themselves in 3D space. A combination of both techniques, ultimately, provides the best of both worlds.

+

## REFERENCES

+

[1] F. Argelaguet and C. Andujar. Efficient 3D pointing selection in cluttered virtual environments. IEEE Computer Graphics and Applications, 29(6):34-43, Nov./Dec. 2009. doi: 10.1109/MCG.2009.117

+

[2] F. Argelaguet and C. Andujar. A survey of 3D object selection techniques for virtual environments. Computers & Graphics, 37(3):121-136, May 2013. doi: 10.1016/j.cag.2012.12.003

+

[3] M. Baker. Statisticians issue warning over misuse of P values. Nature, 531(7593):151, Mar. 2016. doi: 10.1038/nature.2016.19503

+

[4] M. Baloup, T. Pietrzak, and G. Casiez. RayCursor: A 3D pointing facilitation technique based on raycasting. In Proc. CHI, pp. 101:1-101:12. ACM, New York, 2019. doi: 10.1145/3290605.3300331

+

[5] H. Benko and S. Feiner. Balloon Selection: A multi-finger technique for accurate low-fatigue 3D selection. In Proc. 3DUI, pp. 79-86. IEEE Computer Society, Los Alamitos, 2007. doi: 10.1109/3DUI.2007.340778

+

[6] F. Bérard, J. Ip, M. Benovoy, D. El-Shimy, J. R. Blum, and J. R. Cooperstock. Did "Minority Report" get it wrong? Superiority of the mouse over 3D input devices in a 3D placement task. In Proc. INTERACT, pp. 400-414. Springer, Berlin, Heidelberg, Germany, 2009. doi: 10.1007/978-3-642-03658-3_45

+

[7] L. Besançon and P. Dragicevic. The continued prevalence of dichotomous inferences at CHI. In CHI Extended Abstracts, pp. alt14:1-alt14:11. ACM, New York, 2019. doi: 10.1145/3290607.3310432

+

[8] L. Besançon, M. Sereno, L. Yu, M. Ammi, and T. Isenberg.
Hybrid touch/tangible spatial 3D data selection. Computer Graphics Forum, 38(3):553-567, June 2019. doi: 10.1111/cgf.13710

+

[9] J. Beyer, A. Al-Awami, N. Kasthuri, J. W. Lichtman, H. Pfister, and M. Hadwiger. ConnectomeExplorer: Query-guided visual analysis of large volumetric neuroscience data. IEEE Transactions on Visualization and Computer Graphics, 19(12):2868-2877, Dec. 2013. doi: 10.1109/TVCG.2013.142

+

[10] J. Cashion, C. Wingrave, and J. J. LaViola, Jr. Dense and dynamic 3D selection for game-based virtual environments. IEEE Transactions on Visualization and Computer Graphics, 18(4):634-642, Apr. 2012. doi: 10.1109/TVCG.2012.40

+

[11] F. de Chaumont, S. Dallongeville, N. Chenouard, N. Hervé, S. Pop, T. Provoost, V. Meas-Yedid, P. Pankajakshan, T. Lecomte, Y. L. Montagner, T. Lagache, A. Dufour, and J.-C. Olivo-Marin. Icy: An open bioimage informatics platform for extended reproducible research. Nature Methods, 9(7):690-696, July 2012. doi: 10.1038/nmeth.2075

+

[12] M. Choi, D. Sakamoto, and T. Ono. Bubble Gaze Cursor + Bubble Gaze Lens: Applying area cursor technique to eye-gaze interface. In Proc. ETRA. ACM, New York, 2020. doi: 10.1145/3379155.3391322

+

[13] G. De Haan, M. Koutek, and F. H. Post. IntenSelect: Using dynamic object rating for assisting 3D object selection. In Proc. EGVE, pp. 201-209. Eurographics Assoc., Goslar, Germany, 2005. doi: 10.2312/EGVE/IPT_EGVE2005/201-209

+

[14] H. Dehmeshki and W. Stuerzlinger. Intelligent mouse-based object group selection. In Proc. Smart Graphics, pp. 33-44. Springer, Berlin, Heidelberg, 2008. doi: 10.1007/978-3-540-85412-8_4

+

[15] H. Dehmeshki and W. Stuerzlinger. GPSel: A gestural perceptual-based path selection technique. In Proc. Smart Graphics, pp. 243-252. Springer, Berlin, Heidelberg, 2009. doi: 10.1007/978-3-642-02115-2_21

+

[16] J. Diepstraten, D. Weiskopf, and T. Ertl. Transparency in interactive technical illustrations. Computer Graphics Forum, 21(3):317-325, Sept. 2002.
doi: 10.1111/1467-8659.t01-1-00591

+

[17] P. Dragicevic. Fair statistical communication in HCI. In J. Robertson and M. Kaptein, eds., Modern Statistical Methods for HCI, chap. 13, pp. 291-330. Springer International Publishing, Cham, Switzerland, 2016. doi: 10.1007/978-3-319-26633-6_13

+

[18] N. Elmqvist, U. Assarsson, and P. Tsigas. Employing dynamic transparency for 3D occlusion management: Design issues and evaluation. In Proc. INTERACT, pp. 532-545. Springer, Berlin, Heidelberg, 2007. doi: 10.1007/978-3-540-74796-3_54

+

[19] N. Elmqvist, P. Dragicevic, and J.-D. Fekete. Rolling the Dice: Multidimensional visual exploration using scatterplot matrix navigation. IEEE Transactions on Visualization and Computer Graphics, 14(6):1539-1548, Nov./Dec. 2008. doi: 10.1109/TVCG.2008.153

+

[20] N. Elmqvist and P. Tsigas. A taxonomy of 3D occlusion management for visualization. IEEE Transactions on Visualization and Computer Graphics, 14(5):1095-1109, June 2008. doi: 10.1109/TVCG.2008.59

+

[21] N. Elmqvist and M. E. Tudoreanu. Occlusion management in immersive and desktop 3D virtual environments: Theory and evaluation. The International Journal of Virtual Reality, 6(2):21-32, Mar. 2007.

+

[22] S. Fukatsu, Y. Kitamura, T. Masaki, and F. Kishino. Intuitive control of "Bird's Eye" overview images for navigation in an enormous virtual environment. In Proc. VRST, pp. 67-76. ACM, New York, 1998. doi: 10.1145/293701.293710

+

[23] G. W. Furnas. Generalized fisheye views. ACM SIGCHI Bulletin, 17(4):16-23, Apr. 1986. doi: 10.1145/22339.22342

+

[24] T. Grossman and R. Balakrishnan. The design and evaluation of selection techniques for 3D volumetric displays. In Proc. UIST, pp. 3-12. ACM, New York, 2006. doi: 10.1145/1166253.1166257

+

[25] K. Hinckley, R. Pausch, J. C. Goble, and N. F. Kassell. Passive real-world interface props for neurosurgical visualization. In Proc. CHI, pp. 452-458. ACM, New York, 1994. doi: 10.1145/191666.191821

+

[26] S. Houde.
Iterative design of an interface for easy 3-D direct manipulation. In Proc. CHI, pp. 135-142. ACM, New York, 1992. doi: 10.1145/142750.142772

+

[27] K. N. Kirby and D. Gerlanc. BootES: An R package for bootstrap confidence intervals on effect sizes. Behavior Research Methods, 45(4):905-927, Mar. 2013. doi: 10.3758/s13428-013-0330-5

+

[28] R. Kopper, F. Bacim, and D. A. Bowman. Rapid and accurate 3D selection by progressive refinement. In Proc. 3DUI, pp. 67-74. IEEE Computer Society, Los Alamitos, 2011. doi: 10.1109/3DUI.2011.5759219

+

[29] M. Krzywinski and N. Altman. Points of significance: Error bars. Nature Methods, 10(10):921-922, Oct. 2013. doi: 10.1038/nmeth.2659

+

[30] J. J. LaViola, Jr., E. Kruijff, R. P. McMahan, D. A. Bowman, and I. Poupyrev. 3D User Interfaces: Theory and Practice. Addison-Wesley Professional, Boston, 2nd ed., 2017.

+

[31] B. Leggio, J. Laussu, A. Carlier, C. Godin, P. Lemaire, and E. Faure. MorphoNet: An interactive online morphological browser to explore complex multi-scale data. Nature Communications, 10(1):2812:1-2812:8, June 2019. doi: 10.1038/s41467-019-10668-1

+

[32] W. Li, M. Agrawala, B. Curless, and D. Salesin. Automated generation of interactive 3D exploded view diagrams. ACM Transactions on Graphics, 27(3):101:1-101:7, Aug. 2008. doi: 10.1145/1360612.1360700

+

[33] J. Liang and M. Green. JDCAD: A highly interactive 3D modeling system. Computers & Graphics, 18(4):499-506, July/Aug. 1994. doi: 10.1016/0097-8493(94)90062-0

+

[34] Y. Lu, C. Yu, and Y. Shi. Investigating bubble mechanism for ray-casting to improve 3D target acquisition in virtual reality. In Proc. VR, pp. 35-43. IEEE Computer Society, Los Alamitos, 2020. doi: 10.1109/VR46266.2020.00021

+

[35] M. R. Mine. Virtual environment interaction techniques. Technical Report 95-018, Univ. of North Carolina at Chapel Hill, USA, Jan. 1995.

+

[36] M. R. Morris, A. Danielescu, S. Drucker, D. Fisher, B. Lee, m. c. schraefel, and J. O. Wobbrock.
Reducing legacy bias in gesture elicitation studies. Interactions, 21(3):40-45, May/June 2014. doi: 10.1145/2591689

+

[37] J. Moukhtar, A. Trubuil, K. Belcram, D. Legland, Z. Khadir, A. Urbain, J.-C. Palauqui, and P. Andrey. Cell geometry determines symmetric and asymmetric division plane selection in Arabidopsis early embryos. PLOS Computational Biology, 15(2):e1006771:1-e1006771:27, Feb. 2019. doi: 10.1371/journal.pcbi.1006771

+

[38] A. Olwal, H. Benko, and S. Feiner. SenseShapes: Using statistical geometry for object selection in a multimodal augmented reality. In Proc. ISMAR, pp. 300-301. IEEE Computer Society, Los Alamitos, 2003. doi: 10.1109/ISMAR.2003.1240730

+

[39] I. Poupyrev, M. Billinghurst, S. Weghorst, and T. Ichikawa. The Go-Go interaction technique: Non-linear mapping for direct manipulation in VR. In Proc. UIST, pp. 79-80. ACM, New York, 1996. doi: 10.1145/237091.237102

+

[40] I. Poupyrev, T. Ichikawa, S. Weghorst, and M. Billinghurst. Egocentric object manipulation in virtual environments: Empirical evaluation of interaction techniques. Computer Graphics Forum, 17(3):41-52, Sept. 1998. doi: 10.1111/1467-8659.00252

+

[41] H. Ro, S. Chae, I. Kim, J. Byun, Y. Yang, Y. Park, and T. Han. A dynamic depth-variable ray-casting interface for object manipulation in AR environments. In Proc. SMC, pp. 2873-2878. IEEE Computer Society, Los Alamitos, 2017. doi: 10.1109/SMC.2017.8123063

+

[42] A. Rosset, L. Spadola, and O. Ratib. OsiriX: An open-source software for navigating in multidimensional DICOM images. Journal of Digital Imaging, 17(3):205-216, Sept. 2004. doi: 10.1007/s10278-004-1014-6

+

[43] J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, J.-Y. Tinevez, D. J. White, V. Hartenstein, K. Eliceiri, P. Tomancak, and A. Cardona. Fiji - An open-source platform for biological-image analysis. Nature Methods, 9(7):676-682, July 2012.
doi: 10.1038/nmeth.2019

+

[44] A. Steed. Towards a general model for selection in virtual environments. In Proc. VR, pp. 103-110. IEEE Computer Society, Los Alamitos, 2006. doi: 10.1109/VR.2006.134

+

[45] F. Steinicke, T. Ropinski, and K. Hinrichs. Object selection in virtual environments using an improved virtual pointer metaphor. In K. Wojciechowski, B. Smolka, H. Palus, R. S. Kozera, W. Skarbek, and L. Noakes, eds., Computer Vision and Graphics, vol. 32 of Computational Imaging and Vision, pp. 320-326. Springer, Dordrecht, 2006. doi: 10.1007/1-4020-4179-9_46

+

[46] B. Szigeti, P. Gleeson, M. Vella, S. Khayrulin, A. Palyanov, J. Hokanson, M. Currie, M. Cantarelli, G. Idili, and S. Larson. OpenWorm: An open-science approach to modeling Caenorhabditis elegans. Frontiers in Computational Neuroscience, 8:137:1-137:7, Nov. 2014. doi: 10.3389/fncom.2014.00137

+

[47] M. Tatzgern, D. Kalkofen, and D. Schmalstieg. Multi-perspective compact explosion diagrams. Computers & Graphics, 35(1):135-147, Feb. 2011. doi: 10.1016/j.cag.2010.11.005

+

[48] M. Veit, A. Capobianco, and D. Bechmann. Influence of degrees of freedom's manipulation on performances during orientation tasks in virtual reality environments. In Proc. VRST, pp. 51-58. ACM, New York, 2009. doi: 10.1145/1643928.1643942

+

[49] G. J. Wills. Selection: 524,288 ways to say "This is interesting". In Proc. InfoVis, pp. 54-60. IEEE Computer Society, Los Alamitos, 1996. doi: 10.1109/INFVIS.1996.559216

+

[50] L. Yu, K. Efstathiou, P. Isenberg, and T. Isenberg. Efficient structure-aware selection techniques for 3D point cloud visualizations with 2 DOF input. IEEE Transactions on Visualization and Computer Graphics, 18(12):2245-2254, Dec. 2012. doi: 10.1109/TVCG.2012.217

+

[51] L. Yu, K. Efstathiou, P. Isenberg, and T. Isenberg. CAST: Effective and efficient user interaction for context-aware selection in 3D particle clouds.
IEEE Transactions on Visualization and Computer Graphics, 22(1):886-895, Jan. 2016. doi: 10.1109/TVCG.2015.2467202 \ No newline at end of file diff --git a/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/uYX0tEWUmTO/Initial_manuscript_tex/Initial_manuscript.tex b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/uYX0tEWUmTO/Initial_manuscript_tex/Initial_manuscript.tex new file mode 100644 index 0000000000000000000000000000000000000000..4b07a972268313ce75c36d8c2b2983c4346dba22 --- /dev/null +++ b/papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/uYX0tEWUmTO/Initial_manuscript_tex/Initial_manuscript.tex @@ -0,0 +1,237 @@ +§ COMPARING SELECTION TECHNIQUES FOR TIGHTLY PACKED 3D OBJECTS

+

Category: Research

+

§ ABSTRACT

+

We report on a controlled user study in which we investigated and compared three selection techniques for discovering and traversing 3D objects in densely packed environments. We apply this to cell division history marking as required by plant biologists who study the development of embryos, for whom existing selection techniques do not work due to the occlusion and tight packing of the cells to be selected. We specifically compared a list-based technique with an additional 3D view, a 3D selection technique that relies on an exploded view, and a combination of both techniques. Our results indicate that the combination was most preferred. List selection has advantages for traversing cells, while we did not find differences for surface cells. Our participants appreciated the combination because it supports discovering 3D objects with the 3D explosion technique, while using the lists to traverse 3D cells.
+

Index Terms: H.5.2 [Information Interfaces and Presentation]: User Interfaces—Interaction Styles

+

§ 1 INTRODUCTION

+

Selection as an interaction technique is fundamental for data analysis and visualization [49]. In 3D space, selection requires users to find and point out one or more 3D objects (or subspaces), and a sizable amount of research has been carried out on different 3D selection techniques [1, 2, 5, 8, 20]. Among them, ray-casting [1, 35, 41] and ray-pointing [1, 4, 38] for object selection as well as lasso techniques [50, 51] for point clouds or volumetric data are common. These existing techniques reach their limits, however, when data objects are tightly packed and no space whatsoever exists between adjacent data objects, so that internal structures are inaccessible.

+

Such selection problems in dense environments arise in many scientific domains where researchers deal with data that originates from sampling properties in 3D space. We are motivated, in particular, by botany, where cells are densely packed in captured data, virtually without any room between them and with half or more of them being enclosed [20], such as in a confocal microscopy dataset of a plant embryo's cellular structure (Fig. 1). With such data, botanists explore the development of plant embryos based on their cellular structure. Using a segmented dataset, they reconstruct the history of the embryo's cellular development [37]. This process requires them to select each cell, one by one, examine its immediate neighborhood, select each potential candidate in the neighborhood to check the shared surface and relative position, and then decide on a likely sister cell that originated from the same parent as the target cell. This process is continued for all cells, and previous assignments are potentially revised if needed.
The cells are naturally tightly packed, so we ask how to effectively select 3D objects in such spaces, in particular for realistic datasets with 200 cells or more.

Currently, botanists use several tools to study cell division, but none of them provides efficient selection techniques for 3D objects in densely packed environments; they are unable, e.g., to filter cells in a view for easier selection or to support marking based on 3D data rather than just 2D (TIFF) images. Researchers currently mark the cells manually, starting with the cells for which it is easiest to find the respective sisters. From the set of 2D images, they then identify all neighbors and examine their shapes and that of the surface the two cells share. Based on their past experience, they then decide on the most likely sister for the target cell.

We thus worked with them to understand their needs and to support them with a new approach for interactively deriving the cell division tree. To better investigate the effectiveness of the needed selection techniques in this specific densely packed data scenario, we divided cell selection into two parts: discovery and traversal. Discovery means finding a specific cell to assign within the whole embryo, while traversal refers to picking a specific range of cells in order. With this definition, we can describe the cell division process as repeatedly discovering target cells and traversing all their neighbors. We then explored three selection techniques: list selection (List), explosion selection (Explosion), and a combination of both (Combination). List provides traditional lists to indirectly select cells, while Explosion displays an exploded view of the embryo and allows users to directly select cells. Combination supports both techniques in one interface. We were also interested in how efficient these techniques are when selecting cells in different positions (on the surface and enclosed).
We thus designed a user study to compare the three techniques and the two cell positions. We measured task completion times, assignment accuracy, and clicking ratios (click counts per neighbor). We also gathered subjective feedback from participants, such as their interaction strategies and preferences.

Figure 1: Plant embryo dataset with 201 cells (87 "occluded" cells): (a) a segmented cross section from confocal microscopy, (b) the 3D model, and (c) a part of the desired cell lineage tree, the botanists' goal in studying the embryo's development.

Our results show that most participants favored the Combination technique: they preferred to control the cell distance, often discovering targets in the 3D view, and then using the lists to traverse the neighbors. List performed better than Explosion when assigning occluded cells, while there was no clear performance difference between these two techniques for cells on the surface. Based on our results on the techniques' performance and the participants' feedback about the interaction, we derived suggestions for future 3D selection technique design and discuss current limitations. In summary, we contribute:

* a controlled experiment to study selection in dense 3D datasets with traditional input devices, whose results shed light on the performance of three selection techniques for two cell positions (on the surface or occluded),

* an analysis of participants' preferred strategies for List, Explosion, and Combination as well as for the two involved steps (discovery and traversal) of cell selection, and

* a discussion of selection techniques for dense 3D environments.

§ 2 RELATED WORK

The actual tasks we employed in our work on selection techniques focus on object discovery and traversal, rather than simple picking. Below we thus first review related work on discovery and access techniques for 3D objects.
We then discuss general interaction techniques besides selection for dense datasets, especially for desktop-based interaction. We end this section with a small survey of cell visualization applications, our application domain.

§ 2.1 DISCOVERY AND ACCESS TECHNIQUES

3D discovery is essential for finding the target cells among numerous cells. It needs to be able to deal with occlusion, yet should maintain the spatial relationship of an object and its context [20]. Elmqvist and Tsigas [20] summarized a range of techniques for discovering objects in dense datasets in virtual environments. They identified five design patterns: multiple viewports, virtual X-ray tools, tour planners, volumetric probes, and projection distorters. One of our approaches (explosion selection) falls into the last of these categories, while our list selection seems to form a separate category as it uses an abstract representation of the elements.

Although ways of dealing with the occlusion problem exist, direct interactions such as discovery remain limited, and completely resolving occlusion usually requires a combination of multiple techniques [2]. To ease discovery, researchers have also used object highlighting or dimming of the remaining objects. In the past, space distortion [21-23] and visually distinguishing the objects in a region [45] have been extensively studied for object highlighting, while object de-accentuation has been achieved with transparency [16, 18, 21] and selective object hiding [21]. These techniques, however, have not been fully tested for discovering a large number of objects such as in our case, because such datasets have a high need for orientation support and an extreme lack of visual cues. Here, our application has an advantage: it is guaranteed that the sister cell, at any hierarchy level, is next to its sibling.
Multiple techniques have also been studied for precise access [20], and the spatial occlusion cases are most relevant for us. In 3D environments and, especially, VR, researchers have investigated dedicated 3D selection tools to address the occlusion issue [2]. The most common techniques are ray-casting [30, 34, 35], ray-pointing [38], the bubble cursor [12, 34], sphere-casting refined by QUAD-menu (SQUAD) [28], and the virtual hand [39, 40]. Among these, ray-casting and SQUAD were claimed to be suitable for dense objects [10], and numerous studies have explored ways to improve these two techniques. For example, JDCAD [33] allowed people to use cone selection to freely create the selection volume, which avoided ray-casting's drawback of using an additional 1D input to select 3D objects. Grossman et al. [24] proposed a ray cursor that provided all the intersected targets and allowed users to choose among them. Later, Baloup et al. [4] developed RayCursor to automatically highlight the closest target and to support manually switching the selection of intersected objects. As for SQUAD, to offset the cumbersome steps in accessing dense objects, Cashion et al. [10] added a dimension called Expand to enable the sphere to zoom. Furthermore, to help users accurately select an object they see, researchers have explored advanced access techniques that estimate which object users likely intend to select. For example, Haan et al.'s [13] IntenSelect technique dynamically calculated a score for objects inside a set volume and allowed people to choose from the objects with the highest scores. Similarly, Smart Ray [24] continuously calculated and updated object weights to help users determine which object to select when multiple targets were intersected. All these techniques are efficient in discovering and accessing objects in sparse datasets, yet they are not suitable for highly dense environments with no space between possible selection targets.
Moreover, in practical scenarios people typically know which target to select, while in our cell division application the biologists make the decision by referring to the shared surface between the two cells and thus have to traverse a number of potential targets to assess their suitability. Also, the learning effort for new techniques could be high.

§ 2.2 INTERACTION TECHNIQUES FOR DENSE DATASETS

In virtual 3D cell manipulations, biologists need to precisely select objects from dense sets, without knowing in advance which objects may need to be selected. Previous studies [36] have demonstrated that users tend to stick with familiar mouse interaction. In addition, past work [6, 48] has shown that low-DoF input devices such as mouse and keyboard can easily achieve such tasks with high accuracy. These findings supported our decision to study cell division with familiar input devices. Nonetheless, in virtual 3D environments, especially in VR, discovering an enclosed object can consume more time [2], even though selection itself is easier due to better depth perception in stereoscopy. In our dense embryo cell scenario we thus relied on a traditional projected-3D environment with mouse and keyboard input to accommodate our domain's need for high selection accuracy.

Researchers have also explored various methods for mouse and keyboard input to manipulate objects. For example, Houde [26] proposed the idea of creating a handle box around the 3D object and, similarly, modern 3D modeling applications such as Blender and Rhino allow users to individually transform 3D objects with mouse and keyboard. Applications also provide layers for organizing the objects and for selecting multiple items from a list. Even though in some controlled environments the object layout can be rearranged to avoid occlusion [44], in our case the cells' spatial relationships must not be changed, to provide our users with a faithful representation.
Past work on selection in dense datasets has focused on structure-aware approaches (e.g., [14, 15, 19, 50, 51]). Unlike particle or volumetric data, which contains huge numbers of points or a sampled data grid without explicit borders, our embryo cell data has dedicated cells that could be picked, yet they are packed so tightly against each other that many are not accessible for traditional picking. Lasso-based selection is also not appropriate because we do not need to enclose regions but need to match two dedicated objects as sister cells. We thus instead require interaction techniques that preserve the respective positioning at least locally and allow us to access all cells in an efficient and effective way.

§ 2.3 CELL VISUALIZATION

Cell data visualization has been found to be useful in helping biologists gain knowledge about cell development. Various academic tools (e.g., OsiriX [42], Fiji ImageJ [43], OpenWorm [46], and Icy [11]) and commercial software (e.g., Avizo, Imaris) provide advanced live-imaging techniques and computational approaches that allow users to clearly observe and interact with their data. The interaction in these tools, however, remains simple: mouse-clicking the cells on the surface of an embryo provides the users with access to specific variables and actions. For example, MorphoNet [31] uses Unity to visualize diverse types of cell data on a website, allowing users to visually explore cells. Users left-click to target a cell, and can rotate and zoom using specific keyboard combinations. This interaction process is smooth for a few cells, while it gets slow and tedious for large datasets (i.e., with >100 cells). Though the software can hide and show cells, it only provides access to the current outside of the embryo. No single tool among the mentioned software is applicable to cell division annotation, so we worked to develop and study dedicated selection techniques for the entire embryo.
§ 3 STUDY DESIGN

To understand how people can best select objects in densely packed 3D settings (in our application domain: discovering target cells and traversing their neighbors) and, ultimately, to process large datasets using these interaction techniques, we designed the experiment we describe below. We pre-registered this study (osf.io/yze5n/?view_only=19925d8cfed240f9bd11c24e5bf98995) and it was approved by our institution's ethical review board.

§ 3.1 INTERACTION TECHNIQUES

We chose all techniques based on previous related work and on implementations biologists currently use. Following our decision to focus on desktop settings, an obvious interaction technique to select from a set of segmented objects is to use a list widget (Fig. 2(a)). Participants could discover the target cells from the list only. It has the advantage of mapping the objects distributed in 3D space onto a single dimension, for a given order in the set. Naturally, there is no such mapping that preserves the objects' original 3D locations, but in our use case researchers need to access all of the cells from the set eventually. Moreover, this interaction also lends itself easily to the task of marking the cell division history, as we can algorithmically extract the potential sister cells of a selected target from the segmented dataset and show them in another list widget. For each item in the list, we show only a name because, in the real scenario, biologists refer to such names. In addition, we did not include additional data since biologists evaluate the shapes and neighborhoods of cells in the 3D view rather than making decisions based on numeric cell property values such as a shared surface area.

Figure 2: Three main interaction targets for the techniques compared in the study: (a) List, (b) 3D Explosion, and (c) Combination selection. Target cells are marked in orange and selected cells are red. In all three cases the 3D view was visible to the participants.
Figure 3: The focused view of a target cell and the associated number shown near the neighbor cell's surface (the red cell is the target cell and the yellow cell is the neighbor cell with its associated number).

Nonetheless, the 3D location and 3D shape of the respective cells do play a role, both for the initial target selection (as researchers tend to solve the easy cases first) and for the decision on the sister cell (by inspecting the geometry of the shared surface). We were thus also interested in the performance of selection techniques directly in the projected 3D view. We solved the inherent object density and occlusion issues by employing 3D explosion techniques [32, 47]. Using this approach we created additional space between the cell objects, both for the initial selection of a target cell in the embryo (e.g., Fig. 2(b)) and for the examination and, ultimately, selection of the sister cells for this target (e.g., Fig. 3).

Another fundamental approach to exploring the inside of 3D objects or volumetric datasets in visualization is the use of cutting planes (e.g., [25]). We also explored this technique as a basis for exploration and selection as it conceptually relates to the slices of the confocal microscopy approach in our application domain. With this technique, researchers would be able to move and orient a cutting plane freely in 3D space, and we would then show the intersected cells in an unprojected slice view where they could be clicked for selection. Pilot tests showed, however, that this approach was not promising: it was difficult to reason from the intersected cells to their correct 3D shapes, and correct selections took a long time, so we did not pursue this technique further in our experiment.

Instead, we also merged the first two techniques into a Combination technique in which participants had the choice between using List and Explosion selection.
Moreover, in all techniques, including List selection, we showed the 3D projection of the embryo or of a target cell's direct environment, as our collaborating biologists always base the decision of which two cells are sisters on the shape and size of their interface (i.e., the shared surface between the two cells). We thus also used an explosion representation for the List selection technique, to guarantee that our participants could observe the shared surface. In the Explosion and Combination techniques, however, we allowed users to freely adjust the explosion degree and to control the amount of space they needed for navigating in 3D space.

§ 3.2 TASKS

With these interaction techniques we aimed to support the practical task of deriving the cell lineage for an entire embryo. We thus modeled the tasks in our experiment based on the approach our collaborating experts (three plant biologists, all with more than 20 years of professional experience) take to derive the cell division history, using the tools described in Sect. 2. We followed the same process in our experiment: participants were first asked to select a non-marked target cell from the embryo. We then showed them this cell's immediate neighborhood in the focused view (Fig. 3), both as a 3D view and, in the case of the List and Combination techniques, as a list, and then asked them to select the correct cell based on which cell is most likely the sister of the target.

This approach would naturally limit us to participants with years of experience in plant biology cell lineage analysis and to the cell division scenario only. To circumvent these restrictions, we implemented a proxy for the biologists' experience: as we show a target cell's neighborhood, we asked participants to select each potential neighbor, after which we showed a pre-defined "likelihood" (an integer ∈ [1, 99]) of it being the correct sister cell.
We chose this number randomly and independently of the specific situation because we were interested in general feedback on selection in dense environments with non-expert participants. We displayed this number in the 3D environment hidden from the current view, to force participants to use 3D navigation (i.e., rotation) to reveal the number; this interaction mimics the 3D evaluation of the interface between two cells that the biologists would do. Participants would then need to find the cell with the highest number to make a correct selection. In addition, this highest number was not necessarily 99, so participants had to examine each potential neighbor at least once.

§ 3.3 DATASETS

We used a real embryo dataset provided by our collaborators, which contained 201 cells. We chose this dataset due to its realistic size. Experimental time limits, however, meant that participants could not assign sisters for all cells; we thus created three sets of target cells for them to mark, each with 10 cells. We were interested in the influence of the cell position (surface vs. occluded), so we created all three sets with 5 cells on the embryo's surface and 5 cells that were enclosed by other cells. To reduce learning effects, the three sets did not share a single cell, nor did they share any of the respective neighbors. Each set plus its 1-neighborhood (i.e., direct neighbors) was thus completely distinct from the other sets plus their respective 1-neighborhoods, which guaranteed that any past assignment (even if done incorrectly) would not affect any future marking. Otherwise, if two target cells had shared a potential neighbor, then a participant marking this neighbor as a sister of either target would mean that the other target lost a sister candidate.

Figure 4: Study interface (combination selection shown).
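To illustrate the likelihood proxy described in Sect. 3.2, the following minimal Python sketch (our own illustration, not the study software; all names are hypothetical) assigns each neighbor a random score in [1, 99] and defines the neighbor with the highest score as the correct sister:

```python
import random

def assign_likelihoods(neighbors, seed=None):
    """Assign each neighbor a distinct pseudo-random 'likelihood' in [1, 99].

    The neighbor with the highest score is defined as the correct sister;
    the maximum is not necessarily 99, so a participant has to inspect
    every neighbor at least once before deciding.
    """
    rng = random.Random(seed)
    # sample() guarantees distinct scores, so "the highest" is unambiguous
    scores = dict(zip(neighbors, rng.sample(range(1, 100), len(neighbors))))
    sister = max(scores, key=scores.get)
    return scores, sister

scores, sister = assign_likelihoods(["n1", "n2", "n3", "n4"], seed=7)
```

Drawing distinct scores avoids ties, which mirrors the requirement that one neighbor can be identified as having the single highest number.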
§ 3.4 INTERFACE

In all three conditions, the interface contained three main parts: an instruction panel, the 3D view, and an operation panel (see Fig. 4). The operation panel in all techniques contained two buttons: one could be used to automatically re-center the whole embryo in the 3D view, in case participants got lost, and the other enabled participants to jump to the next task. In List and Combination, this panel included a global list of all cells in the left list view and a focused neighbor list, showing only the direct neighbors of a selected target cell. We scaled the interface to completely fill the screen of each participant's computer, with the ratio of each part's size to the interface size being fixed. In the instruction panel, we displayed the study progress and a brief introduction to the interaction in the task. We placed the 3D view on the left, while we showed the operation panel on the right. We designed the relative sizes to indicate that the 3D view was the main reference, and such that it was approximately square. Below the 3D view, we placed a horizontal bar widget to allow participants to control the explosion distance between the cells. We placed the button to mark two cells as sisters at the top and in the center, roughly in the middle between the 3D view and the operation panel, such that the distances to travel to the button from the 3D view or the lists were about the same. We also allowed participants to assign cells by pressing the space bar on the keyboard, to further reduce the impact of the actual marking action on completion time.

To indicate cells from the sets to be marked, we highlighted them in the list via orange icons for List and rendered the cells' 3D shapes in orange in the 3D view for Explosion. In Combination mode, we used both forms of highlighting.
When participants clicked on a cell either in the 3D view or in the lists, we highlighted the corresponding item in the lists and rendered the cell in the 3D view in red (for target cells) or yellow (for neighbor cells), as shown in Fig. 4. Finally, we modeled the interaction in the 3D view after commercial 3D modeling software like Rhinoceros or Blender. Participants could hold the right mouse button to rotate, scroll the wheel to scale, and hold the wheel to pan. To distinguish rotating from clicking, the left mouse button in the 3D view could be used to single- and double-click cells.

§ 3.5 MEASURES

We assigned a unique participant number to every participant and recorded all data based on this number to guarantee participant anonymity. For all trials, we recorded total completion times, accuracy, and every action participants took, and we tracked the real-time position of the camera. We started the timer when the program had loaded the visualization for each trial and stopped it once the participant triggered the signal of assigning the cell sister (button click or keyboard press). We asked participants to activate the assignment as soon as they had found the sister. After choosing the sister for the target, these two cells would disappear from the 3D view and the corresponding items in the lists would also be disabled. We then instructed participants to continue with the next assignment and we restarted the timer. We measured the total trial completion time and computed accuracy as the ratio of correct assignments among all assignments. Aside from completion time and accuracy, we also recorded the cell selection ratio (click counts divided by the neighbor count) to better understand the efficiency of the different techniques. A more efficient selection technique is likely to have a lower clicking ratio, one that is closer to 1.
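The clicking ratio just defined can be computed as in the following sketch (a minimal illustration with hypothetical names, not the study software):

```python
def clicking_ratio(click_log, neighbors):
    """Selection clicks on neighbor cells divided by the neighbor count.

    A ratio of 1 means every neighbor was clicked exactly once (the ideal
    case); repeated clicks on the same cell push the ratio above 1.
    """
    clicks_on_neighbors = sum(1 for cell in click_log if cell in neighbors)
    return clicks_on_neighbors / len(neighbors)

# Hypothetical trial: four neighbors, "n2" clicked twice -> ratio 1.25
log = ["n1", "n2", "n2", "n3", "n4"]
ratio = clicking_ratio(log, {"n1", "n2", "n3", "n4"})  # 1.25
```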
After participants had finished all tasks, the examiner conducted a post-study semi-structured interview, focusing specifically on the following questions: Q1: Sort the three techniques by preference; Q2: What strategies did you use in doing the three tasks?; and Q3: Do you have any other comments on the interaction?

§ 3.6 PARTICIPANTS

As our goal was to generally understand object selection in dense datasets and to provide recommendations also for non-botany scenarios, we targeted non-expert participants. We recruited 24 people in total via social networks and our local university's mailing list (8 females, 16 males; 24-31 years old, with a mean age of 26.96 years). All participants had at least a master's degree, were right-handed, and were well trained in the use of mouse and keyboard interaction. None of them was color deficient. Twelve of them had previous experience in 3D manipulation, including playing 3D video games, and none of them had knowledge about cell division beforehand. The latter aspect is important as it suggests that all participants made their assignments based only on the number we showed, rather than on previous knowledge of cell division patterns.

§ 3.7 PROCEDURE

We conducted the experiment via remote video calls due to the limitations that arose from the Covid-19 pandemic for our research environment and for the participants. We minimized the remoteness effects by checking in advance whether every participant could smoothly conduct the experiment with their preferred devices. We first explained to participants the purpose of our study, asked them to fill in basic demographic information, and asked them to sign a consent form if they agreed to participate. Because we conducted the study online, we asked those participants who preferred not to install our experimental software themselves to use a dedicated remote interaction tool that allowed them to remotely control the experimenter's computer.
The others downloaded and installed the software in advance and shared their screen while they communicated with the researcher via video conferencing.

We divided the experiment into three blocks, one for each technique. Each block began with a non-timed training session in which the experimenter first explained the task using written instructions in the interface and a study script, and then asked participants to try their best to traverse all the neighbors of a target cell and to find the correct answer as quickly as possible. Before proceeding to the main task, the experimenter ensured that participants understood the task and were able to conduct the tasks correctly and independently. After they finished all tasks, we conducted the mentioned post-study interview to explore participants' strategies and individual experiences.

Our first objective with the experiment was to compare the List and Explosion techniques. We thus only presented these two techniques in the first two study blocks. We counter-balanced the order of both techniques to reduce order effects. Our second objective was to assess how participants would interact when having the choice of using the Combination technique, after having experienced the List and Explosion techniques separately. In the third block we thus always presented the Combination technique to participants. In addition, we were interested in the effect of occluded vs. surface cells, so we alternated between these types and also counter-balanced which type a participant would see first. We did not expect an effect of the specific order of cells in the list view, so we always used the same order (by name) for all participants. In the List and Explosion tasks, we showed the next target cell in orange after participants had finished the previous assignment, while we marked all target cells at the start of a Combination task to explore in which sequence participants would assign them.
The order of the specific cell subsets may play a role, so we counter-balanced the order of the three subsets. In total, we thus had a 2 techniques × 2 cell types × 3 data subsets design, resulting in 12 combinations, and each possible combination was experienced by two participants. We used 10 trials per technique and the resulting experiment lasted about one hour per participant.

Figure 5: Completion time (absolute mean time) for different numbers of cell neighbors in seconds: (a) overall time, (b) List selection, (c) Explosion selection, and (d) Combination selection.

Figure 6: Completion time (absolute mean time) in seconds (List in yellow and Explosion in red): (a) the overall results, (b) selection of occluded cells, and (c) selection of surface cells.

§ 4 RESULTS

We now present our experimental results on completion time, accuracy, and clicking ratio for the two selection techniques List and Explosion. We then individually examine the use of Combination, which we cannot analyze together with the other techniques due to potential order effects. We also compared the performance of the different techniques in assigning cells from the two positions (on the surface or occluded). Cells on the surface (surface cells) typically have fewer neighbors and clearer layers, while enclosed cells (occluded cells) are hidden entirely from an outside view. We also discuss our participants' strategies and subjective feedback.

We gathered a total of 720 trials (24 participants × 3 tasks × 10 trials). Recent recommendations from the statistics community led us to analyze the results using estimation techniques with confidence intervals (CIs) and effect sizes [7, 17, 29], instead of a traditional analysis based on p-values [3], to avoid dichotomous decisions.
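A common realization of such a CI-based estimation analysis is the percentile bootstrap, sketched below for the mean and for a pair-wise List/Explosion ratio (our own minimal illustration with made-up numbers and hypothetical names; the exact bootstrap procedure used in the study may differ):

```python
import random

def bootstrap_ci(samples, n_boot=10_000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for the mean."""
    rng = random.Random(seed)
    n = len(samples)
    # resample with replacement, collect the mean of each resample
    means = sorted(sum(rng.choice(samples) for _ in range(n)) / n
                   for _ in range(n_boot))
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return sum(samples) / n, (lo, hi)

# Made-up per-participant mean completion times in seconds:
list_times = [55.2, 70.1, 63.4, 58.8, 61.0, 72.5]
expl_times = [60.3, 75.8, 69.9, 64.1, 66.2, 80.0]

mean_list, ci_list = bootstrap_ci(list_times)
# Pair-wise List/Explosion ratios, bootstrapped the same way; a CI lying
# entirely below 1 would indicate that List was consistently faster:
ratios = [l / e for l, e in zip(list_times, expl_times)]
mean_ratio, ci_ratio = bootstrap_ci(ratios)
```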
We did not find all measurements to be normally distributed, so we used bootstrap CIs [27] to analyze completion time, accuracy, and clicking ratio. We visualized our output distributions to increase the transparency of our reporting.

§ 4.1 COMPLETION TIME

We can naturally assume an impact of the neighbor count on completion time, and we indeed observed an approximately linear relationship, both globally for all tasks (Fig. 5(a)) and for the individual tasks (Fig. 5(b)-(d)). The mean neighbor count per dataset, however, was approximately similar (10.4 vs. 10.1 vs. 10.8). Moreover, each combination of task and dataset was seen by the same number of participants (fully counter-balanced), so this relationship does not play a role in our remaining global analysis of completion times.

Techniques. In Fig. 6 we present the absolute mean values of time in seconds for each technique. With List, the average time is 63.81 s (CI [56.25 s, 74.82 s]), while with Explosion, the average time for one target cell is 69.75 s (CI [60.64 s, 80.26 s]). Since the CIs overlap a lot, to better demonstrate the difference in completion time, we checked the pair-wise ratio for these two techniques (see Fig. 7). The ratio for List/Explosion is 0.91 (CI [0.86, 1.01]). The upper CI bound of List/Explosion is 1.01, close to but above 1, so there is some evidence that List selection took less time than Explosion. The absolute difference, however, is only small, as evident in the similar completion times. We also investigated the completion time differences between these two techniques for the two task parts: discovery and traversal. For the discovery part (i.e., the accumulated times from the start of a trial to the selection of the target cell), the mean times are 7.57 s (CI [6.79 s, 8.52 s]) with List and 5.23 s (CI [4.31 s, 6.36 s]) with Explosion (see Fig. 8(a)).
Since the upper CI bound for Explosion is smaller than the lower CI bound for List, Explosion is evidently faster than List in discovering target cells. We also checked the pair-wise ratio of List/Explosion, which is 1.45 (CI [1.27, 1.69]) and confirms that List selection needed more time than Explosion for object discovery (see Fig. 9(a)). As for traversal (i.e., the accumulated times for checking all neighbors of a cell), the average time for List is 54.84 s (CI [47.98 s, 65.12 s]), while for Explosion it is 62.26 s (CI [54.37 s, 71.49 s]) (see Fig. 8(b)). Because the CIs overlap a lot, we examined the pair-wise ratio to better analyze the difference. As Fig. 9(b) shows, the ratio for List/Explosion is 0.88 (CI [0.82, 0.98]), so there is some evidence that List selection is faster for traversal than Explosion.

Figure 7: Pair-wise differences for completion time: (a) the overall ratio, (b) the ratio for occluded cells, and (c) the ratio for surface cells.

Figure 8: Completion time (absolute mean time) in seconds for the two steps (List in yellow and Explosion in red): (a) target cell discovery and (b) neighborhood traversal.

Figure 9: Pair-wise differences for completion time in the two steps: the ratios for (a) discovery and (b) traversal.

Positions. We were also interested in the possible influence of the cell position on performance. We investigated the average completion time for occluded cells (Fig. 6(b)), which was 79.42 s (CI [69.83 s, 93.52 s]) with List and 88.58 s (CI [77.43 s, 102.33 s]) with Explosion. Because this difference of mean completion times is small and the CIs overlap, we again checked the pair-wise ratio, which is 0.90 (CI [0.84, 0.97]).
The upper bound of this CI is again close to but below 1.0, so there is some evidence that, with List, participants could finish the task more quickly than with Explosion when dealing with occluded cells. We did the same analysis for surface cells. Here, the average times are 51.62s (List; CI [45.05s, 61.23s]) and 54.92s (Explosion; CI [46.87s, 63.27s]), and the pair-wise ratio for List/Explosion is 0.94 (CI [0.86, 1.06]). We thus cannot find much evidence that, in assigning surface cells, List selection would be faster than Explosion.

§ 4.2 ACCURACY

We measured the accuracy of the assignments with the two techniques (List and Explosion) and the two positions. We calculated the accuracy by dividing the number of correct assignments by the total number of trials.

Techniques. We report the absolute mean accuracy values for the two techniques in Fig. 10 and the pair-wise ratio for comparison in Fig. 11. The accuracy was high for both techniques, so we keep three decimals for a better comparison. For List, the absolute mean accuracy is 0.987 (CI [0.963, 0.996]), while for Explosion the value is 0.933 (CI [0.892, 0.958]). From Fig. 10(a) we can see that all participants found at least 8 correct sisters (as every participant used each technique to make assignments for 10 cells). In addition, the fact that the CIs do not overlap provides evidence that List resulted in more accurate assignments than Explosion. We also analyzed the pair-wise ratio (List/Explosion) to better understand the difference, which was 1.06 (CI [1.03, 1.10]). This result provides evidence that List was more accurate than Explosion, although the mean accuracy values are similar and both high.

Positions. We also present the absolute mean accuracy values for the two positions in the two techniques in Fig. 10 and the pair-wise ratios between them in Fig. 11.
For occluded cells, the absolute mean values of List and Explosion are 1.000 (CI [NA, NA]) and 0.933 (CI [0.858, 0.967]), respectively (Fig. 10(b)). Using the List technique, all participants thus assigned all occluded cells correctly, and we can say that the List technique achieved more correct assignments than Explosion. The pair-wise ratio (List/Explosion), which turned out to be 1.10 (CI [1.03, 1.20]), confirms this finding, yet its lower bound being close to 1 makes this result only weak evidence. For the surface cells, the absolute mean values for the two selection techniques (List and Explosion) are 0.975 (CI [0.925, 0.992]) and 0.933 (CI [0.883, 0.958]). The largely overlapping CIs provide limited information about the difference. The pair-wise ratio is 1.05 (CI [1.01, 1.09]), which also provides only weak evidence that List performed more accurately than Explosion for surface cells.

§ 4.3 CLICKING RATIO

We also counted the click events both in the lists and in the 3D view. For both techniques, we separated out the clicks needed for rotation in the 3D view, as these were right clicks, in contrast to the left clicks in the list or 3D view for selection. Thus, we only counted clicks used to access specific cells. We define the clicking ratio as the average number of times participants clicked each neighbor to get the right answer, i.e., the click count divided by the number of neighbors. Ideally, participants click each neighbor only once to find the right sister, resulting in a clicking ratio of 1. In practice, however, participants usually clicked the same cell multiple times. We chose this variable as a factor to evaluate the efficiency of the selection techniques: the more this number deviates from 1, the lower the efficiency.

Techniques. We report the absolute mean values of the clicking ratio for the two techniques in Fig. 12(a).
List had the smaller absolute mean value with 1.37 (CI [1.32, 1.45]), while the value for Explosion was 1.70 (CI [1.58, 1.86]). Though the CIs do not overlap and there is thus evidence that List has a lower clicking ratio than Explosion, to further explore the differences we also calculated the pair-wise ratio of List/Explosion (Fig. 13(a)). The ratio turned out to be 0.84 (CI [0.77, 0.90]), which provides good evidence that List required fewer clicks than Explosion.

Positions. We also examined the absolute mean values of the clicking ratio for the two positions. The absolute mean values for occluded cells are 1.31 (List; CI [1.26, 1.38]) and 1.71 (Explosion; CI [1.56, 1.88]), respectively. The upper bound of List's CI being much smaller than the lower bound of Explosion's CI provides evidence that List required fewer clicks than Explosion. The pair-wise ratio (List/Explosion) being 0.81 (CI [0.73, 0.89]) confirms this assessment. For the surface cells, the mean values are 1.45 (List; CI [1.37, 1.56]) and 1.69 (Explosion; CI [1.58, 1.87]) as shown in Fig. 12(c). The confidence intervals are close, so we further checked the pair-wise ratio (List/Explosion), which is 0.88 (CI [0.82, 0.94]). This evidence supports that using List required fewer clicks than Explosion also for surface cells.

§ 4.4 TECHNIQUES USED IN COMBINATION

We analyzed the Combination technique individually because we always presented this technique to participants last: participants first had to learn the individual techniques. In Combination, participants were able to complete the task freely, with both List and

Figure 13: Pair-wise differences for clicking ratio: (a) the ratio overall, (b) the ratio for occluded cells, and (c) the ratio for surface cells.
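To make the clicking-ratio definition from Sect. 4.3 concrete, it can be computed per trial as in the following sketch. The click counts and the log layout are purely illustrative assumptions.

```python
def clicking_ratio(selection_clicks, neighbor_count):
    """Clicks used to access cells divided by the number of neighbors.
    A value of 1.0 is ideal (each neighbor clicked exactly once);
    larger values indicate repeated clicks and thus lower efficiency."""
    return selection_clicks / neighbor_count

# Synthetic per-trial logs: (left clicks on cells, neighbor count).
# Right clicks used for rotating the 3D view are excluded upstream.
trials = [(14, 10), (13, 9), (18, 12)]
ratios = [clicking_ratio(clicks, n) for clicks, n in trials]
mean_ratio = sum(ratios) / len(ratios)  # per-participant mean ratio
```

The per-participant mean ratios computed this way are what the bootstrap analysis then summarizes across participants.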
Figure 14: Clicking proportions of List/(List + Explosion) in the Combination task: (a) overall and (b) by neighbor count (for discovery + traversal; x represents the number of cell neighbors and y the clicking proportion).

Explosion available to them. We were interested in how participants would combine them and whether the neighbor count would influence their choice. We thus calculated the proportion of their click counts in the List condition (over List plus Explosion clicks together) to characterize the strategy, which we show in Fig. 14(a) (top bar; the Explosion click proportion is the complement of the List proportion). The absolute mean value of the list proportion is 0.87 (CI [0.85, 0.90]), meaning that participants clicked more frequently in the list widgets than in the 3D view (for discovery or traversal). We also calculated the proportions for discovery and traversal separately, which are 0.50 (CI [0.37, 0.63]) and 0.79 (CI [0.75, 0.83]), respectively. We also analyzed the list clicking proportion individually by cell neighbor count (Fig. 14(b)). As we noted already, however, the number of neighbors varied depending on the dataset, and some neighbor counts received only a few trials. We thus only analyzed those neighbor counts with more than 10 trials. In all cases, the average proportions are higher than 0.5, which means participants clicked more often in the list widgets than in the 3D view. Although the differences are small, we observed that the List click proportion increases with a growing number of neighbors. While these numbers suggest a strong preference for list interaction, this observation is skewed by the fact that by far the most clicks naturally happened in the traversal phase (0.082% on average).
Looking only at target cell discovery, however, in the post-study interview 13/24 participants stated that, after trying and adjusting their strategies, they finally chose to examine the exploded embryo in the 3D view to find the target cells, while the other 11/24 participants checked the list by scrolling from the top to the bottom. We show this difference of strategies in the click proportions in the two lower bars in Fig. 14(a). We also investigated, for the Combination task, the order in which participants chose to assign the cells. According to our logs, 8 participants always stuck to the list order, without taking the cells' positions into consideration. Another two participants switched strategies and eventually followed the list order. The others simply clicked on random orange cells they saw.

§ 4.5 TASK STRATEGIES

We were also interested in our participants' approaches to finding target cells and traversing the neighbors, especially for Explosion, and in their choice of methods in the Combination condition. Here we report the strategies based on participants' statements in the post-study interview, combined with our observations of the participants as they interacted during the experiment. In the List condition, all participants scrolled up and down the cell list to find the orange item and then traversed the neighbors by going through the neighbor list. Participants memorized the largest associated number and either the cell name or its position in the list to complete the task.

Because we provided no lists in the Explosion condition, participants could not rely on the same strategies as with List. We thus specifically asked them about their detailed strategies in the 3D explosion condition, organized their ideas, and grouped similar points. To help with traversal, 8/24 (33.3%) participants stated that they mentally divided the neighbors into different layers and zones based on the spatial placement.
To stay oriented, 7/24 (29.2%) participants rotated back to the original position every time they finished checking the associated number of one neighbor, while 4/24 (16.7%) tried to rotate the embryo around only one fixed axis. One participant kept the best candidate cell on top during traversal. Another participant observed the relative positions of the cells and mentally matched them to a shape such as a sphere or a triangle; he then traversed the neighbors by referring to his chosen shape's corner cells. Other participants tried to memorize the cell shape, the cells' relative 3D positions, and the currently largest number during the trial.

During the Combination task, 10/24 (41.7%) participants used the same steps as they did in List because they were afraid of getting lost in the 3D interaction. One person exclusively used the Explosion interaction in the Combination task because she found scrolling the long list tedious. Another 10 participants discovered target cells with Explosion and traversed neighbors with the List technique. Only 3/24 (12.5%) participants chose the technique based on the number of neighbors: when this number was small they used Explosion, and otherwise the List technique. Among them, two participants discovered target cells with direct interaction in the 3D view, while the other one searched for the target cells in the list.

§ 4.6 SUBJECTIVE FEEDBACK

In the post-study interview we asked about participants' preferences among the three techniques and their general thoughts on the interaction.

As Fig. 15 shows, more than half of the participants (16/24) liked the Combination selection most. Two participants considered Combination and List to be equally satisfying, while another one favored the Combination and Explosion techniques equally. The remaining 5/24 participants preferred the List technique. For this technique, participants appreciated its item order (e.g., "much easier to follow which have been clicked"). However, the interaction was troublesome (e.g., "was boring to scroll the list," "I had to fast move the mouse cursor between the lists on the right and 3D cells on the left"). Moreover, when the associated number was similar to the cell name by chance, it was easy to get confused (e.g., "I got messed up with the name and associated number. I forgot which one was the temporally best candidate cell."). Meanwhile, they stated that they did not pay attention to information such as the shape and relative 3D position of the cells because they only looked at the associated number in the 3D view and otherwise focused on the list ("[I] only remembered the numbers and did not examine the shape"). In the Explosion condition, participants appreciated the convenience of quickly clicking on the cells (e.g., "all [are] the interactions in the 3D view") and the usefulness of being able to control the distance between cells (e.g., "spreading out the cells is useful in targeting cells"), but they disliked the need to rotate the view because this led them to get lost and forget which cells they had already examined (e.g., "less useful in checking out neighbors," "it was easy to get lost when rotating the embryo ... I am not sure whether I have traversed all the cells or not"). For the Combination, participants liked the freedom to spread out cells and the convenience of the default order in the list ("supports both techniques and I could be quicker"). Nonetheless, some participants would just use the technique they had preferred in the previous two tasks and thought the combination was useless. Others reported confusion ("I struggled to choose the technique"). One participant also reported being bored and tired during the last task.

Figure 15: Accumulated participant preference ranks.
Note that we allowed participants to rank two techniques as their first choice and then counted none as the second, resulting in ranks 1, 1, and 3.

Commenting on the whole interaction, participants proposed some changes (e.g., "The interaction is good, and it will be better if there is a mark on the cells I have checked in all techniques," "[I] would like to have more context in the background of the 3D view to help orientation," "[you should] show the name of cells in 3D view so that I could have a name order to follow," and "hiding the least possible candidate cell manually would accelerate the process"). Some participants thought the two techniques should not be combined. One participant, e.g., stated that "List has an order and 3D view has another order (layer). These two orders do not have a similar logic or strategy and could not be combined. These two techniques in the same interface will disturb each other's use ... could present a 3D order based on the 3D position and link to 2D order in the list." Though most participants liked the explosion bar, one argued that horizontally moving the bar did not, for him, intuitively represent the conceptual increase of inter-cell distance.

§ 5 DISCUSSION

§ 5.1 PERFORMANCE DIFFERENCES

We found evidence that List led to more efficient (faster, fewer clicks) and more precise input than Explosion overall. This indicates that traditional list-based selection was more familiar to participants than 3D interaction, which was unfamiliar to many. Moreover, the List condition provided an order of the potential neighbors of a target, which supported participants in traversing every cell in the list without missing any, as well as in remembering the cell with the highest associated number, regardless of potential view manipulations in the 3D view.
In contrast to the overall results and the results for occluded cells, we did not find clear differences in completion time and accuracy between the two techniques for surface cells. This finding may be due to the fact that surface cells usually have fewer neighbors and a clearer arrangement, such that participants had fewer problems when traversing them in the 3D view.

We also found that direct interaction in the 3D view has advantages. While the List condition enabled participants to traverse neighborhoods faster than the Explosion technique, with the latter participants were faster in discovering the next target. This last point is probably due to the 3D view showing all remaining targets in a single view (with only some rotation necessary), whereas in the lists participants had to scroll to get to the next target. In the traversal, in contrast, the lists of potential neighbors had far fewer entries than the overall list of cells, so participants did not need to scroll and thus their speed improved. Moreover, the need to rotate the 3D view to traverse all neighbors often led to participants losing orientation, such that they no longer remembered which cells they had already looked at.

While this problem was apparent in our pool of participants, the situation may be very different in our envisioned application domain of plant biologists constructing lineage trees. Here, the experts will not look for numbers but instead investigate the potential sister cells based on the cell's overall shape as well as the size and shape of the shared surface between the cells, properties that are essential for making the lineage decision. This means that the plant biologists not only inherently have to focus much more on the 3D view, but they also do not necessarily traverse all neighbors because they can easily reject some candidates based on their shape.
Because we had to use a number associated with the cells as a proxy for the biologists' experience, our participants, in contrast, only focused on this abstract property and thus could concentrate almost entirely on the list as their main reference point, which in turn likely led to the List condition's performance advantage.

§ 5.2 SUBJECTIVE RATINGS

These assumptions are also supported by our participants' qualitative feedback. In particular, they preferred the List technique because they felt it led to a lower mental load, requiring less memorization. Essentially, because they were not experts, they turned our envisioned spatial decision into an abstract task: they did not need to examine the cells' shapes or similar properties. They thus focused on and used the arbitrary order of cells in the List condition. Consequently, our participants also disliked that they had to move back and forth between list and 3D view in the List condition.

In the Explosion condition, in contrast, participants liked being able to explode the embryo, to freely explore it, and to have an overview of and direct access to the cells. The downside of this aspect was the lack of a clear order of the elements that they could follow to traverse all neighbors. Moreover, the needed rotations made participants more likely to lose orientation in the 3D view, and consequently also to forget which of the already visited cells had the highest associated number. Participants had to memorize this intermediate result based on the cell's shape and 3D position, which was much harder for them than memorizing a position or a label in the 1D list. While these aspects made the task more mentally demanding compared to the List condition, experts likely will not suffer from the same problems, as we noted above.
Another problem with the Explosion condition was that the discovery phase and the traversal phase needed different view configurations: in the former, participants needed to see all cells of the embryo, while in the latter they needed to focus on only the 1-neighborhood of a single cell. We had specifically ensured that the positions of the cells did not change when switching between overall and focused view to maintain spatial continuity; yet this meant that in the Explosion condition participants had to frequently manipulate the view (e.g., adjust the zoom factor). In the List condition, in contrast, we automatically centered the view on a newly selected target because people focused on the overall cell list when selecting targets, which led to much less need for view adjustments.

§ 5.3 IMPLICATIONS

One of our main insights is that 3D interaction techniques work best for truly three-dimensional tasks that have no additional informative tags. When we asked participants to perform a purely 3D action, such as discovering colored objects among a set of exploded cells of the embryo, the 3D Explosion technique performed well, and our participants used it when they had the choice. In contrast, for tasks like the traversal, which our participants converted into an abstract search task as discussed above, the List technique was faster, more accurate, and preferred. As we discussed in Sect. 5.1, for the realistic task in the biology domain the actual sister cell selection is likely much more of a 3D task than our proxy, so we hypothesize that the Explosion technique will be a strong competitor (but this will have to be verified in a separate experiment).

We also found that the use of explosion techniques as an interaction metaphor makes it possible to access objects in tightly packed 3D environments, such as for selection in our application.
For discovering target cells, our participants increased the distance between cells and zoomed out to get a clear overview of the embryo and the relative positions of the cells, while for traversal they tended to shorten the distance and zoom in so that they could examine cells and find a structure to traverse. Our participants also reported that they would freely adjust the distance between cells to get a better overview or to check cell details.

Next, the Combination seems to combine the advantages of the individual techniques. While we always showed it last to participants and thus cannot rule out order effects for its performance, participants clearly preferred this type of interface over only the (1D) List or the (3D) Explosion interaction. It allows users to freely choose which technique works best for them, for a given task and dataset, and also allows them to transition to 3D interaction as they progress and as 3D aspects become more important. Nonetheless, even though with the Combination both individual interaction methods were available to participants, constantly switching between 3D view and lists is inconvenient. Participants who preferred to use List chose strategies that involved manipulating the objects in the right part of the 3D view, which is placed close to the lists, while others tried to interact directly in the 3D view.

While we studied the specific scenario of cell division analysis in botany, our results apply to many other settings in which objects need to be selected in dense environments. For instance, machine assemblies [47] and datasets in brain connectomics [9] share similar properties. In such settings, experts similarly need to be able to select parts with virtually no space in-between and have to be able to understand spatial and logical relationships between neighbors. We thus believe that our findings can inform work in such fields.

§ 5.4 LIMITATIONS

Naturally, our work is not without limitations.
We already pointed out that, while we aimed to replicate the biologists' spatial analysis task as well as possible in our experimental setting, our proxy for "experience" allowed participants to turn the 3D spatial analysis task into an abstract search task, and we have explained the implications of this change in Sect. 5.1. While we plan an empirical validation with experts in the future, we think that our work still sheds valuable light on how we can realize selection and access tasks in tight 3D environments.

Beyond this point, the fact that our IRB required us to conduct our work via video conferencing may also have affected the outcome. Naturally, participants had different types of equipment (screen resolution and size, PC power, general environment, etc.). An on-site experiment may have resulted in a more controlled environment and procedure. Nonetheless, this spread of environments reflects real-world working conditions, so we do not see this point as a strong limitation. Next, our specific choice of application case and, consequently, study dataset is a unique setting: all cells in the dataset were of roughly the same size and were "well" distributed. Other datasets in other application domains, even if they are densely packed, may have different properties and may thus lead to slightly different selection performance. Yet we believe that our general conclusions still hold. Finally, we only tested manual selection techniques. In the future, however, we foresee the use of machine learning (ML) approaches to support the biologists in establishing the cell lineage; the interaction requirements will thus change from manual selection to ML supervision and verification.

§ 6 CONCLUSION

We have advanced our understanding of interaction techniques for the selection of objects in dense 3D environments.
We saw that a list-based selection has advantages when the number of elements is large and when the needed information can be represented in (or "projected" to) lists. We also saw, however, that if the relevant criteria are three-dimensional properties, then an explosion-based selection can have advantages, in particular when the target audience is familiar with orienting themselves in 3D space. A combination of both techniques, ultimately, provides the best of both worlds.