# Divergence-Free Copy-Paste For Fluid Animation Using Stokes Interpolation
Category: Research
## Abstract
Traditional grid-based fluid simulation is often difficult to control and costly to perform. Therefore, the ability to reuse previously computed simulation data is a tantalizing idea that would have significant benefits to artists and end-users of fluid animation tools. We introduce a remarkably simple yet effective copy-and-paste methodology for fluid animation that allows direct reuse of existing simulation data by smoothly combining two existing simulation data sets together. The method makes use of a steady Stokes solver to determine an appropriate transition velocity field across a blend region between an existing simulated outer target domain and a region of flow copied from a source simulation. While prior work suffers from non-divergence and associated compression artifacts, our approach always yields divergence-free velocity fields, yet also ensures an optimally smooth blend between the two input flows.
## 1 INTRODUCTION
The ubiquitous copy-paste metaphor from text and image processing tools is popular because it is conceptually simple and significantly reduces the need for redundant effort on the part of the user. A copy-paste tool for fluid simulation could offer similar benefits while reducing the total computational effort expended to achieve a desired result through the reuse of existing simulation data. This paper proposes exactly such a scheme.
The control of fluids has long been a subject of interest in computer animation: typical strategies that have been explored include space-time optimization (e.g., [18, 35]), space-time interpolation (e.g., [26, 33]), and approaches that involve the application of some combination of user-designed forces, constraints, or boundary conditions (e.g., [19, 24, 32, 34, 36, 37]). Because the last of these families is typically the least expensive and offers the most direct control, it has generally been the most effective and widely used in practice. Our method also falls into this category.
Inspired by Poisson image editing [23], recent work by Sato et al. [28] hints at the potential power of a copy-paste metaphor for fluids. Unfortunately, their approach suffers from problematic nonzero divergence artifacts at the boundary of pasted regions, which depend heavily on the choice of input fields. We therefore introduce a new approach that provides natural blends between source and target regions yet is relatively simple to set up, requires solving only a standard Stokes problem over a narrow blend region at each time step, and always produces divergence-free vector fields.
## 2 RELATED WORK
### 2.1 Controlling fluid animation
Artistic control of fluid flows has been a subject of interest from the earliest days of three-dimensional fluid animation research. Foster and Metaxas [11] proposed a variety of basic control mechanisms through the imposition of initial or boundary values on quantities such as velocity, pressure, and surface tension. A wide range of subsequent methods has been proposed to enable various forms of control, which we review below.
One quite common approach is to apply forces or optimization approaches to encourage a simulation to hit particular target keyframes for the density or shape of smoke or liquid [8, 18, 22, 29, 34, 35]. Another strategy makes use of multiple scales or frequencies, using a precomputed or procedural flow to describe the low-resolution motion and allowing a new physical simulation to add in high-frequency details [10, 19-21, 34].
Other approaches aim to work more directly on the fluid geometry, rather than the velocity field. For example, new editing metaphors have been proposed, such as space-time fluid sculpting [17] and fluid-carving [9]; the latter is conceptually similar to seam-carving from image/video editing [1]. Another direct geometric approach seeks to directly interpolate the global fluid shape and motion [26, 33]. These strategies generally require the overall simulation to already be relatively close to the desired target behavior.
Approaches that rely on the direct application of velocity boundary conditions on the fluid flow are similar to ours in some respects [24, 25, 32, 36]. Often these have been used to cause liquid to follow a target motion or character, with varying degrees of "looseness" allowed in order to retain a fluid-like effect. They have not been used to combine existing simulations.
Another useful task in liquid animation is to insert a localized 3D dynamic liquid simulation, such as the region around a ship or swimming character, into a much larger surrounding procedural ocean or similar model. This has been achieved through the use of non-reflecting boundary conditions [6, 30]. These approaches focus on simulating the interior surface region of a liquid and smoothly damping out the surface flow to match the prescribed exterior model. This contrasts with our copy and paste problem, where both the interior and exterior are presimulated flows that must be combined together.
The closest method to ours is of course that of Sato et al. [28], who first proposed the copy-paste fluid problem. Their work also begins from the Dirichlet energy; however, through an ad hoc substitution of the input field's curl, they arrive at a new energy that minimizes the squared divergence of the velocity field plus the difference of the curl of the output and input vector fields. This formulation penalizes divergence rather than constraining it to be zero, and this likely accounts for the presence of undesired erroneous divergence in their results. By contrast, our approach is always strictly divergence-free.
### 2.2 Stokes flow in computer graphics
Steady (time-independent) Stokes flow is an approximation that is appropriate when momentum is effectively negligible, as indicated by a low Reynolds number. In computer graphics this approximation has been used in the context of paint simulation [4] and for the design of fluidic devices [7]. The unsteady (time-dependent) variant has also been used as a substep within a more general Navier-Stokes simulator [16] for Newtonian fluids. Closest to our work is that of Bhattacharya et al. [5], who use the Stokes equations to fill a volume with smooth velocities, as an alternative to simple velocity extrapolation or potential flow approximations; they then use the generated field as a force to influence a liquid simulation. Their approach is shown to maintain rotational motion better than existing alternative interpolants. We expand on this idea to address the copy-paste problem.
## 3 METHOD
Our method takes as input the per-timestep vector field data for two complete grid-based incompressible fluid simulations (denoted source and target), along with geometry information dividing the domain of the final animation into an inner source region ${\Omega }_{s}$ , an outer target region ${\Omega }_{t}$ , and a blending region ${\Omega }_{b}$ . In the final time-varying vector field to be assembled, the data in ${\Omega }_{s}$ and ${\Omega }_{t}$ are simply replayed from the inputs; the central task we must solve is to generate a "natural" vector field for the blend region ${\Omega }_{b}$ in between for all time steps.
We would like the vector field we generate in the blend region to possess a few key characteristics. First, the velocities at the boundaries of the blend region (on either ${\Gamma }_{t} = {\Omega }_{t} \cap {\Omega }_{b} = \partial {\Omega }_{t}$ or ${\Gamma }_{s} = {\Omega }_{s} \cap {\Omega }_{b} = \partial {\Omega }_{s}$ ) should exactly match the velocities of the corresponding input field - this is essentially the familiar no-slip boundary condition often used for kinematic solids or prescribed inflows/outflows in Newtonian fluids. Second, the vector field should be relatively smooth, since our objective is essentially a special kind of velocity interpolant. With only these two stipulations, a very natural choice is harmonic interpolation [15]. As suggested by Sato et al. [28], this can be expressed as minimizing the Dirichlet energy:

$$
\mathop{\operatorname{argmin}}\limits_{\mathbf{u}_b} \iiint_{\Omega_b} \left\| \nabla \mathbf{u}_b \right\|^2 \tag{1}
$$

$$
\text{subject to } \mathbf{u}_b = \mathbf{u}_s \text{ on } \Gamma_s,
$$

$$
\mathbf{u}_b = \mathbf{u}_t \text{ on } \Gamma_t.
$$

The minimizer satisfies $\nabla \cdot \nabla {\mathbf{u}}_{b} = 0$, i.e., a componentwise Laplace equation on the velocity. (From here on we diverge from Sato et al., who proceed instead to manipulate the Dirichlet energy into a form that yields a vector Poisson equation.)
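
To make the componentwise construction concrete, the following toy sketch (our own illustrative code, not the paper's implementation) relaxes one velocity component toward the discrete Laplace equation on a small grid, holding all values outside the blend band fixed as Dirichlet data. The grid size, band geometry, and boundary values here are arbitrary assumptions chosen for illustration.

```python
import numpy as np

def harmonic_blend(component, mask, n_iters=2000):
    """Jacobi relaxation of one velocity component toward the discrete
    (5-point) Laplace equation on cells where mask is True; all other
    cells act as fixed Dirichlet boundary data."""
    u = component.copy()
    for _ in range(n_iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]   # update only the blend-region interior
    return u

# Toy setup: a horizontal blend band (rows 4..11) of a 16x16 grid, with
# "source" data (1.0) above it and "target" data (0.0) below it.
comp = np.zeros((16, 16)); comp[:4, :] = 1.0
band = np.zeros((16, 16), dtype=bool); band[4:12, :] = True
blended = harmonic_blend(comp, band)
# The blended component transitions linearly across the band.
```

Pure harmonic blending of this kind is exactly the baseline that, as discussed below and in Section 5, accumulates divergence; it is shown here only as the starting point that the Stokes formulation improves upon.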
The Dirichlet energy alone is clearly insufficient, because it will prioritize smoothness at the cost of introducing divergence. Because we have assumed an incompressible flow model for our input (and desired output), the velocities in the blend region should not create or destroy material. A natural solution would be to simply apply a standard pressure projection as a post-process to convert the harmonic velocity field above to be incompressible. Unfortunately, this can cause the velocity field to deviate significantly from the harmonic input. Moreover, as we show in Section 5, pressure projection enforces only a free-slip (no-normal-flow) condition, which allows objectionable tangential velocity discontinuities to be introduced at the blend region's boundaries.
We instead simultaneously combine the divergence-free stipulation with harmonic interpolation through the following formulation:

$$
\mathop{\operatorname{argmin}}\limits_{\mathbf{u}_b} \iiint_{\Omega_b} \left\| \nabla \mathbf{u}_b \right\|^2 \tag{2}
$$

$$
\text{subject to } \nabla \cdot \mathbf{u}_b = 0 \text{ on } \Omega_b,
$$

$$
\mathbf{u}_b = \mathbf{u}_s \text{ on } \Gamma_s,
$$

$$
\mathbf{u}_b = \mathbf{u}_t \text{ on } \Gamma_t.
$$

This optimization problem provides the smoothest velocity field that interpolates the boundary data while preserving incompressibility. If we enforce the constraint with a Lagrange multiplier $p$ , the optimality conditions turn out to yield exactly the (constant viscosity) steady Stokes equations,

$$
\nabla \cdot \nabla \mathbf{u}_b - \nabla p = 0, \tag{3}
$$

$$
\nabla \cdot \mathbf{u}_b = 0, \tag{4}
$$

consistent with Helmholtz's minimum dissipation theorem [2]. We therefore refer to this construction as Stokes interpolation.
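
As a concrete (and deliberately simplified) illustration of Stokes interpolation, the sketch below assembles and solves the discrete optimality system, momentum balance (3) plus incompressibility (4), on a toy MAC-grid blend band that is periodic in x, with uniform Dirichlet data on its bottom (target) and top (source) borders. The band geometry, the uniform boundary values, and the dense least-squares solve are all simplifying assumptions of this sketch, not the paper's setup.

```python
import numpy as np

def stokes_blend_band(nx, ny, h, u_bot, u_top, v_bot, v_top):
    """Steady Stokes interpolation on a horizontal blend band, periodic in x.
    MAC layout: pressure p at the (ny, nx) cell centers; u at vertical faces
    (ny rows, nx per row, periodic); v at horizontal faces ((ny+1) rows, nx),
    where the j=0 and j=ny rows are Dirichlet data from the target/source
    flows. Tangential wall values of u use ghost-cell linear extrapolation."""
    nu, nv = ny * nx, (ny - 1) * nx
    N = nu + nv + ny * nx
    iu = lambda j, i: j * nx + (i % nx)
    iv = lambda j, i: nu + (j - 1) * nx + (i % nx)     # valid j = 1 .. ny-1
    ip = lambda j, i: nu + nv + j * nx + (i % nx)
    A, b = np.zeros((N + 1, N)), np.zeros(N + 1)
    row = 0
    for j in range(ny):            # x-momentum: Laplacian(u) - dp/dx = 0
        for i in range(nx):
            A[row, iu(j, i)] += -4.0
            A[row, iu(j, i - 1)] += 1.0
            A[row, iu(j, i + 1)] += 1.0
            if j > 0:      A[row, iu(j - 1, i)] += 1.0
            else:          A[row, iu(j, i)] += -1.0; b[row] -= 2.0 * u_bot
            if j < ny - 1: A[row, iu(j + 1, i)] += 1.0
            else:          A[row, iu(j, i)] += -1.0; b[row] -= 2.0 * u_top
            A[row, ip(j, i)] += -h
            A[row, ip(j, i - 1)] += h
            row += 1
    for j in range(1, ny):         # y-momentum at interior v faces
        for i in range(nx):
            A[row, iv(j, i)] += -4.0
            A[row, iv(j, i - 1)] += 1.0
            A[row, iv(j, i + 1)] += 1.0
            if j > 1:      A[row, iv(j - 1, i)] += 1.0
            else:          b[row] -= v_bot
            if j < ny - 1: A[row, iv(j + 1, i)] += 1.0
            else:          b[row] -= v_top
            A[row, ip(j, i)] += -h
            A[row, ip(j - 1, i)] += h
            row += 1
    for j in range(ny):            # continuity: du/dx + dv/dy = 0 per cell
        for i in range(nx):
            A[row, iu(j, i + 1)] += 1.0
            A[row, iu(j, i)] += -1.0
            if j < ny - 1: A[row, iv(j + 1, i)] += 1.0
            else:          b[row] -= v_top
            if j > 0:      A[row, iv(j, i)] += -1.0
            else:          b[row] += v_bot
            row += 1
    A[row, ip(0, 0)] = 1.0         # pin the constant pressure mode
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:nu].reshape(ny, nx), x[nu:nu + nv].reshape(ny - 1, nx)

# Matching vertical flow, mismatched horizontal flow: the band recovers
# the expected linear shear profile between the two tangential values.
u, v = stokes_blend_band(4, 4, 1.0, u_bot=0.0, u_top=1.0, v_bot=1.0, v_top=1.0)
```

Note that when the normal boundary fluxes match (here, v_bot = v_top), the compatibility condition discussed below is satisfied and the system has an exact solution; a dense solver is used only because the toy grid is tiny.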
As noted in Section 2, we are not the first to suggest using the Stokes equations as a fluid interpolant: Bhattacharya et al. [5] first proposed steady state Stokes flow interpolation. However, our derivation and discussion above provides additional justification and insight into the variational nature of this approach. More importantly, Bhattacharya et al. did not consider the fluid cut-and-paste problem that we address in the current work.
A minor issue is that, for there to exist a valid solution, the boundary conditions must satisfy a compatibility condition; that is, the integrated flux across the two boundaries must be consistent with the condition of incompressibility on the blend region's interior:

$$
\iiint_{\Omega_b} \nabla \cdot \mathbf{u}^{n+1} \, dV = 0 = \iint_{\Gamma_s} \mathbf{u}_s^{n+1} \cdot \mathbf{n} \, dA + \iint_{\Gamma_t} \mathbf{u}_t^{n+1} \cdot \mathbf{n} \, dA \tag{5}
$$

Fortunately, since the input vector fields both come from simulations that are themselves incompressible, the divergence theorem ensures that both the source copied patch and the target region to be pasted over have zero net flux across their respective boundaries - hence compatibility is guaranteed.
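
The discrete analogue of this compatibility argument is easy to verify numerically: any MAC field built from a streamfunction is divergence-free cell-by-cell by construction, and the net flux through the border of any sub-region then telescopes to zero. The grid size and random streamfunction below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
nx = ny = 8
# Any streamfunction psi sampled at grid nodes induces a MAC velocity
# field (u, v) whose per-cell divergence vanishes identically:
psi = rng.standard_normal((ny + 1, nx + 1))
u = psi[1:, :] - psi[:-1, :]             # u at vertical faces, shape (ny, nx+1)
v = -(psi[:, 1:] - psi[:, :-1])          # v at horizontal faces, shape (ny+1, nx)
div = (u[:, 1:] - u[:, :-1]) + (v[1:, :] - v[:-1, :])
assert np.allclose(div, 0.0)
# Net flux out of the sub-region covering cells [2:6, 2:6] equals the sum
# of the cell divergences inside it, and is therefore (numerically) zero:
flux = u[2:6, 6].sum() - u[2:6, 2].sum() + v[6, 2:6].sum() - v[2, 2:6].sum()
assert abs(flux) < 1e-12
```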
We arrive at the following algorithm. For each timestep, extract the boundary velocities from the input source and target simulations. Perform a steady Stokes solve on ${\Omega }_{b}$ as we have described to produce ${\mathbf{u}}_{b}^{n + 1}$ . Finally, directly fill in the inner and outer ${\Omega }_{s}$ and ${\Omega }_{t}$ regions with velocity from the input data ${\mathbf{u}}_{s}^{n + 1}$ and ${\mathbf{u}}_{t}^{n + 1}$ , respectively. The resulting time-varying vector field is divergence-free and offers an attractively smooth blend between source and target flows.
Note that, since the combined vector field differs significantly from both its inputs, the flow of any passive material (such as smoke density or tracer particles) must be recomputed from scratch by advection through the new field in order to yield a consistent visual result. This can usually be done efficiently and in parallel, since each (passive) particle's motion affects no other particles.
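
A minimal sketch of this re-advection step is shown below, using midpoint (RK2) time stepping and an analytic stand-in velocity field; the function names and the rigid-rotation test field are our own illustrative choices, not part of the paper's pipeline.

```python
import numpy as np

def advect_particles(pos, vel_at, dt, n_steps):
    """Advect passive marker particles through a (time-independent here)
    velocity field with the midpoint (RK2) scheme. Each particle is
    independent, so the update vectorizes over all particles at once."""
    pos = pos.copy()
    for _ in range(n_steps):
        mid = pos + 0.5 * dt * vel_at(pos)   # half-step lookahead
        pos = pos + dt * vel_at(mid)         # full step with midpoint velocity
    return pos

# Stand-in divergence-free field: rigid rotation u(x, y) = (-y, x).
def rotation(p):
    return np.stack([-p[:, 1], p[:, 0]], axis=1)

pts = np.array([[1.0, 0.0], [0.0, 2.0]])
out = advect_particles(pts, rotation, dt=1e-3, n_steps=1000)
# A rigid rotation preserves each particle's distance from the origin.
```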
## 4 IMPLEMENTATION
While our concept is very general, our implementation assumes that all simulations are arranged on a standard staggered ("MAC") grid [14]. This provides a natural infrastructure on which to discretize the Stokes equations on the blend region, via centered finite differences. The boundary between the blend region and the surrounding source and target flow fields is assumed to lie on axis-aligned grid faces between cells (although this could potentially be generalized to irregular cut-cells if desired [3, 16]). Where needed to ensure precise no-slip velocity conditions at the exact face midpoints on voxelized boundaries of the blend region, we make use of the usual ghost fluid method [12] for the Laplace operator in (3). To solve the Stokes linear system at each step, we use the Least Squares Conjugate Gradient solver provided by the Eigen library [13], with a tolerance of $5 \times {10}^{-5}$. (Other options for solving indefinite systems, such as SYMQMR or MINRES, would also be appropriate [27].)
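
The reason an indefinite-system solver is needed is that the discrete Stokes system is a symmetric saddle-point (KKT) matrix, with an empty pressure-pressure block, so Krylov methods built for SPD matrices do not apply directly. As a small illustration, the snippet below uses SciPy's MINRES (a stand-in for the Eigen solver actually used here) on a toy symmetric indefinite matrix; the matrix and right-hand side are arbitrary.

```python
import numpy as np
from scipy.sparse.linalg import minres

# A tiny symmetric indefinite system: the zero diagonal entry mimics the
# empty pressure-pressure block of the Stokes saddle-point matrix.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, 0.5])
x, info = minres(A, b)
assert info == 0                       # 0 signals convergence
residual = np.linalg.norm(A @ x - b)   # small, per the solver tolerance
```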
## 5 RESULTS
We now consider some illustrative scenarios to demonstrate the behaviour of our method. Most of our figures make use of passive marker particles with alternating colors in initially horizontal rows to better highlight the developing flow structure, but we strongly encourage the reader to review our supplemental video to assess the motion more fully.
Our first scenario (Figure 1) consists of a static solid disk in a vertical wind-tunnel scenario, with inflow at the top and outflow at the bottom (particles leaving the bottom boundary re-enter at the top). We wish to paste the disk and its surroundings from the source simulation into an even simpler empty vertically translating wind-tunnel target simulation. This yields a smooth divergence-free combination of the two flows, where the flow outside of the blend region is completely undisturbed. Our result necessarily differs from the source animation, since in the source animation the presence of the disk globally disturbed the flow; our Stokes interpolation approach must therefore deform the flow more strongly in the blend region to compensate, yet we still achieve a visually plausible flow (Figure 2).
Figure 1: Basic Setup: Our simplest scenario involves copying the flow around a disk from its source simulation (left, with region to be copied surrounded in blue) into an obstacle-free target simulation (middle). The result of our method is a new smoothly combined flow (right). The blue lines denote the inner and outer borders of the blend region over which we apply our Stokes interpolation. (The same frame of animation is shown in all three images.)
Figure 2: Merged Flow Over Time: A few frames of the edited animation result of our approach based on the scenario described in Figure 1.
Next, we consider our method in comparison to two other obvious alternatives, as discussed in Section 3: componentwise harmonic interpolation, and post-projected harmonic interpolation. Pure harmonic interpolation seems effective at first glance, but unfortunately suffers from non-negligible divergence, as shown in Figure 3.
Figure 3: Harmonic Interpolation: Harmonic interpolation of the velocity across the blend region yields a somewhat plausible flow (left), but suffers from large divergence (right). Red indicates positive divergence, blue indicates negative divergence. The divergence gradually induces greater clumping and spreading of the particles, as seen in the middle of the left image.
A possible improvement is to post-process the harmonic result with a projection to a divergence-free state. Unfortunately, while this successfully removes divergence, the natural free-slip conditions of the pressure projection reintroduce tangential slip along the borders of the blend region, leading to objectionable motion artifacts in the flow. In the wind-tunnel scenario, the vertical component of velocity suffers from discontinuities at blend region borders, leading to visible grid-aligned shearing of the flow (Figure 4).
Figure 4: Projected Harmonic Interpolation: Our Stokes interpolation approach (left) yields continuous velocity fields. However, under projected harmonic interpolation (right), undesirable free-slip conditions introduce tangential discontinuities in the flow velocity at blend region borders, seen here as positional discontinuities in the rows of colored particles at far left and right.
To further stress-test our method, we consider some challenging scenarios analogous to those suggested by Sato et al. [28]. We combine flows in which the source and target differ in direction or speed. In Sato's approach, both larger speed and angle deviations lead to more severe failures of the divergence-free condition (we refer the reader to the secondary supplemental video accompanying that paper). Figure 5 shows the same test as we performed in our earlier examples, except that we have changed the ambient flow direction of the source simulation to have steadily increasing angles, including an example where the flow direction is completely reversed. While this leads to an increasingly unnatural look, the resulting flow field is still continuous, smooth on the blend region interior, and divergence-free, independent of this artistic decision. Similarly, Figure 6 performs a test in which the speed of the source (inner) simulation is slower or faster than the target (outer) simulation. Once again, more severe speed differences lead to more unusual motions in the blend region in order to compensate. For example, when the speed ratio between source and target is 3, more elaborate interior circulation of the flow in the blend region becomes necessary to satisfy the incompressibility condition. However, because the source and target are divergence-free and therefore provide compatible boundary conditions, the result is still correctly divergence-free.

A further point to note about these stress tests is that the more severe cases induce strong vortices that cause gaps to open in the flow. However, this is not due to divergence; rather, typical small numerical errors in particle trajectories due to interpolation and advection cause the particles to spread out from these points.

Lastly, we consider a few slightly more complex scenarios. Figure 7 shows our basic scenario again but using a rectangular obstacle instead of a disk. Figure 8 shows a scenario in which the user replaces a rectangular obstacle with a disk. Finally, in Figure 9, we paste a disk obstacle into a scene containing three rectangles, where the disk replaces the middle rectangle. Because of the additional obstacles, the flow structure is more complex. In this example, we tightened the blend region to fit more closely around the paste region. In all cases a plausible flow is constructed.

Notably, because a disk and a rectangle lead to different downstream motions in their respective wakes, a close inspection of the motion in these regions of our results reveals slightly unnatural motions, as our interpolant diligently tries to transition between two different flow structures. Fortunately, such effects are fairly subtle unless one is specifically looking for them. Ultimately, Stokes interpolation provides the best available solution under the stated constraints (smoothness, incompressibility, interpolation of boundary values), and it is up to the user to apply their judgment regarding whether a proposed flow edit achieves the desired effect.

Figure 5: Varying Angles: In these scenes, the outer flow is vertical while the pasted inner flow from the source simulation has flow direction with a relative angle of: $0^\circ$ (top-left), $45^\circ$ (top-right), $90^\circ$ (bottom-left), and $180^\circ$ (bottom-right). In all cases, the flow remains divergence-free.

## 6 CONCLUSIONS AND FUTURE WORK

We have presented an approach to the fluid copy-paste problem that guarantees smooth and divergence-free fields by solving a steady Stokes problem at each time step to fill in a blend region between the source and target flow regions.

Our work suggests several directions to explore in future work. First, for simplicity we assumed axis-aligned rectangles for the copy-paste region, similar to basic region-selection in image editing, but it could be useful to extend our approach to more general (lasso-type) selection regions, either in a voxelized fashion or using irregular cut-cells $\left\lbrack {3,{16}}\right\rbrack$ for smoother shapes. This would add greater artistic flexibility, and may render the blend-region borders less apparent.

The mathematics underlying our approach extends naturally to 3D, although providing a manageable user interface for selecting and placing time-dependent volumetric flow regions becomes more challenging. This would be interesting to explore.

Another intriguing question is whether even better behavior at blend region borders could be achieved by replacing our Dirichlet energy with a higher-order energy. At present, the no-slip condition enforces matching of the velocity value at the boundaries, but not its gradient. Minimizing instead a squared Laplacian energy (see e.g., [31]), still subject to incompressibility, would lead to a bilaplacian operator on velocity. This is conceptually similar to replacing linear interpolation with cubic interpolation. While it would lead to a more challenging linear system to solve (in terms of conditioning), it may be able to offer a value- and gradient-matched divergence-free blend field.
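
Concretely, this higher-order variant could be posed as follows (a sketch only; boundary conditions on both the velocity and its normal derivative would then be required):

$$
\mathop{\operatorname{argmin}}\limits_{\mathbf{u}_b} \iiint_{\Omega_b} \left\| \Delta \mathbf{u}_b \right\|^2 \quad \text{subject to } \nabla \cdot \mathbf{u}_b = 0,
$$

whose optimality conditions, with a pressure-like multiplier $p$, take the form $\Delta^2 \mathbf{u}_b - \nabla p = 0$ together with $\nabla \cdot \mathbf{u}_b = 0$.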

Finally, a challenging unanswered question in fluid animation more broadly is what makes a fluid motion perceptually "realistic" from a human perspective, and how much deviation from physical accuracy can safely be tolerated in visual applications. A metric

Figure 6: Varying Speeds: In these scenes, the outer flow has a fixed speed while the pasted inner flow from the source simulation has a speed ratio of: 1.0 (top-left), 0.75 (top-right), 1.5 (bottom-left), and 3.0 (bottom-right). In all cases, the flow remains divergence-free.

Figure 7: Pasting a rectangle into a flow. From left to right: source scene, target scene, result.

of this kind could allow one to quantify more concretely whether a proposed flow edit is successful or harmful.

## REFERENCES

[1] S. Avidan and A. Shamir. Seam carving for content-aware image resizing. In ACM SIGGRAPH 2007 papers, pp. 10-es. 2007.

[2] G. K. Batchelor. An introduction to fluid dynamics. Cambridge University Press, 2000.

[3] C. Batty, F. Bertails, and R. Bridson. A fast variational framework for accurate solid-fluid coupling. ACM Transactions on Graphics (TOG), 26(3):100-es, 2007.

[4] W. Baxter, Y. Liu, and M. C. Lin. A viscous paint model for interactive applications. Computer Animation and Virtual Worlds, 15(3-4):433-441, 2004.

[5] H. Bhattacharya, M. B. Nielsen, and R. Bridson. Steady state stokes flow interpolation for fluid control. In Eurographics (Short Papers), pp. 57-60. Citeseer, 2012.

[6] M. Bojsen-Hansen and C. Wojtan. Generalized non-reflecting boundaries for fluid re-simulation. ACM Transactions on Graphics (TOG), 35(4):1-7, 2016.

[7] T. Du, K. Wu, A. Spielberg, W. Matusik, B. Zhu, and E. Sifakis. Functional optimization of fluidic devices with differentiable stokes flow. ACM Transactions on Graphics (TOG), 39(6):1-15, 2020.

[8] R. Fattal and D. Lischinski. Target-driven smoke animation. ACM Trans. Graph. (SIGGRAPH), 23(3):441-448, 2004.

Figure 8: Replacing a rectangle with a disk. From left to right: source scene, target scene, result.

Figure 9: Replacing a rectangle with a disk in a flow with additional obstacles, using a narrow blend region. From left to right: source scene, target scene, result.

[9] S. Flynn, P. Egbert, S. Holladay, and B. Morse. Fluid carving: intelligent resizing for fluid simulation data. ACM Transactions on Graphics (TOG), 38(6):1-14, 2019.

[10] Z. Forootaninia and R. Narain. Frequency-domain smoke guiding. ACM Transactions on Graphics (TOG), 39(6):1-10, 2020.

[11] N. Foster and D. Metaxas. Controlling fluid animation. In Computer Graphics International, p. 178, 1997.

[12] F. Gibou, R. P. Fedkiw, L.-T. Cheng, and M. Kang. A second-order-accurate symmetric discretization of the poisson equation on irregular domains. Journal of Computational Physics, 176(1):205-227, 2002.

[13] G. Guennebaud, B. Jacob, et al. Eigen. URL: http://eigen.tuxfamily.org, 2010.

[14] F. H. Harlow and J. E. Welch. Numerical calculation of time-dependent viscous incompressible flow of fluid with free surface. The Physics of Fluids, 8(12):2182-2189, 1965.

[15] P. Joshi, M. Meyer, T. DeRose, B. Green, and T. Sanocki. Harmonic coordinates for character articulation. ACM Transactions on Graphics (TOG), 26(3):71-es, 2007.

[16] E. Larionov, C. Batty, and R. Bridson. Variational stokes: a unified pressure-viscosity solver for accurate viscous liquids. ACM Transactions on Graphics (TOG), 36(4):1-11, 2017.

[17] P.-L. Manteaux, U. Vimont, C. Wojtan, D. Rohmer, and M.-P. Cani. Space-time sculpting of liquid animation. In Proceedings of the 9th International Conference on Motion in Games, pp. 61-71, 2016.

[18] A. McNamara, A. Treuille, Z. Popović, and J. Stam. Fluid control using the adjoint method. ACM Transactions on Graphics, 23(3):449, 2004.

[19] M. B. Nielsen and R. Bridson. Guide shapes for high resolution naturalistic liquid simulation. ACM Trans. Graph. (SIGGRAPH), 30(4):1, 2011.

[20] M. B. Nielsen and B. B. Christensen. Improved variational guiding of smoke animations. In Computer Graphics Forum, vol. 29, pp. 705-712. Wiley Online Library, 2010.

[21] M. B. Nielsen, B. B. Christensen, N. B. Zafar, D. Roble, and K. Museth. Guiding of smoke animations through variational coupling of simulations at different resolutions. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 217-226, 2009.

[22] Z. Pan, J. Huang, Y. Tong, C. Zheng, and H. Bao. Interactive localized liquid motion editing. ACM Transactions on Graphics (TOG), 32(6):1-10, 2013.

[23] P. Pérez, M. Gangnet, and A. Blake. Poisson image editing. ACM Trans. Graph. (SIGGRAPH), 22(3):313-318, 2003.

[24] N. Rasmussen, D. Enright, D. Nguyen, S. Marino, N. Sumner, W. Geiger, S. Hoon, and R. Fedkiw. Directable photorealistic liquids. In Symposium on Computer Animation, pp. 193-202, 2004.

[25] K. Raveendran, N. Thuerey, C. Wojtan, and G. Turk. Controlling liquids using meshes. In Proceedings of the 11th ACM SIGGRAPH/Eurographics conference on Computer Animation, pp. 255-264, 2012.

[26] K. Raveendran, C. Wojtan, N. Thuerey, and G. Turk. Blending liquids. ACM Transactions on Graphics, 33(4):1-10, July 2014.

[27] A. Robinson-Mosher, R. E. English, and R. Fedkiw. Accurate tangential velocities for solid fluid coupling. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 227-236, 2009.

[28] S. Sato, Y. Dobashi, and T. Nishita. Editing fluid animation using flow interpolation. ACM Trans. Graph., 37(5):1-12, Sept. 2018.

[29] L. Shi and Y. Yu. Taming liquids for rapidly changing targets. In Symposium on Computer Animation, pp. 229-236, 2005.

[30] A. Söderström, M. Karlsson, and K. Museth. A PML-based nonreflective boundary for free surface fluid animation. ACM Transactions on Graphics (TOG), 29(5):1-17, 2010.

[31] O. Stein, E. Grinspun, M. Wardetzky, and A. Jacobson. Natural boundary conditions for smoothing in geometry processing. ACM Transactions on Graphics (TOG), 37(2):1-13, 2018.

[32] A. Stomakhin and A. Selle. Fluxed animated boundary method. ACM Transactions on Graphics (TOG), 36(4):1-8, 2017.

[33] N. Thuerey. Interpolations of smoke and liquid simulations. ACM Trans. Graph., 36(1):1-16, Sept. 2016.

[34] N. Thuerey, R. Keiser, M. Pauly, and U. Rüde. Detail-preserving fluid control. In Symposium on Computer Animation, pp. 7-12, 2006.

[35] A. Treuille, A. McNamara, Z. Popović, and J. Stam. Keyframe control of smoke simulations. ACM Transactions on Graphics, 22(3):716, 2003.

[36] M. Wiebe and B. Houston. The Tar Monster: Creating a character with fluid simulation. In SIGGRAPH Sketches, 2004.

[37] M. Wrenninge and D. Roble. Fluid simulation interaction techniques. In SIGGRAPH Sketches, p. 1. ACM Press, New York, New York, USA, 2003.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/C7nHFPmJbTX/Initial_manuscript_tex/Initial_manuscript.tex
ADDED

§ DIVERGENCE-FREE COPY-PASTE FOR FLUID ANIMATION USING STOKES INTERPOLATION

Category: Research

§ ABSTRACT

Traditional grid-based fluid simulation is often difficult to control and costly to perform. Therefore, the ability to reuse previously computed simulation data is a tantalizing idea that would have significant benefits to artists and end-users of fluid animation tools. We introduce a remarkably simple yet effective copy-and-paste methodology for fluid animation that allows direct reuse of existing simulation data by smoothly combining two existing simulation data sets. The method makes use of a steady Stokes solver to determine an appropriate transition velocity field across a blend region between an existing simulated outer target domain and a region of flow copied from a source simulation. While prior work suffers from nonzero divergence and associated compression artifacts, our approach always yields divergence-free velocity fields, yet also ensures an optimally smooth blend between the two input flows.

§ 1 INTRODUCTION

The ubiquitous copy-paste metaphor from text and image processing tools is popular because it is conceptually simple and significantly reduces the need for redundant effort on the part of the user. A copy-paste tool for fluid simulation could offer similar benefits while reducing the total computational effort expended to achieve a desired result through the reuse of existing simulation data. This paper proposes exactly such a scheme.

The control of fluids has long been a subject of interest in computer animation: typical strategies that have been explored include space-time optimization (e.g., $\left\lbrack {{18},{35}}\right\rbrack$ ), space-time interpolation (e.g., $\left\lbrack {{26},{33}}\right\rbrack$ ), and approaches that involve the application of some combination of user-designed forces, constraints, or boundary conditions (e.g., $\left\lbrack {{19},{24},{32},{34},{36},{37}}\right\rbrack$ ). Because the last of these families is typically the least expensive and offers the most direct control, it has generally been the most effective and widely used in practice. Our method also falls into this category.

Inspired by Poisson image editing [23], recent work by Sato et al. [28] hints at the potential power of a copy-paste metaphor for fluids. Unfortunately, their approach suffers from problematic nonzero divergence artifacts at the boundary of pasted regions, which depend heavily on the choice of input fields. We therefore introduce a new approach that provides natural blends between source and target regions yet is relatively simple to set up, requires solving only a standard Stokes problem over a narrow blend region at each time step, and always produces divergence-free vector fields.

§ 2 RELATED WORK

§ 2.1 CONTROLLING FLUID ANIMATION

Artistic control of fluid flows has been a subject of interest from the earliest days of three-dimensional fluid animation research. Foster and Metaxas [11] proposed a variety of basic control mechanisms through imposition of initial or boundary values on quantities such as velocity, pressure, and surface tension. A wide range of subsequent methods have been proposed to enable various control methods, which we review below.

One quite common approach is to apply forces or optimization approaches to encourage a simulation to hit particular target keyframes for the density or shape of smoke or liquid $\left\lbrack {8,{18},{22},{29},{34},{35}}\right\rbrack$ . Another strategy makes use of multiple scales or frequencies, using a precomputed or procedural flow to describe the low-resolution motion and allowing a new physical simulation to add in high-frequency details $\left\lbrack {{10},{19} - {21},{34}}\right\rbrack$ .

Other approaches aim to work more directly on the fluid geometry, rather than the velocity field. For example, new editing metaphors have been proposed, such as space-time fluid sculpting [17] and fluid-carving [9]; the latter is conceptually similar to seam-carving from image/video editing [1]. Another direct geometric approach seeks to directly interpolate the global fluid shape and motion [26, 33]. These strategies generally require the overall simulation to already be relatively close to the desired target behavior.

Approaches that rely on the direct application of velocity boundary conditions on the fluid flow are similar to ours in some respects $\left\lbrack {{24},{25},{32},{36}}\right\rbrack$ . Often these have been used to cause liquid to follow a target motion or character, with varying degrees of "looseness" allowed in order to retain a fluid-like effect. They have not been used to combine existing simulations.

Another useful task in liquid animation is to insert a localized 3D dynamic liquid simulation, such as the region around a ship or swimming character, into a much larger surrounding procedural ocean or similar model. This has been achieved through the use of non-reflecting boundary conditions $\left\lbrack {6,{30}}\right\rbrack$ . These approaches focus on simulating the interior surface region of a liquid and smoothly damping out the surface flow to match the prescribed exterior model. This contrasts with our copy and paste problem, where both the interior and exterior are presimulated flows that must be combined together.

The closest method to ours is of course that of Sato et al. [28], who first proposed the copy-paste fluid problem. Their work also begins from the Dirichlet energy; however, through an ad hoc substitution of the input field's curl, they arrive at a new energy that minimizes the squared divergence of the velocity field plus the difference of the curl of the output and input vector fields. This formulation penalizes divergence rather than constraining it to be zero, and this likely accounts for the presence of undesired erroneous divergence in their results. By contrast, our approach is always strictly divergence-free.

§ 2.2 STOKES FLOW IN COMPUTER GRAPHICS

Steady (time-independent) Stokes flow is an approximation that is appropriate when momentum is effectively negligible, as indicated by a low Reynolds number. In computer graphics this approximation has been used in the context of paint simulation [4] and for design of fluidic devices [7]. The unsteady (time-dependent) variant has also been used as a substep within a more general Navier-Stokes simulator [16] for Newtonian fluids. Closest to our work is that of Bhattacharya et al. [5], who use the Stokes equations to fill a volume with smooth velocities, as an alternative to simple velocity extrapolation or potential flow approximations; they then use the generated field as a force to influence a liquid simulation. Their approach is shown to maintain rotational motion better than existing alternative interpolants. We expand on this idea to address the copy-paste problem.

§ 3 METHOD

Our method takes as input the per-timestep vector field data for two complete grid-based incompressible fluid simulations (denoted source and target), along with geometry information dividing the domain of the final animation into an inner source region ${\Omega }_{s}$ , an outer target region ${\Omega }_{t}$ , and a blending region ${\Omega }_{b}$ . In the final time-varying vector field to be assembled, the data in ${\Omega }_{s}$ and ${\Omega }_{t}$ are simply replayed from the inputs; the central task we must solve is to generate a "natural" vector field for the blend region ${\Omega }_{b}$ in between for all time steps.
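
As a small illustrative sketch of this setup (not the paper's code; the grid size, rectangle corners, pad width, and function name below are all hypothetical), the three regions can be encoded as cell masks on a grid, assuming axis-aligned rectangles as in our implementation:

```python
import numpy as np

def region_masks(shape, paste_lo, paste_hi, pad):
    """Split a grid of cells into source, blend, and target masks.

    `paste_lo`/`paste_hi` give the (row, col) corners of the inner source
    rectangle; the blend region is a band of `pad` cells around it.
    Assumes the padded rectangle stays inside the grid.
    """
    src = np.zeros(shape, dtype=bool)
    src[paste_lo[0]:paste_hi[0], paste_lo[1]:paste_hi[1]] = True
    outer = np.zeros(shape, dtype=bool)
    outer[paste_lo[0] - pad:paste_hi[0] + pad,
          paste_lo[1] - pad:paste_hi[1] + pad] = True
    blend = outer & ~src    # band between the two rectangles
    target = ~outer         # everything else replays the target simulation
    return src, blend, target

# A 64x64 grid with a 16x16 paste region and a 6-cell blend band.
src, blend, target = region_masks((64, 64), (24, 24), (40, 40), 6)
```

Every cell then belongs to exactly one of the three regions, which is all the geometric bookkeeping the per-timestep solve needs.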

We would like the vector field we generate in the blend region to possess a few key characteristics. First, the velocities at the boundaries of the blend region (on either ${\Gamma }_{t} = {\Omega }_{t} \cap {\Omega }_{b} = \partial {\Omega }_{t}$ or ${\Gamma }_{s} = {\Omega }_{s} \cap {\Omega }_{b} = \partial {\Omega }_{s}$ ) should exactly match the velocities of the corresponding input field - this is essentially the familiar no-slip boundary condition often used for kinematic solids or prescribed inflows/outflows in Newtonian fluids. Second, the vector field should be relatively smooth, since our objective is essentially a special kind of velocity interpolant. With only these two stipulations, a very natural choice is harmonic interpolation [15]. As suggested by Sato et al. [28], this can be expressed as minimizing the Dirichlet energy:

$$
\mathop{\operatorname{argmin}}\limits_{\mathbf{u}_b} \iiint_{\Omega_b} \left\| \nabla \mathbf{u}_b \right\|^2 \tag{1}
$$
$$
\text{subject to } \mathbf{u}_b = \mathbf{u}_s \text{ on } \Gamma_s,
$$
$$
\mathbf{u}_b = \mathbf{u}_t \text{ on } \Gamma_t.
$$
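
For intuition, componentwise harmonic interpolation is easy to prototype; the sketch below (illustrative only, not the paper's solver) runs Jacobi iteration for the Laplace equation on a toy blend band between an outer frame fixed at one value and an inner block fixed at another. It demonstrates the smooth bridging behavior, while saying nothing yet about divergence:

```python
import numpy as np

def harmonic_fill(boundary, free, iters=2000):
    """Fill `free` cells with the harmonic interpolant of the fixed
    values elsewhere (componentwise Laplace equation, Jacobi iteration)."""
    u = boundary.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[free] = avg[free]          # only non-boundary cells are updated
    return u

n = 17
u = np.zeros((n, n))
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 1.0    # "target" boundary value
inner = np.zeros((n, n), dtype=bool)
inner[6:11, 6:11] = True
u[inner] = -1.0                                   # "source" boundary value
free = np.ones((n, n), dtype=bool)
free[0, :] = free[-1, :] = free[:, 0] = free[:, -1] = False
free &= ~inner
u = harmonic_fill(u, free)
```

The converged field varies smoothly between -1 and 1 and satisfies the discrete maximum principle, as expected of a harmonic interpolant.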

The minimizer satisfies $\nabla \cdot \nabla \mathbf{u}_b = 0$, i.e., a componentwise Laplace equation on the velocity. (From here on we diverge from Sato et al., who proceed instead to manipulate the Dirichlet energy into a form that yields a vector Poisson equation.)

The Dirichlet energy alone is clearly insufficient, because it will prioritize smoothness at the cost of introducing divergence. Because we have assumed an incompressible flow model for our input (and desired output), the velocities in the blend region should not create or destroy material. A natural solution would be to simply apply a standard pressure projection as a post-process to convert the harmonic velocity field above to be incompressible. Unfortunately, this can cause the velocity field to deviate significantly from the harmonic input. Moreover, as we show in Section 5, pressure projection enforces only a free-slip condition (no-normal-flow), which allows objectionable tangential velocity discontinuities to be introduced at the blend region's boundaries.

We instead simultaneously combine the divergence-free stipulation with harmonic interpolation through the following formulation:

$$
\mathop{\operatorname{argmin}}\limits_{\mathbf{u}_b} \iiint_{\Omega_b} \left\| \nabla \mathbf{u}_b \right\|^2 \tag{2}
$$
$$
\text{subject to } \nabla \cdot \mathbf{u}_b = 0 \text{ on } \Omega_b,
$$
$$
\mathbf{u}_b = \mathbf{u}_s \text{ on } \Gamma_s,
$$
$$
\mathbf{u}_b = \mathbf{u}_t \text{ on } \Gamma_t.
$$

This optimization problem provides the smoothest velocity field that interpolates the boundary data while preserving incompressibility. If we enforce the constraint with a Lagrange multiplier $p$ , the optimality conditions turn out to yield exactly the (constant viscosity) steady Stokes equations,

$$
\nabla \cdot \nabla \mathbf{u}_b - \nabla p = 0, \tag{3}
$$
$$
\nabla \cdot \mathbf{u}_b = 0, \tag{4}
$$
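
Spelling out the intermediate step (a standard first-variation argument, sketched briefly; sign and scaling conventions vary): introducing the multiplier field $p$ gives the Lagrangian

$$
\mathcal{L}(\mathbf{u}_b, p) = \iiint_{\Omega_b} \left( \left\| \nabla \mathbf{u}_b \right\|^2 - 2\,p\,\nabla \cdot \mathbf{u}_b \right) dV.
$$

Setting the first variation with respect to $\mathbf{u}_b$ to zero (using $\delta \mathbf{u}_b = 0$ on $\Gamma_s \cup \Gamma_t$) and integrating by parts yields Equation 3, while stationarity with respect to $p$ recovers the divergence constraint of Equation 4.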

consistent with Helmholtz's minimum dissipation theorem [2]. We therefore refer to this construction as Stokes interpolation.

As noted in Section 2, we are not the first to suggest using the Stokes equations as a fluid interpolant: Bhattacharya et al. [5] first proposed steady state Stokes flow interpolation. However, our derivation and discussion above provides additional justification and insight into the variational nature of this approach. More importantly, Bhattacharya et al. did not consider the fluid cut-and-paste problem that we address in the current work.

A minor issue is that, for there to exist a valid solution, the boundary conditions must satisfy a compatibility condition; that is, the integrated flux across the two boundaries must be consistent with the condition of incompressibility on the blend region's interior:

$$
\iiint_{\Omega_b} \nabla \cdot \mathbf{u}^{n+1}\,dV = 0 = \iint_{\Gamma_s} \mathbf{u}_s^{n+1} \cdot \mathbf{n}\,dA + \iint_{\Gamma_t} \mathbf{u}_t^{n+1} \cdot \mathbf{n}\,dA \tag{5}
$$

Fortunately, since the input vector fields both come from simulations that are themselves incompressible, the divergence theorem ensures that both the source copied patch and the target region to be pasted over have zero net flux across their respective boundaries - hence compatibility is guaranteed.
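
This compatibility argument can be checked numerically. The sketch below (illustrative only, not the paper's code) builds a discretely divergence-free MAC-grid field from a random streamfunction and verifies the discrete divergence theorem: the net flux through the border of an arbitrary cell-aligned "blend" rectangle vanishes:

```python
import numpy as np

# Divergence-free 2D field from a streamfunction psi: u = dpsi/dy, v = -dpsi/dx.
rng = np.random.default_rng(0)
ny, nx = 12, 12
psi = rng.standard_normal((ny + 1, nx + 1))   # streamfunction at grid nodes
u = psi[1:, :] - psi[:-1, :]                  # x-velocity on vertical faces, shape (ny, nx+1)
v = -(psi[:, 1:] - psi[:, :-1])               # y-velocity on horizontal faces, shape (ny+1, nx)

# Per-cell divergence vanishes identically for a streamfunction field.
div = (u[:, 1:] - u[:, :-1]) + (v[1:, :] - v[:-1, :])

# Net outward flux through the border of an interior rectangle of cells
# (rows j0..j1-1, cols i0..i1-1): right + top fluxes minus left + bottom.
j0, j1, i0, i1 = 3, 9, 2, 10
flux = (u[j0:j1, i1].sum() - u[j0:j1, i0].sum()
        + v[j1, i0:i1].sum() - v[j0, i0:i1].sum())
```

Since the net flux equals the sum of the cell divergences inside the rectangle, it is zero (to rounding) for any choice of the rectangle, mirroring the compatibility guarantee above.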

We arrive at the following algorithm. For each timestep, extract the boundary velocities from the input source and target simulations. Perform a steady Stokes solve on ${\Omega }_{b}$ as we have described to produce ${\mathbf{u}}_{b}^{n + 1}$ . Finally, directly fill in the inner and outer ${\Omega }_{s}$ and ${\Omega }_{t}$ regions with velocity from the input data ${\mathbf{u}}_{s}^{n + 1}$ and ${\mathbf{u}}_{t}^{n + 1}$ , respectively. The resulting time-varying vector field is divergence-free and offers an attractively smooth blend between source and target flows.

Note that, since the combined vector field differs significantly from both its inputs, the flow of any passive material (such as smoke density or tracer particles) must be recomputed from scratch by advection through the new field in order to yield a consistent visual result. This can usually be done efficiently and in parallel, since each (passive) particle's motion affects no other particles.
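
A sketch of that re-advection step (hypothetical helper names; nearest-cell sampling stands in for a real interpolant such as bilinear, and a midpoint/RK2 step stands in for whatever integrator the simulator uses). Each particle is updated independently, which is what makes the step easy to parallelize:

```python
import numpy as np

def sample(field_u, field_v, p):
    """Nearest-cell velocity lookup at particle positions p (x, y)."""
    j = np.clip(p[:, 1].astype(int), 0, field_u.shape[0] - 1)
    i = np.clip(p[:, 0].astype(int), 0, field_u.shape[1] - 1)
    return np.stack([field_u[j, i], field_v[j, i]], axis=1)

def advect(p, field_u, field_v, dt):
    """One midpoint (RK2) advection step for all particles at once."""
    mid = p + 0.5 * dt * sample(field_u, field_v, p)
    return p + dt * sample(field_u, field_v, mid)

# Sanity check: in a uniform rightward flow, particles just translate.
U = np.ones((8, 8))
V = np.zeros((8, 8))
pts = np.array([[1.0, 1.0], [2.5, 4.0]])
out = advect(pts, U, V, dt=0.5)
```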

§ 4 IMPLEMENTATION

While our concept is very general, our implementation assumes that all simulations are arranged on a standard staggered ("MAC") grid [14]. This provides a natural infrastructure on which to discretize the Stokes equations on the blend region, via centered finite differences. The boundary between the blend region and the surrounding source and target flow fields is assumed to lie on axis-aligned grid faces between cells (although this could potentially be generalized to irregular cut-cells if desired $\left\lbrack {3,{16}}\right\rbrack$ ). Where needed to ensure precise no-slip velocity conditions at the exact face midpoints on voxelized boundaries of the blend region, we make use of the usual ghost fluid method [12] for the Laplace operator in (3). To solve the Stokes linear system at each step, we use the Least Squares Conjugate Gradient solver provided by the Eigen library [13], with a tolerance of $5 \times 10^{-5}$. (Other options for solving indefinite systems, such as SYMQMR or MINRES, would also be appropriate [27].)
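
The reason such solvers are needed is that the discrete Stokes system couples velocities and pressures in a symmetric indefinite saddle-point ("KKT") matrix, so plain Conjugate Gradient's SPD assumption fails. A tiny dense stand-in (illustrative only, not our Eigen-based solver; the blocks are random stand-ins for the discrete Laplacian and gradient) shows both the indefiniteness and a least-squares solve of the kind LSCG performs iteratively:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3                                      # velocity and pressure DOF counts
L = rng.standard_normal((n, n))
L = L @ L.T + n * np.eye(n)                      # SPD stand-in for the Laplacian block
G = rng.standard_normal((n, m))                  # stand-in for the gradient block
K = np.block([[L, G], [G.T, np.zeros((m, m))]])  # symmetric indefinite KKT matrix

eigs = np.linalg.eigvalsh(K)
indefinite = eigs.min() < 0 < eigs.max()         # eigenvalues of both signs

rhs = rng.standard_normal(n + m)
x, *_ = np.linalg.lstsq(K, rhs, rcond=None)      # direct least-squares solve
residual = np.linalg.norm(K @ x - rhs)           # tiny: K is invertible here
```

With a full-rank gradient block, the system has $n$ positive and $m$ negative eigenvalues, which is exactly the structure that motivates LSCG/MINRES/SYMQMR over CG.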

§ 5 RESULTS

We now consider some illustrative scenarios to demonstrate the behavior of our method. Most of our figures make use of passive marker particles with alternating colors in initially horizontal rows to better highlight the developing flow structure, but we strongly encourage the reader to review our supplemental video to assess the motion more fully.

Our first scenario (Figure 1) consists of a static solid disk in a vertical wind-tunnel scenario, with inflow at the top and outflow at the bottom (particles leaving the bottom boundary re-enter at the top). We wish to paste the disk and its surroundings from the source simulation into an even simpler empty vertically translating wind-tunnel target simulation. This yields a smooth divergence-free combination of the two flows, where the flow outside of the blend region is completely undisturbed. Our result necessarily differs from the source animation, since in the source animation the presence of the disk globally disturbed the flow; our Stokes interpolation approach must therefore deform the flow more strongly in the blend region to compensate, yet we still achieve a visually plausible flow (Figure 2).

Figure 1: Basic Setup: Our simplest scenario involves copying the flow around a disk from its source simulation (left, with region to be copied surrounded in blue) into an obstacle-free target simulation (middle). The result of our method is a new smoothly combined flow (right). The blue lines denote the inner and outer borders of the blend region over which we apply our Stokes interpolation. (The same frame of animation is shown in all three images.)

Figure 2: Merged Flow Over Time: A few frames of the edited animation result of our approach based on the scenario described in Figure 1.

Next, we consider our method in comparison to two other obvious alternatives, as discussed in Section 3: componentwise harmonic interpolation, and post-projected harmonic interpolation. Pure harmonic interpolation seems effective at first glance, but unfortunately suffers from non-negligible divergence, as shown in Figure 3.

Figure 3: Harmonic Interpolation: Harmonic interpolation of the velocity across the blend region yields a somewhat plausible flow (left), but suffers from large divergence (right). Red indicates positive divergence, blue indicates negative divergence. The divergence gradually induces greater clumping and spreading of the particles, as seen in the middle of the left image.

A possible improvement is to post-process the harmonic result with a projection to a divergence-free state. Unfortunately, while this successfully removes divergence, the natural free-slip conditions of the pressure projection reintroduce tangential slip along the borders of the blend region leading to objectionable motion artifacts in the flow. In the wind-tunnel scenario the vertical component of velocity suffers from discontinuities at blend region borders, leading to visible grid-aligned shearing of the flow.

Figure 4: Projected Harmonic Interpolation: Our Stokes interpolation approach (left) yields continuous velocity fields. However, under projected harmonic interpolation (right), undesirable free-slip conditions introduce tangential discontinuities in the flow velocity at blend region borders, seen here as positional discontinuities in the rows of colored particles at far left and right.
To further stress-test our method, we consider some challenging scenarios analogous to those suggested by Sato et al. [28]. We combine flows in which the source and target differ in direction or speed. In Sato's approach, both larger speed and angle deviations lead to more severe failures of the divergence-free condition (we refer the reader to the secondary supplemental video accompanying that paper). Figure 5 shows the same test as in our earlier examples, except that we have changed the ambient flow direction of the source simulation to have steadily increasing angles, including an example where the flow direction is completely reversed. While this leads to an increasingly unnatural look, the resulting flow field is still continuous, smooth on the blend region interior, and divergence-free, independent of this artistic decision. Similarly, Figure 6 performs a test in which the speed of the source (inner) simulation is slower or faster than the target (outer) simulation. Once again, more severe speed differences lead to more unusual motions in the blend region in order to compensate. For example, when the speed ratio between source and target is 3, more elaborate interior circulation of the flow in the blend region becomes necessary to satisfy the incompressibility condition. However, because the source and target are divergence-free and therefore provide compatible boundary conditions, the result is still correctly divergence-free.

A further point to note about these stress tests is that the more severe cases induce strong vortices that cause gaps to open in the flow. However, this is not due to divergence; rather, typical small numerical errors in particle trajectories due to interpolation and advection cause the particles to spread out from these points.
Lastly, we consider a few slightly more complex scenarios. Figure 7 shows our basic scenario again but using a rectangular obstacle instead of a disk. Figure 8 shows a scenario in which the user replaces a rectangular obstacle with a disk. Finally, in Figure 9, we paste a disk obstacle into a scene containing three rectangles, where the disk replaces the middle rectangle. Because of the additional obstacles, the flow structure is more complex. In this example, we tightened the blend region to fit more closely around the paste region. In all cases a plausible flow is constructed.
Notably, because a disk and a rectangle lead to different downstream motions in their respective wakes, a close inspection of the motion in these regions of our results reveals slightly unnatural motions, as our interpolant diligently tries to transition between two different flow structures. Fortunately, such effects are fairly subtle unless one is specifically looking for them. Ultimately, Stokes interpolation provides the best available solution under the stated constraints (smoothness, incompressibility, interpolation of boundary values), and it is up to the user to apply their judgment regarding whether a proposed flow edit achieves the desired effect.
Figure 5: Varying Angles: In these scenes, the outer flow is vertical while the pasted inner flow from the source simulation has flow direction with a relative angle of: $0^{\circ}$ (top-left), $45^{\circ}$ (top-right), $90^{\circ}$ (bottom-left), and $180^{\circ}$ (bottom-right). In all cases, the flow remains divergence-free.

## 6 CONCLUSIONS AND FUTURE WORK

We have presented an approach to the fluid copy-paste problem that guarantees smooth and divergence-free fields by solving a steady Stokes problem at each time step to fill in a blend region between the source and target flow regions.
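
In symbols (our notation, a sketch of the boundary-value problem implied by the description above, with viscosity and scaling constants omitted), the per-step fill-in on the blend region $\Omega$ is:

$$-\Delta \mathbf{u} + \nabla p = \mathbf{0}, \qquad \nabla \cdot \mathbf{u} = 0 \quad \text{in } \Omega,$$

$$\mathbf{u} = \mathbf{u}_{\mathrm{src}} \;\text{or}\; \mathbf{u}_{\mathrm{tgt}} \quad \text{on } \partial\Omega,$$

where the Dirichlet data comes from the source flow on the inner boundary and the target flow on the outer boundary; because both flows are divergence-free, this boundary data is compatible with the incompressibility constraint.
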
Our work suggests several directions to explore in future work. First, for simplicity we assumed axis-aligned rectangles for the copy-paste region, similar to basic region-selection in image editing, but it could be useful to extend our approach to more general (lasso-type) selection regions, either in a voxelized fashion or using irregular cut-cells [3, 16] for smoother shapes. This would add greater artistic flexibility, and may render the blend-region borders less apparent.

The mathematics underlying our approach extends naturally to 3D, although providing a manageable user interface for selecting and placing time-dependent volumetric flow regions becomes more challenging. This would be interesting to explore.
Another intriguing question is whether even better behavior at blend region borders could be achieved by replacing our Dirichlet energy with a higher-order energy. At present, the no-slip condition enforces matching of the velocity value at the boundaries, but not its gradient. Minimizing instead a squared Laplacian energy (see e.g., [31]), still subject to incompressibility, would lead to a bilaplacian operator on the velocity. This is conceptually similar to replacing linear interpolation with cubic interpolation. While it would lead to a more challenging linear system to solve (in terms of conditioning), it may be able to offer a value- and gradient-matched divergence-free blend field.
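
As a sketch in our own notation (the precise formulation is our assumption, not taken from the text): the Dirichlet energy $\frac{1}{2}\int_\Omega \|\nabla\mathbf{u}\|^2 \, dV$ minimized subject to $\nabla\cdot\mathbf{u}=0$ has Euler-Lagrange equations $-\Delta\mathbf{u}+\nabla p=\mathbf{0}$ (steady Stokes), whereas the squared-Laplacian alternative

$$\min_{\mathbf{u}} \; \frac{1}{2}\int_\Omega \|\Delta \mathbf{u}\|^2 \, dV \quad \text{s.t.} \quad \nabla \cdot \mathbf{u} = 0$$

yields the fourth-order system $\Delta^2\mathbf{u} + \nabla p = \mathbf{0}$, $\nabla\cdot\mathbf{u}=0$, whose additional boundary conditions leave room to prescribe the velocity gradient at $\partial\Omega$ as well as the value.
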
Finally, a challenging unanswered question in fluid animation more broadly is what makes a fluid motion perceptually "realistic" from a human perspective, and how much deviation from physical accuracy can safely be tolerated in visual applications. A metric

Figure 6: Varying Speeds: In these scenes, the outer flow has a fixed speed while the pasted inner flow from the source simulation has a speed ratio of: 1.0 (top-left), 0.75 (top-right), 1.5 (bottom-left), and 3.0 (bottom-right). In all cases, the flow remains divergence-free.
Figure 7: Pasting a rectangle into a flow. From left to right: source scene, target scene, result.
of this kind could allow one to quantify more concretely whether a proposed flow edit is successful or harmful.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/2RMSIdhlfH9/Initial_manuscript_md/Initial_manuscript.md
## Slacktivity: Scaling Slack for Large Organizations

Figure 1: Slacktivity's galaxy view. Each circle in the visualization is a Slack channel. (Note: division names have been changed to remove identifying information, and select channel names have been replaced with <fruit|vegetable>.)

## Abstract
Group chat programs, such as Slack, are a promising and increasingly popular tool for improving communication among collections of people. However, it is unclear how the current design of group chat applications scales to support large and distributed organizations. We present a case study of a company-wide Slack installation in a large organization (>10,000 employees), incorporating data from semi-structured interviews, exploratory use, and analysis of the data from the Slack workspace itself. Our case study reveals emergent behaviour, issues with exploring the content of such a large Slack workspace, and the inability to keep up with news from across the organization. To address these issues, we designed Slacktivity, a novel visualization system that augments the use of Slack in large organizations, and we demonstrate how Slacktivity can be used to overcome many of the challenges found when using Slack at scale.

Keywords: Group chat, visualization.
## 1 INTRODUCTION
As organizations become larger and more distributed, communication becomes increasingly difficult. Even when workers are co-located, technologies like email and instant messaging are frequently used as an alternative to direct face-to-face communication. Email is often thought of as asynchronous and useful for long-term searching, while instant messaging is useful for synchronous communication and quicker conversations. More recently, group chat systems (such as Slack [1] and Microsoft Teams [2]) have become popular, combining benefits of both email and instant messaging, while offering the potential for increased collaboration and coordination.
When used over a period of time, a group chat workspace has the potential to be a useful "knowledge base" for a company, storing a history of communication about a given topic. However, the design of current group chat systems is optimized for near real-time communication, making it difficult to derive insights from this wealth of historical information [3]. This problem is exacerbated for large organizations using Slack, where there may be thousands of channels and millions of messages. Additionally, with so many channels and messages, it is not feasible to keep up with all activity across all channels to get a sense of what is going on throughout the organization, and the task of figuring out which channels to join or post questions to becomes more difficult.

In this paper we present a case study of the internal usage of Slack at BigCorp,$^{1}$ a ten-thousand-person software company which has been using a unified Slack workspace for the past 3+ years, with over 15,000 channels created and 65 million messages sent over that time. Our case study combines exploratory use of the Slack workspace, formal and informal interviews, and data analysis of the use of Slack by BigCorp. Using this data, we identified the strategies employees use to cope with Slack at scale, as well as the pain points of using Slack, such as the inability to find historical information and the need to keep up with the organization better.

Using the findings from our case study we present Slacktivity, a tool to address the limitations of Slack when being used within a large organization. Slacktivity gives an overview of the channels across the entire organization with a cluster view, and also allows for detailed exploration of the entire history of a channel.
This paper makes two main contributions: a case study to better understand how group chat is used in large organizations, and a novel interactive visualization system to augment the use of Slack in a workplace.
---
$^{1}$ Name anonymised for submission.

---
## 2 Background and Related Work
### 2.1 Introduction to Group Chat
Group chat combines the functionality of instant messaging and Internet Relay Chat (IRC) with rich communication. The most basic form of communication in group chat is a message. Messages can contain text, images, or other attachments, and can reply to one another, forming a thread. However, threads in group chat are typically limited to one level of replies. In addition to replying to a message, a user can also react. Reactions are small emoji-like glyphs that are counted just below the message. Responding with a reaction sends no notification, making this a non-intrusive form of communication. Messages can be sent to three different locations: direct messages, private channels, and public channels. Direct messages can be sent to groups of up to 8 people. Channels, on the other hand, can hold any number of people and are identified by a channel name (e.g., #general). Channels can be either public, meaning anybody can join, or private, which requires an invitation. Finally, all channels and direct messages are part of a workspace, the highest level of the hierarchy in group chat.
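
This hierarchy can be sketched as a small data model (our own illustrative names, not Slack's actual API):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Reaction:
    emoji: str                 # e.g. "+1"; reacting sends no notification
    count: int = 1

@dataclass
class Message:
    author: str
    text: str
    parent: Optional["Message"] = None            # threads are one level deep
    reactions: List[Reaction] = field(default_factory=list)

@dataclass
class Channel:
    name: str                  # e.g. "#general"
    public: bool = True        # private channels require an invitation
    messages: List[Message] = field(default_factory=list)

@dataclass
class Workspace:               # the highest level of the hierarchy
    name: str
    channels: List[Channel] = field(default_factory=list)

# Build a tiny thread with a reaction.
root = Message("alice", "Anyone tried the new build?")
reply = Message("bob", "Works for me.", parent=root,
                reactions=[Reaction("+1", count=3)])
general = Channel("#general", messages=[root, reply])
ws = Workspace("BigCorp", channels=[general])
```
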
Our work explores the use of group chat in the workplace, and the use of visualization to improve its usage. This work focuses on, and refers to, Slack as the group-chat system, but the concepts and ideas apply equally to other systems (such as Microsoft Teams [2]).

### 2.2 Communication in the Workplace
Technology plays a key role in supporting communication in the workplace, but its use is varied and complicated. When face-to-face interaction is needed but not possible, video conference software like Skype or Google Hangouts is used. However, most communication does not require such an expressive communication style and can instead be textual. Recently, instant messaging has been a popular communication tool in the workplace [4], [5]. A 2005 case study [5] of a large tech company showed that 38% of communication took place over instant messaging, nearly equal to the 39% over email. Only 23% of communication was face-to-face or by phone. Email is also used extensively, and many people view their email as more than a communication tool: as a "personal information store" [6].

Another aspect of communication in the workplace is social media. Traditional non-workplace social media platforms like Facebook and Twitter have been demonstrated to be useful to organizations through the creation of weak ties [7]. Furthermore, some social networks have been designed specifically for use in the workplace. WaterCooler [8] was designed for use at HP to aggregate various content from across the organization. Another, more commercialised, approach to social networking is the Yammer tool, which aims to bring a social network more comparable to Facebook into the workplace.

In recent years group chat has been used increasingly in the workplace, beginning with the inception of Slack in 2013, which had over 10 million daily active users by 2019 [9]. Despite the rapid adoption of group chat in the workplace, Zhang and Cranshaw [10] identify several issues associated with group chat. They report that employees struggle to find old information; chat history is overwhelming to newcomers; and employees fail to keep up with multiple channels. Unlike email, it is unclear if group chat is also used as a "personal information store".

Communication tools have the opportunity to be a critical part of an organization's knowledge management strategy as critical discussions often occur over email, instant messaging, or group chat. Research has shown how an organization can use instant messaging [11], [12] and email [13] as a part of their knowledge management strategy, but it is unknown how group chat can fit in.
### 2.3 Visualization of Conversation
Conversation exists in many mediums, such as forums, emails and instant messaging, and the academic literature has explored many different ways to visualize it, either during the creation of the conversation or after the conversation has occurred. Pioneering work began with the visualisation of forums, with a special focus on Usenet. Smith and Fiore [14] visualized Usenet threads by displaying the structure of each thread as a tree augmented with information about the users involved in the thread. Turner et al. [15] visualize Usenet hierarchically to make recommendations on how to cultivate and manage Usenet groups. Wikum [16] uses recursive summarisation and visualisation to overcome the scale of online discussions. Other techniques have explored visualizing conversations by taking advantage of threads, including Thread Arcs [17], ThemeRiver [18], tldr [19], iForum [20], and Newman's work [21].

Venolia and Neustaedter [22] visualized email by creating conversations from the sequence and reply relationships, building on the idea of threading. Conversation thumbnails [23] visualize each email in a conversation as a rectangle, displayed in the order they were received and representing the complexity of the conversation.
Bergstrom and Karahalios [24] designed Conversation Clusters as a method of archiving instant message conversations in a manner that makes it easy to retrieve a desired conversation. Conversation Clusters visualize groups of salient words using colour. However, most work in instant messaging has looked at augmenting communication by changing the interface users interact with, such as giving people movable circles for their avatars [25], [26] or using a visualization to foster positive behavioural changes [27]. Our work differs from prior work visualising conversation in that we augment each mark with more information from the chat and also break the sequential, linear relationship with time.

### 2.4 Understanding Group Chat
Efforts to understand group chat better are relatively rare. Our work is most related to T-Cal [28], which visualizes Slack conversations using a calendar-based visualization and allows for in-depth exploration using its thread-pulse design. However, T-Cal was designed to address the needs of Slack workspaces for massive open online courses (MOOCs), which are only used for a set amount of time. Another approach was taken by Zhang and Cranshaw [10], who aimed to improve sensemaking of group chat by creating a Slack bot, Tilda, that assisted in collaborative tagging of conversations. Both our case study of Slack and Slacktivity build on and draw inspiration from T-Cal and Tilda; however, our focus is discovering and addressing the particular problems faced when deploying Slack at a large scale.

## 3 CASE STUDY: SLACK AT BIGCORP
To get a better understanding of how Slack is used "at scale", we studied Slack usage at BigCorp - a multinational design software company with over 10,000 employees distributed at dozens of locations around the world. BigCorp has officially adopted and encouraged the use of Slack as a platform for group chat. Further, BigCorp has consolidated Slack activity into a single unified Slack workspace, rather than allowing teams to maintain their own individual Slack workspaces.
### 3.1 Methodology
Our case study incorporates data from several sources: aggregate analysis from an export of the entire (non-private) history of the Slack workspace, extensive exploratory analysis, an interview with the Director in charge of Slack at BigCorp, informal discussions with dozens of employees about their Slack usage, and formal interviews with 5 employees (3 heavy users of Slack, 1 occasional user, and 1 infrequent user, referred to as P1-P5).

Our aggregate data analysis uses all Slack messages sent on all internally-public channels (that is, channels that all employees of BigCorp can see). This includes all of the data about threads and reactions. Every public channel is included, along with its metadata such as description and creation time. We analyse this data using only simple summary statistics and visualisation. We also combined the Slack user profiles with data from HR sources for properties like job title and organizational division.
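
As an illustration, aggregate statistics like these can be computed directly from a standard Slack export, which contains a channels.json index plus one folder per channel of daily YYYY-MM-DD.json message files. The sketch below makes that assumption about the export layout; the helper name and filtering rule are ours, not the authors' analysis code.

```python
import json
from collections import Counter
from pathlib import Path

def count_messages(export_dir: str) -> Counter:
    """Count user-authored messages per public channel in a Slack export."""
    root = Path(export_dir)
    channels = json.loads((root / "channels.json").read_text())
    totals = Counter()
    for ch in channels:
        # One folder per channel, holding one JSON file per day of activity.
        for day_file in (root / ch["name"]).glob("*.json"):
            messages = json.loads(day_file.read_text())
            # Skip join/leave notices and other subtyped system messages.
            totals[ch["name"]] += sum(
                1 for m in messages
                if m.get("type") == "message" and "subtype" not in m
            )
    return totals
```

Per-channel creation times and membership counts (Figures 2-4) can be tallied the same way from the channel metadata in channels.json.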
We employed exploratory use of the Slack workspace for some of our results. This consisted primarily of the first author using the workspace and exploring channels while taking notes on their contents. For some results, like the types of channels, an open coding scheme was used. Results stemming from exploratory use are not intended to be generalizable or complete. Instead, they aim to demonstrate some of the ways Slack can be used at scale.

The formal semi-structured interviews were each 1 hour long. The interviews sought to answer three key questions: how do employees use Slack during the workday; how do employees find information on Slack; and how do employees keep up to date with what is happening at BigCorp (not necessarily using Slack)? We analysed these interviews by transcribing them and identifying interesting and recurring themes.

#### 3.1.1 Data Anonymization
Given the private nature of communication tools, we took several measures to preserve employee privacy. We only analysed "public" data that all employees at BigCorp have access to. We also did not analyse archived channels, because people may "archive" a channel in an attempt to "delete" it. Additionally, we have anonymized all employees in the paper and accompanying material by blurring profile images and replacing all names with generated pseudonyms. We carefully examined each message before allowing it to appear in the paper or accompanying video. Any time a channel name would be easily identifiable outside the company, we replaced it with a vegetable or fruit name and wrapped it in angle brackets (e.g., <eggplant>).

### 3.2 Origins of Slack at BigCorp
BigCorp has been running a company-wide Slack workspace since May 2016 (3.5 years at the time of data analysis). Before that date, there were over 50 fragmented Slack workspaces being maintained by individual teams. Responding to the grass-roots demand for Slack, BigCorp decided to officially support Slack, consolidating all activity into a single shared workspace and encouraging its use as an "approved" communication mechanism.

#### 3.2.1 Public by Default
The vice president at BigCorp who "sponsored" (paid for) the initial consolidation of Slack activity under a single Slack workspace agreed to do so under the condition that it would be run with a "public by default" policy; that is, that all channels would be set to "public", so they could be viewed and joined by everyone in the company, not just those in a particular team. The hope was that this policy would encourage openness and collaboration across the company and break down historic organizational "silos" where communication between separate teams and divisions had been limited. (Note: "public" channels are still only visible to employees of BigCorp, not the general public.)

Though the policy dictates that all channels are open to everyone by default, channels do still have "members" who are subscribed to a channel. For example, a channel for a specific small development team might (and often does) only have members from that team. This can lead to the feeling that a channel is in fact private, even though it is public and anyone in the company could potentially find it. Our raw data analysis identified numerous cases of people using language inappropriate for a corporate environment, suggesting some users might not appreciate how "open" their communication on these Slack channels is. However, participant P3 was well aware of the privacy of their messages and stated that: "I'll use [private messages] ... because there's just some things that don't need a public forum."
### 3.3 Analysis of Slack Usage Data
At the time of writing, the Slack workspace has a total of 12,084 members and 8,204 public channels. The number of public channels has nearly doubled over the past year (Figure 2). There is also a small number of private channels, limited to channels where confidential HR discussions occur. We do not analyse private channels, to respect employee privacy.


Figure 2: Number of channels over time.
In total 82 million messages have been sent in the Slack workspace, with 88% of those shared as private direct messages, 11% in public channels, and 1% in private channels. Users are able to use the "direct messaging" functionality of Slack to have private conversations between 2 to 8 people. The relatively high percentage of Slack activity occurring via direct messaging is the combined effect of people using Slack as a one-to-one instant messaging tool, and people creating "private DM groups" to essentially circumvent the mandate that all channels be public.

Despite the volume of private direct messages, the 11% of messages sent in public channels amounts to over 9 million public Slack messages which are visible and searchable by all BigCorp workers, serving as a potentially rich source of company information. With such a large number of users and channels, usage patterns are considerably varied.

#### 3.3.1 Channels
In addition to the 8,204 active public channels, there are an additional 6,891 archived public channels whose content (over 3.3 million messages) is still accessible through Slack search. Among the un-archived and technically "active" channels, the level of activity varies greatly, with the least active 20% of channels generating less than one post per month, while the 90th-percentile channel generates 6.8 messages per month. Further, the most active channels generate over 200 messages per month (Figure 3).

Channel membership counts are also widely distributed, with half of all channels having fewer than 13 members, while the 40 most popular channels have over 500 members each, including the three channels to which all employees are automatically subscribed (Figure 4).


Figure 3: The number of messages posted per week, per channel. (Each dot represents one channel.)

Figure 4: Channel membership distributions.
#### 3.3.2 Users
The 12,084 members of the Slack workspace represent nearly every worker type at BigCorp, ranging from the CEO and VPs to temporary contractors and outside collaborators (with limited access). Unsurprisingly, usage patterns vary among members. There are 9,417 weekly active members (who have read at least one public channel in the past week), and 7,973 members who have posted at least one message in the past week. On the high-usage side, an average of 77 users post more than 100 messages in a week, with the most prolific members posting more than 400 messages in a week (Figure 5, horizontal axis).


Figure 5: Plot of the number of channels subscribed to vs. the average number of messages posted each week. (Note: each dot represents a single user.)
We also see a wide range in how members subscribe to channels (Figure 5, vertical axis), with the median user subscribed to 16 channels. However, there are 168 users subscribed to over 100 channels, and 7 users subscribed to more than 200 channels. For comparison, when using the native Slack client on a 1920x1080 resolution display, only 17 channel names will fit before scrolling is required.
#### 3.3.3 Reactions
A distinguishing feature of Slack compared to more "traditional" or formal means of communication in corporate environments is the use of reactions to posts. Reactions are a quick way to respond to a message in Slack and take the form of a small emoji-like glyph or animation (Figure 6). In the corpus of public messages, a total of 422,094 reactions have been left on 220,032 unique messages. Of all reactions, the :+1: "thumbs up" emoji is the most frequent with over 166,000 uses (39% of all reactions), while the popular party parrot has been used nearly 22,000 times (5% of all reactions). Each of the 2,774 reactions used is shown in Figure 6, with their area scaled proportional to their relative usage rates.
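
Counts like these are straightforward to recompute from the message corpus. In Slack's export format, each message carries an optional reactions list pairing an emoji name with a usage count; the sketch below assumes that layout, and the sample data is invented.

```python
from collections import Counter

def tally_reactions(messages) -> Counter:
    """Total uses of each reaction emoji across a list of message dicts."""
    totals = Counter()
    for m in messages:
        for r in m.get("reactions", []):  # absent on unreacted messages
            totals[r["name"]] += r["count"]
    return totals

msgs = [
    {"text": "shipped!", "reactions": [{"name": "+1", "count": 3},
                                       {"name": "partyparrot", "count": 1}]},
    {"text": "status?", "reactions": [{"name": "+1", "count": 2}]},
]
print(tally_reactions(msgs).most_common(1))  # [('+1', 5)]
```
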
### 3.4 Emergent Behaviour
During our exploration of the Slack workspace we discovered some interesting behaviours which had emerged, in part to facilitate using Slack at this scale.
#### 3.4.1 Naming Scheme

The volunteer Slack administration team has instituted a fairly strict naming scheme. Words are separated by a dash (-) and should be as short as possible. Non-business channels are prefixed with #fun- and technology channels such as #tech-git are prefixed with #tech-. Most other channels are prefixed by their organizational unit, such as #research- or #hr-.

Figure 6: Emoji-style reactions used on messages in the Slack workspace, scaled by area proportional to usage.
#### 3.4.2 Types of Channels
Channels naturally fell into a series of groupings. We determined the groupings by using an open coding scheme during our exploratory use of the Slack workspace.
Q&A channels are very structured channels for providing help. These channels involve a specific format for sending every message and strictly follow rules for creating threads.
News channels typically contain a collection of links and other small resources, and they often receive little engagement.
Management Team channels are composed of the direct reports of a manager and contain messages such as an employee declaring they are working from home.
Project Team channels are used to collect people on a specific project, who may come from different divisions and management chains.
Issue/Event channels are limited-lifetime channels spun up to collect communication about one specific issue. Their names correspond directly to issue ids in the bug management system. These channels see rapid, bursty communication and fall into disuse after the issue is resolved (typically in a matter of days).

#### 3.4.3 Reactions

Several norms formed around the way reactions were used within channels. For instance, in Q&A channels, the organizers often use the eyes emoji to indicate that a request is being addressed (essentially assigning the task). After a request is complete, the message is marked with a checkmark (Figure 7).

#### 3.4.4 Events

One interesting and effective type of event is the ask-me-anything (AMA) session. In these events, upper management blocks off a portion of their time for employees to ask them any questions they would like. Employees engage strongly with these events and even read AMAs from divisions they are not associated with. Some of our interview participants (P2, P3) reported being very interested in hearing what upper management had to say. Other events, like product launches, also occur on Slack.
Bryan Withings 4:32 PM Hi @agskn, can you add my account to the sg-devops group please? Thanks! ...

1 reply Today at 5:06 PM

Figure 7: Asking a question in a help-specific channel with reactions.

#### 3.4.5 Email vs Slack

Although a large proportion of users engage regularly with Slack, there is still a population of people who continue to use email, and even the employees who primarily use Slack still revert to email at times. Anecdotally, we have found that Slack is often more quickly adopted by newer/younger employees. P1 uses email "when I want to send something a bit more official". P4 preferred email almost exclusively because it is "Easier to search for things". Interview participants also reported using email to communicate with people higher up in the management hierarchy. In fact, we noticed a trend in the data suggesting that upper-management level employees tend to use Slack less often than the rank-and-file. Although there is still a need for email, P3 prefers Slack because "email threads [become], you know, untenable, they would grow and they would fork." Both Slack and email have their own niche in which they are useful.

### 3.5 Primary Problems

Our case study identified many interesting types of Slack usage at BigCorp; however, it also identified many problems employees encounter in day-to-day use. Two problems were particularly common throughout the case study: finding historical information, and keeping up with all the activity happening on Slack. These issues align strongly with the findings of Zhang and Cranshaw [10], who studied smaller Slack workspaces.

#### 3.5.1 Finding Historical Information

The first common theme throughout the case study was users having difficulty finding historical information in the Slack workspace (P1, P2, P3). Even though the workspace contains a wealth of potentially useful information, it is not easily accessible. P1 specifically mentioned "I find the search tool in Slack a bit cumbersome, I usually avoid having to use it" and "I might search in Slack [to find information]..., but usually that is not very productive." It is unsurprising that finding historical information is difficult given the millions of messages and thousands of channels. Given this volume and the long timeline of messages, an interface that shows only one channel, and only a small sample of its messages at a time, inevitably makes finding historical information difficult.

#### 3.5.2 Keeping up with the Workspace

The second especially common problem was with people trying to "keep up" with everything happening in the workspace. With so many channels and messages, it can understandably be overwhelming. P1 and P2 both subscribe to a large number of channels, but then rarely check them - leading to hundreds of unread messages. We found that people liked to subscribe to many channels for topics they were interested in but could not practically read all of the messages posted there. A related problem is trying to "catch up" with your channels after a vacation - with some people opting to simply "mark all messages as read" rather than trying to read or skim what they missed.

## 4 SLACKTIVITY

Slack's focus is to provide a communication tool; however, our case study shows that users need more than just communication. Employees need to be able to access historical information and also keep up with their organization. We designed a system called Slacktivity to augment the use of Slack without hindering the ability to use Slack as a communication tool.

### 4.1 Design Goals

The design of Slacktivity was guided by a set of five design goals.

## D1: Maintain Consistent Interface Vocabulary with Slack

Our system should share the same interface vocabulary already used in Slack. By reusing concepts users are familiar with, they can transfer knowledge they already have to learn our system faster, more naturally, and without confusion. We found this principle especially important when sorting information.

## D2: Encourage Exploration

The linear, time-centric nature of Slack makes exploration difficult. Our system should support exploration by breaking out of the linear consumption of messages without breaking time continuity. Encouraging exploration also stands to improve finding historical information, as some information such as entire topics or unknown query terms are not directly searchable.

## D3: Enable and Support Emergent Behaviours

Emergent behaviour is difficult to predict, and as such it is impossible to design for it directly. However, our system should enable and support emergent behaviour by allowing for flexibility. By supporting emergent behaviour, we hope to enable events like AMAs in addition to other behaviours like the use of reactions for specific purposes.

## D4: Enable Consuming Information at Varying Detail Levels

Slack's goal as a communication tool necessitates a focus on individual messages; however, users interact with and visualize Slack at a higher level when they are seeking information. As a result, our system should support consumption at varying degrees of detail, including levels not explicitly part of Slack. Supporting varying levels of detail can also assist users in keeping up with the workspace because they do not have to look at individual messages.

## D5: Take Advantage of Organizational Data

Organizations have other sources of data, such as HR databases. Wherever possible, we aim to take advantage of that information to contextualize the information found within Slack within these other, existing, sources of data.

### 4.2 Implementation

The system is implemented as a web application using React combined with D3.js. Because the dataset is so large, we serve aggregate data from a MongoDB database using a Node.js server.

### 4.3 Galaxy View

The galaxy view is the home page of the visualization, and one of the two main screens (Figure 8). It consists of four sections: clusters, trending, stream, and search. The galaxy view is designed to support exploration of the Slack workspace, as well as keeping up with the rest of the organization. Additionally, it acts as a jumping off point for accessing the messages view, the other main screen.



Figure 8: Galaxy view of Slacktivity.

#### 4.3.1 Clusters (D4)

In the center of the screen is a series of hierarchical clusters (Figure 1). There are three concepts to keep in mind when describing the visualization: channels, prefix groups, and divisions. We will describe the clustering from the bottom up, starting at the channels. Each circle is a channel. The radius of the circle is proportional to the square root of the number of members in the channel. The colour of the circle is determined by the organization to which the employees in that channel belong. If there is no majority (< 50% of workers from a single organization), then the circle is coloured grey. The data regarding employee organization is obtained from an HR data source (D5), as Slack does not provide it. Additionally, the saturation of each circle reflects how dominant the majority is: for example, a pure dark blue circle consists almost entirely of employees from the blue division.
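The colouring rule above can be sketched as a small function. This is our reading of the description rather than the authors' code; the division names and the returned colour representation are placeholders.

```javascript
// Colour rule for a channel circle: the majority division's colour when one
// division holds at least 50% of members, otherwise grey, with saturation
// scaled by the size of that majority (0.5 -> pale, 1.0 -> pure).
function channelColour(memberDivisions) {
  const counts = {};
  for (const d of memberDivisions) counts[d] = (counts[d] || 0) + 1;
  let topDiv = null;
  let topCount = 0;
  for (const [d, c] of Object.entries(counts)) {
    if (c > topCount) { topDiv = d; topCount = c; }
  }
  const share = topCount / memberDivisions.length;
  if (share < 0.5) return { hue: 'grey', saturation: 0 };
  return { hue: topDiv, saturation: share };
}
```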
The next level of clustering, prefix groups, is formed by the prefixes of the channel names; the #spg- channels, for example, form one such group. Large prefix groups are identified by their label, but smaller ones are left unlabelled to improve readability.
There is one final level of clustering: divisions. This level clusters the prefix groups by the majority colour of the channels they contain. For example, the #tech- channels are mostly grey, so that group appears alongside other Misc channels. This last level of clustering is meant to represent the different divisions in the organization.
The visualization is generated using a circle packing algorithm from D3.js, with the hierarchy levels described here. It also supports zooming to allow users to more carefully select a channel they might be interested in, and to explore some of the smaller channels (D2).
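As a sketch of how the circle-packing input might be assembled, the helper below groups channels into the division → prefix group → channel hierarchy. Because circle area is proportional to the pack value, using the member count as the value yields a radius proportional to its square root, matching the description above. The field names are assumptions, not the authors' schema.

```javascript
// Build the three-level hierarchy (division -> prefix group -> channel) that
// feeds a circle-packing layout such as D3's d3.pack().
function buildHierarchy(channels) {
  const divisions = {};
  for (const ch of channels) {
    const prefix = ch.name.split('-')[0];
    const div = ch.division || 'Misc'; // majority-less (grey) channels fall into Misc
    divisions[div] = divisions[div] || {};
    divisions[div][prefix] = divisions[div][prefix] || [];
    // `value` = member count, so packed circle area tracks membership.
    divisions[div][prefix].push({ name: ch.name, value: ch.members });
  }
  return {
    name: 'workspace',
    children: Object.entries(divisions).map(([div, prefixes]) => ({
      name: div,
      children: Object.entries(prefixes).map(([prefix, chans]) => ({
        name: prefix,
        children: chans,
      })),
    })),
  };
}
```

The resulting object can be handed to `d3.hierarchy(...).sum(d => d.value)` before laying it out with `d3.pack()`.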
The galaxy visualization reveals interesting features about the organization. For example, the green channels consist of support teams, which explains why green channels often appear inside other divisions: these channels bridge the gap between support and product teams. Additionally, although most of the grey channels inside "Misc", such as the #tech- or #fun- channels, are expressly created for all employees, some groups of channels represent a product of BigCorp, like #<eggplant>-. These products are inter-division products that combine employees from across BigCorp, which is why they are classified as Misc.

#### 4.3.2 Thumbnails (D2, D4)

Upon hovering over a channel in the cluster visualization, a thumbnail appears displaying useful information (Figure 9). It shows the channel name, the number of members, the total number of messages, a line graph of the number of messages sent since the Slack workspace was created, and the top reactions sent in the channel. A user can use these features to easily determine whether they would like to join a channel or explore it further. The thumbnail conveys how casual the channel is (e.g., if the top reactions contain a party parrot, the channel is likely casual), how active it is, and how many people a user will need to interact with. Thumbnails encourage exploration by allowing users to interactively investigate channels they might find interesting. Furthermore, they provide a level of detail that Slack does not by showing the lifetime of a channel.
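A thumbnail's contents amount to a per-channel aggregation, sketched below; the message and channel field names (`ts`, `reactions`, `members`) are illustrative, not Slack's actual export schema.

```javascript
// Aggregate the per-channel summary shown in a thumbnail: member count,
// message count, a monthly trend line, and the top 5 reactions by usage.
function channelThumbnail(channel) {
  const reactionCounts = {};
  const trend = {}; // "YYYY-MM" -> number of messages that month
  for (const msg of channel.messages) {
    const month = msg.ts.slice(0, 7); // assumes ISO-style timestamps
    trend[month] = (trend[month] || 0) + 1;
    for (const r of msg.reactions || []) {
      reactionCounts[r] = (reactionCounts[r] || 0) + 1;
    }
  }
  const topReactions = Object.entries(reactionCounts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, 5)
    .map(([name]) => name);
  return {
    name: channel.name,
    members: channel.members.length,
    totalMessages: channel.messages.length,
    trend,
    topReactions,
  };
}
```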



Figure 9: Channel thumbnail, showing the channel name across the top, the current number of members, the total number of messages, a small trend line indicating messages sent over time, and the top 5 reactions used.

#### 4.3.3 Stream

On the right side of the screen is a stream of all messages as they come into the Slack workspace (Figure 8). This is a live stream of all messages; no messages are filtered. However, they are grouped into their respective channels, with previous messages kept for context. As a message comes into the stream it is also highlighted in the galaxy view with a sonar effect that slowly dissipates as the message grows older; some sonar effects are visible in the blue division in Figure 8. The clustering design of the visualization can thus be used to see which parts of the organization are more active than others, helping an employee keep up with the news of the company.

#### 4.3.4 Trending

On the left side of the screen, a single trending channel is chosen to be displayed (Figure 8, left). We compute an importance score for each message and take the sum of the importance of all messages sent to a channel, weighted inversely by the number of seconds since each was sent, so recent messages carry more weight. For each individual message we calculate four metrics: the length of the message in characters (capped at 80), the total number of replies the message has received, the total number of reactions, and the number of unique reactions. Each of these values is normalised by dividing by the largest observed value, so all numbers fall between 0 and 1. We then take the sum of these values.
importance $= \left|\text{replies}\right| + \min\left(\text{len}\left(\text{msg}\right), 80\right) + \left|\text{reactions}_{\text{unique}}\right| + \left|\text{reactions}\right|$
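A minimal sketch of the scoring, assuming the per-corpus normalisation constants (`norms`) have been precomputed so each term falls between 0 and 1; the trending score then weights each message's importance inversely by its age in seconds, as described above. This is our reconstruction, not the authors' implementation.

```javascript
// Normalised importance score of a single message. `norms` holds the largest
// observed value of each metric across the corpus (an assumed precomputation);
// message length is capped at 80 characters and divided by that cap.
function importance(msg, norms) {
  return (
    msg.replies / norms.replies +
    Math.min(msg.text.length, 80) / 80 +
    msg.uniqueReactions / norms.uniqueReactions +
    msg.reactions / norms.reactions
  );
}

// Trending score of a channel at time `now` (seconds since epoch): the sum of
// message importance weighted inversely by message age.
function trendingScore(channel, now, norms) {
  let score = 0;
  for (const msg of channel.messages) {
    const age = Math.max(now - msg.sentAt, 1); // avoid division by zero
    score += importance(msg, norms) / age;
  }
  return score;
}
```

The channel with the highest `trendingScore` would then be the one surfaced in the trending pane.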
By surfacing a trending channel, we give employees the ability to keep up with the company, including paying attention to AMAs as they happen.

#### 4.3.5 Search (D2)

The user can also search for a channel name by typing into the text box across the top of the screen (Figure 8). The search results contain the same visualization as the thumbnails used in the cluster visualization (Figure 9). Users can search to discover new channels they might not already be a part of as well as explore the Slack workspace as a whole.

### 4.4 Messages View

While the galaxy view gives an overview of the Slack workspace, the messages view (Figure 10), is designed to support detailed exploration and finding historical information. The messages view can be accessed by either searching for a channel from the galaxy view or clicking on a channel in the cluster visualization.

#### 4.4.1 Channel Overview

Across the top of the page is the channel overview (Figure 10A). This includes the channel name, topic, and description. Additionally, there is a deep link beside the channel name to go directly into the Slack application to that channel (D1).

#### 4.4.2 All Message Visualization

The primary view is the all message visualization (Figure 10C). It represents all of the messages visible in the current time slice. Each message is represented as a horizontal bar, where the colour represents the user who sent the message and the length of the bar is proportional to the length of the message. Only the 5 users who have sent the most messages across the history of the channel are coloured, to reduce visual clutter. Threaded replies appear as smaller boxes underneath the message they belong to. Each column of messages represents either a week or a month, depending on the period of time currently selected. The vertical position of each message is decided by simply stacking bars upwards until all of the messages have been stacked. The height of each column therefore represents how active the channel was during that time bucket. Hovering over a message displays a thumbnail showing the message and any threaded replies it has.
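The stacking layout can be sketched as follows; the bar height and maximum width constants are illustrative, and `bucketOf` stands in for the week-or-month bucketing described above.

```javascript
// Layout sketch for the all message visualization: messages are bucketed into
// time columns and stacked upwards within each column.
function layoutMessages(messages, bucketOf) {
  const BAR_HEIGHT = 4; // px per bar, illustrative
  const nextY = {};     // running stack height per bucket
  return messages.map((msg) => {
    const bucket = bucketOf(msg);
    const y = nextY[bucket] || 0;
    nextY[bucket] = y + BAR_HEIGHT;
    return {
      bucket,                                // which column the bar belongs to
      y,                                     // stacked vertical offset
      width: Math.min(msg.text.length, 200), // bar length ~ message length
      user: msg.user,                        // mapped to a colour elsewhere
    };
  });
}
```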



Figure 10: Messages view for a single channel with sections for A) channel overview, B) filters, C) all message visualization, D) timeslice, and E) message pane.

#### 4.4.3 Message Pane (D1)

Along the right side is the message pane, which is designed to anchor the user back into the Slack workspace (Figure 10E). It is sorted chronologically just like Slack, with the newest messages appearing at the bottom. Each message is deep linked back into Slack. A light grey viewfinder in the channel visualization represents the current scroll position of the message pane. Clicking on a message in the visualization scrolls to that message in the message pane.

#### 4.4.4 Timeslice (D4)

The timeslice is chosen using the timeline across the bottom of the page (Figure 10D). Each point in the line graph is a month, with the y-axis representing the total number of messages sent that month. There are two preset buttons for choosing "this month" and "this year". The timeslice also limits the number of messages displayed in the message pane.

#### 4.4.5 Filtering

All filtering is done by the filter bar just below the channel overview (Figure 10B and Figure 11). Filters include message sender, important messages, reaction, and exact text matching by message. When a message is filtered out it is removed from the messages pane and also given reduced opacity in the visualization. Clicking a user profile image filters to only show messages from that user. This filter also acts as a legend.



Figure 11: Filters for the messages view, allowing a user to explore a Slack channel by reducing the number of messages they need to read.
The slider filters messages using the same importance score used to detect trending channels. Adjusting the slider controls the percentile of messages displayed, allowing users to quickly scan and review a channel by condensing it down to only the most relevant messages.
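A sketch of the percentile filter, assuming a nearest-rank threshold (the exact interpolation Slacktivity uses is not specified); `scoreOf` stands in for the importance function.

```javascript
// Keep only messages whose importance score is at or above the chosen
// percentile, e.g. percentile 95 keeps roughly the top 5% of messages.
function filterByPercentile(messages, percentile, scoreOf) {
  const scores = messages.map(scoreOf).sort((a, b) => a - b);
  // Nearest-rank index into the sorted scores.
  const idx = Math.min(
    scores.length - 1,
    Math.floor((percentile / 100) * scores.length)
  );
  const threshold = scores[idx];
  return messages.filter((m) => scoreOf(m) >= threshold);
}
```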
Users can also filter by the 5 most common reactions used in the channel. Clicking one of the reactions filters to display only messages with that reaction. Supporting reaction filters allows for emergent behaviour by giving flexible filters (D3). Filtering by exact text matching is done by typing into the search bar on the far right side of the filters.



Figure 12: The messages view displaying a person rather than a channel.

#### 4.4.6 User Messages View

Although we have presented the messages view as built for channels, it is also possible to invert the relationship so that the view displays a user's messages rather than a channel's messages (Figure 12). Oftentimes when searching for information, a person is more important than content [8]. By allowing exploration of the messages sent by a user, we enable exploration of that user (D2, D4).
The user messages view can be entered by clicking on a username or profile picture from any message in Slacktivity, for example the message stream from the galaxy view or the messages pane in the messages view.
This view differs from the messages view of a channel only in that the colours in the all message visualization reflect the channel a message was sent in rather than the user who sent it. Additionally, the filters support filtering by channel rather than the message sender.

### 4.5 Example Tasks

Slacktivity can be used in several ways; we present three sample walkthroughs to illustrate some of them.

#### 4.5.1 Task 1: A New Employee Joins AI Research

Often people transfer teams or join the company as new employees. New and transferring employees have many questions they need to answer, such as understanding how their new team functions, and discovering how their team interacts with the rest of the organization. The galaxy view can give the new employee a broad overview of the scale of the company they have joined, and how big of a niche their team is in. They can also easily compare how their coworkers use Slack by watching for activity from the prefixes their team channel belongs to and contrasting that with other divisions and prefixes.

#### 4.5.2 Task 2: Exploring a Topic Across the Organization

Another employee might be interested in exploring work BigCorp has done with a particular machine learning technique. The employee might know there is a research division that researches artificial intelligence, so they begin at the galaxy view by searching for channels that begin with #research. Upon seeing an active channel, they can navigate to the messages view for that channel. The channel has thousands of messages and they do not have time to scroll through them all, so they move the importance slider to show only the top 5% of messages in the channel. From here they scroll the messages pane and read the full text of the remaining messages. Finally, they notice that about half of the important messages were sent by a single person. From this exploration the employee learned about a new channel they could join, found a knowledgeable person in the organization to ask further questions, and learned specific details from reading important chat messages.

#### 4.5.3 Task 3: Keeping Up with the Organization

Rather than actively exploring Slacktivity, a user can keep a window or tab open in the background to the galaxy view. The user can then go back to the tab occasionally throughout their day and read only the trending channel. Through this, the user can discover if there is an ongoing AMA, or important discussion they can either follow or take part in. They might also see new channels they are interested in. This changes the paradigm of Slack from push communication to pull, allowing a user to keep up with the company by only spending the amount of time they would like.

## 5 Discussion

In this paper we described a case study of Slack use at BigCorp. Our case study identified several interesting behaviours and pain points with Slack use at scale. To address this, we designed Slacktivity to augment Slack at BigCorp. However, there are still interesting implications and unanswered questions regarding both our case study and Slacktivity, such as generalisability and evaluation.

### 5.1 Increased Slack Use

We hope that Slacktivity helps employees get value out of using Slack. However, this might have the indirect effect of increasing the amount of time employees spend on Slack. Although increased group chat use might yield many benefits for communicating in the workplace, encouraging its use could have drawbacks. In a study on instant messaging use, Cameron and Webster [29] reported that the use of instant messaging results in more interruptions throughout the day. It is unclear whether maximising the effectiveness of Slack would produce a net benefit when contrasted with increased use.

### 5.2 Slacktivity's Effect on Privacy

Slacktivity increases the number of ways an employee can view and discover a message, essentially making all messages a little bit less private. For example, the stream in the galaxy view has the added effect of showing employees how public their messages really are. Our case study revealed that some employees might not understand how private their messages are. Depending on the perspective, helping employees understand how public their messages are could be seen as either a positive or a negative. It is negative from an organizational perspective because users might be discouraged from using public channels. BigCorp explicitly wants employees to participate in public channels to encourage an environment of collaboration and openness. However, from an employee's perspective having a realistic view of how private they are is important to maintain trust in the organization.

### 5.3 Scaling Up

BigCorp is a large organization, having more than 10,000 employees. However, there exist even larger organizations with over 100,000 employees. It is unclear whether our system scales to such organizations, or whether an organization that large would use Slack in a similar fashion. For example, if the ratio between channels (8,204) and users (12,084) remains constant, then one can expect a workspace with 100,000 employees to have roughly 67,891 channels. It seems likely that so many channels would require another form of organization, perhaps an additional level of hierarchy on top of divisions and prefixes.
We suspect it is unlikely that the median channel gets larger as an organization grows. Project and team channels are probably the same size whether an organization has 1,000 employees or 100,000 employees. However, some channels are necessarily company wide, and these already suffer from the huge population their messages address. Sometimes mistakes happen and somebody notifies the entire channel by erroneously using @channel; this problem might be exacerbated when even more employees are in a channel.

### 5.4 Scaling Down

On the opposite end of the spectrum, it is interesting to consider whether our case study and Slacktivity also generalise to small businesses with 100 employees or medium-sized businesses with 1,000 employees. The messages view probably retains the same value because it was designed to work with small channels as well as massive ones. Most of our findings regarding how Slack is used probably scale down to medium-sized organizations, as managing that many channels will still require strict naming schemes and other forms of organization.

### 5.5 Generalisability

We have presented a case study of the use of Slack in a large organization. The question stands: do our results generalise to other organizations? This is a difficult question to answer because most organizations are unwilling to publicly disclose the kind of detailed information described in our case study. However, we have some confidence that aspects of it do generalise. 65 of the Fortune 100 companies use Slack, indicating that other companies do use Slack at a large scale, and other large organizations like IBM make channels "public by default" [30]. Beyond these broader patterns, specific practices such as issue-specific channels are used at companies like Intuit [31], and other companies host events using Slack [32], [33]. It stands to reason that many of the other problems and descriptions of Slack use at scale described in this paper generalise to other large organizations.
A limitation of our work is that we do not evaluate Slacktivity with a user study. However, our example walkthroughs make an argument as to how Slacktivity might be used to solve some of the problems in our case study. Furthermore, this paper introduced novel methods of visualising Slack at scale and a detailed case study of how Slack is used at scale today. Future iterations of this work will be deployed at BigCorp, allowing us to study the long term impacts a deployment of Slacktivity might have.

## 6 CONCLUSION

In this paper we presented a case study of how Slack is used at BigCorp, a large organization with over 10,000 employees. Our case study highlighted interesting behaviour of Slack usage at a large corporation, and problems encountered by employees using Slack. Based on the case study we introduced a novel visualization technique, Slacktivity, designed to augment Slack in the workplace and enable its use at scale.

## REFERENCES

[1] "Where work happens | Slack." https://slack.com/intl/en-ca/ (accessed Sep. 19, 2019).
[2] "Microsoft Teams - Group Chat software." https://products.office.com/en-us/microsoft-teams/group-chat-software (accessed Sep. 19, 2019).
[3] H. McCracken, "How Slack's search finally got good," Fast Company, Jul. 10, 2018. https://www.fastcompany.com/90180551/how-slacks-search-finally-got-good (accessed Sep. 20, 2019).
[4] J. D. Herbsleb, D. L. Atkins, D. G. Boyer, M. Handel, and T. A. Finholt, "Introducing Instant Messaging and Chat in the Workplace," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2002, pp. 171-178, doi: 10.1145/503376.503408.
[5] A. Quan-Haase, J. Cothrel, and B. Wellman, "Instant Messaging for Collaboration: a Case Study of a High-Tech Firm," J Comput Mediat Commun, vol. 10, no. 4, Jul. 2005, doi: 10.1111/j.1083-6101.2005.tb00276.x.
[6] S. Whittaker, V. Bellotti, and J. Gwizdka, "Email in Personal Information Management," Commun. ACM, vol. 49, no. 1, pp. 68-73, Jan. 2006, doi: 10.1145/1107458.1107494.
[7] M. M. Skeels and J. Grudin, "When social networks cross boundaries: a case study of workplace use of facebook and linkedin," in Proceedings of the ACM 2009 international conference on Supporting group work, Sanibel Island, Florida, USA, May 2009, pp. 95-104, doi: 10.1145/1531674.1531689.
[8] M. J. Brzozowski, "WaterCooler: exploring an organization through enterprise social media," in Proceedings of the ACM 2009 international conference on Supporting group work, Sanibel Island, Florida, USA, May 2009, pp. 219-228, doi: 10.1145/1531674.1531706.
[9] "With 10+ million daily active users, Slack is where more work happens every day, all over the world," Several People Are Typing, Jan. 29, 2019. https://slackhq.com/slack-has-10-million-daily-active-users (accessed Sep. 19, 2019).
[10] A. X. Zhang and J. Cranshaw, "Making Sense of Group Chat Through Collaborative Tagging and Summarization," Proc. ACM Hum.-Comput. Interact., vol. 2, no. CSCW, p. 196:1-196:27, Nov. 2018, doi: 10.1145/3274465.
[11] A. D. Marwick, "Knowledge management technology," IBM Systems Journal, vol. 40, no. 4, pp. 814-830, 2001, doi: 10.1147/sj.404.0814.
[12] C.-Y. Wang, H.-Y. Yang, and S. T. Chou, "Using peer-to-peer technology for knowledge sharing in communities of practices," Decision Support Systems, vol. 45, no. 3, pp. 528-540, Jun. 2008, doi: 10.1016/j.dss.2007.06.012.
|
| 388 |
+
|
| 389 |
+
[13] P. Tyndale, "A taxonomy of knowledge management software tools: origins and applications," Evaluation and Program Planning, vol. 25, no. 2, pp. 183-190, May 2002, doi: 10.1016/S0149-7189(02)00012-5.
|
| 390 |
+
|
| 391 |
+
[14] M. A. Smith and A. T. Fiore, "Visualization Components for Persistent Conversations," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2001, pp. 136-143, doi: 10.1145/365024.365073.
|
| 392 |
+
|
| 393 |
+
[15] T. C. Turner, M. A. Smith, D. Fisher, and H. T. Welser, "Picturing Usenet: Mapping Computer-Mediated Collective Action," $J$ Comput Mediat Commun, vol. 10, no. 4, Jul. 2005, doi: 10.1111/j.1083-6101.2005.tb00270.x.
|
| 394 |
+
|
| 395 |
+
[16] A. X. Zhang, L. Verou, and D. Karger, "Wikum: Bridging Discussion Forums and Wikis Using Recursive Summarization," in Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, Oregon, USA, Feb. 2017, pp. 2082-2096, doi: 10.1145/2998181.2998235.
|
| 396 |
+
|
| 397 |
+
[17] B. Kerr, "Thread Arcs: an email thread visualization," in IEEE Symposium on Information Visualization 2003 (IEEE Cat. No.03TH8714), Oct. 2003, pp. 211-218, doi: 10.1109/INFVIS.2003.1249028.
|
| 398 |
+
|
| 399 |
+
[18] S. Havre, E. Hetzler, P. Whitney, and L. Nowell, "ThemeRiver: visualizing thematic changes in large document collections," IEEE Transactions on Visualization and Computer Graphics, vol. 8, no. 1, pp. 9-20, Jan. 2002, doi: 10.1109/2945.981848.
|
| 400 |
+
|
| 401 |
+
[19] S. Narayan and C. Cheshire, "Not Too Long to Read: The tldr Interface for Exploring and Navigating Large-Scale Discussion Spaces," in 2010 43rd Hawaii International Conference on System Sciences, Jan. 2010, pp. 1-10, doi: 10.1109/HICSS.2010.288.
|
| 402 |
+
|
| 403 |
+
[20] S. Fu, J. Zhao, W. Cui, and H. Qu, "Visual Analysis of MOOC Forums with iForum," IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 1, pp. 201-210, Jan. 2017, doi: 10.1109/TVCG.2016.2598444.
|
| 404 |
+
|
| 405 |
+
[21] P. S. Newman, "Exploring Discussion Lists: Steps and Directions," in Proceedings of the 2Nd ACM/IEEE-CS Joint Conference on Digital Libraries, New York, NY, USA, 2002, pp. 126-134, doi: 10.1145/544220.544245.
|
| 406 |
+
|
| 407 |
+
[22] G. D. Venolia and C. Neustaedter, "Understanding Sequence and Reply Relationships Within Email Conversations: A Mixed-model Visualization," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2003, pp. 361-368, doi: 10.1145/642611.642674.
|
| 408 |
+
|
| 409 |
+
[23] M. Wattenberg and D. Millen, "Conversation Thumbnails for Large-scale Discussions," in CHI '03 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2003, pp. 742-743, doi: 10.1145/765891.765963.
|
| 410 |
+
|
| 411 |
+
[24] T. Bergstrom and K. Karahalios, "Conversation Clusters: Grouping Conversation Topics Through Human-computer Dialog," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 2349-2352, doi: 10.1145/1518701.1519060.
|
| 412 |
+
|
| 413 |
+
[25] J. Donath and F. B. Viégas, "The Chat Circles Series: Explorations in Designing Abstract Graphical Communication Interfaces," in Proceedings of the 4th Conference on Designing Interactive
|
| 414 |
+
|
| 415 |
+
Systems: Processes, Practices, Methods, and Techniques, New York, NY, USA, 2002, pp. 359-369, doi: 10.1145/778712.778764.
|
| 416 |
+
|
| 417 |
+
[26] F. B. Viégas and J. S. Donath, "Chat Circles," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1999, pp. 9-16, doi: 10.1145/302979.302981.
|
| 418 |
+
|
| 419 |
+
[27] G. Leshed et al., "Visualizing Real-time Language-based Feedback on Teamwork Behavior in Computer-mediated Groups," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 537-546, doi: 10.1145/1518701.1518784.
|
| 420 |
+
|
| 421 |
+
[28] S. Fu, J. Zhao, H. F. Cheng, H. Zhu, and J. Marlow, "T-Cal: Understanding Team Conversational Data with Calendar-based Visualization," in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2018, p. 500:1-500:13, doi: 10.1145/3173574.3174074.
|
| 422 |
+
|
| 423 |
+
[29] A. F. Cameron and J. Webster, "Unintended consequences of emerging communication technologies: Instant Messaging in the workplace," Computers in Human Behavior, vol. 21, no. 1, pp. 85-103, Jan. 2005, doi: 10.1016/j.chb.2003.12.001.
|
| 424 |
+
|
| 425 |
+
[30] "How the engineering team at IBM uses Slack throughout the development lifecycle," Several People Are Typing, May 22, 2017. https://slackhq.com/how-the-engineering-team-at-ibm-uses-slack-throughout-the-development-lifecycle (accessed Apr. 25, 2020).
|
| 426 |
+
|
| 427 |
+
[31] Slack, "Intuit | Customer Stories," Slack. https://slack.com/intl/en-ca/customer-stories/intuit (accessed Apr. 25, 2020).
|
| 428 |
+
|
| 429 |
+
[32] "From desert sun to virtual space: Taking our annual global sales event fully digital," Several People Are Typing, Mar. 16, 2020. https://slackhq.com/staging-digital-events-at-slack (accessed Apr. 25, 2020).
|
| 430 |
+
|
| 431 |
+
[33] "Twitter goes remote and hosts global all-hands in Slack," Several People Are Typing, Mar. 18, 2020. https://slackhq.com/twitter-goes-remote-with-slack (accessed Apr. 25, 2020).
|
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/2RMSIdhlfH9/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,349 @@
§ SLACKTIVITY: SCALING SLACK FOR LARGE ORGANIZATIONS

Figure 1: Slacktivity's galaxy view. Each circle in the visualization is a Slack channel. (Note: division names have been changed to remove identifying information, and select channel names have been replaced with <fruit|vegetable>.)

§ ABSTRACT

Group chat programs, such as Slack, are a promising and increasingly popular tool for improving communication among collections of people. However, it is unclear how the current design of group chat applications scales to support large and distributed organizations. We present a case study of a company-wide Slack installation in a large organization (>10,000 employees), incorporating data from semi-structured interviews, exploratory use, and analysis of the data from the Slack workspace itself. Our case study reveals emergent behaviour, issues with exploring the content of such a large Slack workspace, and the inability to keep up with news from across the organization. To address these issues, we designed Slacktivity, a novel visualization system to augment the use of Slack in large organizations, and demonstrate how Slacktivity can be used to overcome many of the challenges found when using Slack at scale.

Keywords: Group chat, visualization.
§ 1 INTRODUCTION

As organizations become larger and more distributed, communication becomes increasingly difficult. Even when workers are co-located, technologies like email and instant messaging are frequently used as an alternative to direct face-to-face communication. Email is often thought of as asynchronous and useful for long-term searching, while instant messaging is useful for synchronous communication and quicker conversations. More recently, group chat systems (such as Slack [1] and Microsoft Teams [2]) have become popular, combining benefits of both email and instant messaging while offering the potential for increased collaboration and coordination.

When used over a period of time, a group chat workspace has the potential to be a useful "knowledge base" for a company, storing a history of communication about a given topic. However, the design of current group chat systems is optimized for near real-time communication, making it difficult to derive insights from this wealth of historical information [3]. This problem is exacerbated for large organizations using Slack, where there may be thousands of channels and millions of messages. With so many channels and messages, it is not feasible to keep up with all activity across all channels to get a sense of what is going on throughout the organization, and the task of figuring out which channels to join or post questions to becomes more difficult.

In this paper we present a case study of the internal usage of Slack at BigCorp,^1 a ten-thousand-person software company which has been using a unified Slack workspace for the past 3+ years, with over 15,000 channels created and 65 million messages sent over that time. Our case study combines exploratory use of the Slack workspace, formal and informal interviews, and data analysis of the use of Slack by BigCorp. Using this data, we identified strategies employees use to cope with Slack at scale, as well as the pain points of using Slack, such as the inability to find historical information and the need to keep up with the organization better.

Using the findings from our case study we present Slacktivity, a tool to address the limitations of Slack when used within a large organization. Slacktivity gives an overview of the channels across the entire organization with a cluster view, and also allows for detailed exploration of the entire history of a channel.

This paper makes two main contributions: a case study to better understand how group chat is used in large organizations, and a novel interactive visualization system to augment the use of Slack in a workplace.

^1 Name anonymised for submission.
§ 2 BACKGROUND AND RELATED WORK

§ 2.1 INTRODUCTION TO GROUP CHAT

Group chat combines the functionality of instant messaging and Internet Relay Chat (IRC) with rich communication. The most basic form of communication in group chat is a message. Messages can contain text, images, or other attachments, and can reply to one another, forming a thread. However, threads in group chat are typically limited to one level of replies. In addition to replying to a message, a user can also react. Reactions are small emoji-like glyphs that are counted just below the message. When responding with a reaction, no notification is sent, making this a non-intrusive form of communication. Messages can be sent to three different locations: direct messages, private channels, and public channels. Direct messages can be sent to groups of up to 8 people. Channels, on the other hand, can hold any number of people and are identified by a channel name (e.g., #general). Channels can be either public, meaning anybody can join, or private, requiring an invitation. Finally, all channels and direct messages are part of a workspace, the highest level of the hierarchy in group chat.
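The hierarchy described above (workspaces containing channels and direct messages, messages forming single-level threads, and reactions attached to messages) can be sketched as a minimal data model. This is our own illustrative sketch: the class and field names are not Slack's actual API schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Reaction:
    emoji: str            # e.g. ":+1:"
    user_ids: List[str]   # who reacted; reacting sends no notification

@dataclass
class Message:
    author_id: str
    text: str
    reactions: List[Reaction] = field(default_factory=list)
    # Threads are limited to one level: replies cannot themselves have replies.
    replies: List["Message"] = field(default_factory=list)

@dataclass
class Channel:
    name: str             # e.g. "#general"
    is_private: bool      # private channels require an invitation
    member_ids: List[str]
    messages: List[Message] = field(default_factory=list)

@dataclass
class Workspace:
    # The workspace is the highest level of the hierarchy.
    channels: List[Channel]
```

Direct messages can be modeled the same way as channels, with membership capped at 8 users.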
Our work explores the use of group chat in the workplace, and the use of visualization to improve its usage. This work looks at, and refers to, Slack as the group chat system, but the concepts and ideas apply equally to other systems (such as Microsoft Teams [2]).

§ 2.2 COMMUNICATION IN THE WORKPLACE

Technology plays a key role in supporting communication in the workplace, but its use is varied and complicated. When face-to-face interaction is needed but not possible, video conferencing software like Skype or Google Hangouts is used. However, most communication does not require such an expressive communication style and can instead be textual. Recently, instant messaging has been a popular communication tool in the workplace [4], [5]. A case study [5] of a large tech company from 2005 showed that 38% of communication took place over instant messaging, nearly equal to the 39% over email. Only 23% of communication was face-to-face or by phone. Email is also used extensively, and many people view their email as more than a communication tool: a "personal information store" [6].

Another aspect of communication in the workplace is social media. Traditional non-workplace social media platforms like Facebook and Twitter have been demonstrated to be useful to organizations through the creation of weak ties [7]. Furthermore, some social networks have been designed specifically for use in the workplace. WaterCooler [8] was designed for use at HP to aggregate various content from across the organization. Another, more commercialised approach to social networking is the Yammer tool, which aims to bring a social network more comparable to Facebook into the workplace.

In recent years, group chat has been used increasingly in the workplace, beginning with the inception of Slack in 2013, which had over 10 million daily active users in 2019 [9]. Despite the rapid adoption of group chat in the workplace, Zhang and Cranshaw [10] identify several issues associated with group chat. They report that employees struggle to find old information; chat history is overwhelming to newcomers; and employees fail to keep up with multiple channels. Unlike email, it is unclear whether group chat is also used as a "personal information store".

Communication tools have the opportunity to be a critical part of an organization's knowledge management strategy, as critical discussions often occur over email, instant messaging, or group chat. Research has shown how an organization can use instant messaging [11], [12] and email [13] as part of its knowledge management strategy, but it is unknown how group chat can fit in.
§ 2.3 VISUALIZATION OF CONVERSATION

Conversation exists in many mediums, such as forums, emails, and instant messaging, and the academic literature has explored many different ways to visualize it, either while the conversation is taking place or after it has occurred. Pioneering work began with the visualisation of forums, with a special focus on Usenet. Smith and Fiore [14] visualized Usenet threads by displaying the structure of each thread as a tree augmented with information about the users involved in the thread. Turner et al. [15] visualize Usenet hierarchically to make recommendations on how to cultivate and manage Usenet groups. Wikum [16] uses recursive summarisation and visualisation to overcome the scale of online discussions. Other techniques have explored visualizing conversations by taking advantage of threads, including Thread Arcs [17], ThemeRiver [18], tldr [19], iForum [20], and Newman's work [21].

Venolia and Neustaedter [22] visualized email by creating conversations from the sequence and reply relationships, building on the idea of threading. Conversation thumbnails [23] visualize each email in a conversation as a rectangle, displayed in the order received and representing the complexity of the conversation.

Bergstrom and Karahalios [24] designed Conversation Clusters as a method of archiving instant message conversations that makes it easy to retrieve a desired conversation. Conversation Clusters visualize groups of salient words using colour. However, most work in instant messaging has looked at augmenting communication by changing the interface you interact with, such as giving people movable circles for their avatars [25], [26] or using a visualization to foster positive behavioural changes [27]. Our work differs from prior work visualising conversation in that we augment each mark with more information from the chat and also break the sequential linear relationship with time.

§ 2.4 UNDERSTANDING GROUP CHAT

Efforts to understand group chat better are relatively rare. Our work is most related to T-Cal [28], which visualizes Slack conversations using a calendar-based visualization and allows for in-depth exploration using its thread-pulse design. However, T-Cal was designed to address the needs of Slack workspaces for massively open online courses (MOOCs), which are only used for a set amount of time. Another approach was taken by Zhang and Cranshaw [10], who aimed to improve sensemaking of group chat by creating a Slack bot, Tilda, that assisted in collaborative tagging of conversations. Both our case study of Slack and Slacktivity build on and draw inspiration from T-Cal and Tilda; however, our focus is on discovering and addressing the particular problems faced when deploying Slack at a large scale.
§ 3 CASE STUDY: SLACK AT BIGCORP

To get a better understanding of how Slack is used "at scale", we studied Slack usage at BigCorp, a multinational design software company with over 10,000 employees distributed across dozens of locations around the world. BigCorp has officially adopted and encouraged the use of Slack as a platform for group chat. Further, BigCorp has consolidated Slack activity into a single unified Slack workspace, rather than allowing teams to maintain their own individual Slack workspaces.

§ 3.1 METHODOLOGY

Our case study incorporates data from several sources: aggregate analysis from an export of the entire (non-private) history of the Slack workspace, extensive exploratory analysis, an interview with the Director in charge of Slack at BigCorp, informal discussions with dozens of employees about their Slack usage, and formal interviews with 5 employees (3 heavy users of Slack, 1 occasional user, and 1 infrequent user, referred to as P1-P5).

Our aggregate data analysis uses all Slack messages sent on all internally-public channels (that is, channels that all employees of BigCorp can see). This includes all of the data about threads and reactions. Every public channel is included, along with its metadata, such as description and creation time. We analyse this data using only simple summary statistics and visualisation. We also combined the Slack user profiles with data from HR sources for properties like job title and organizational division.
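As a concrete sketch of this kind of aggregate analysis, the snippet below counts public-channel messages per channel, assuming the standard Slack export layout (one directory per public channel, containing one JSON file of messages per day). The function name and this layout assumption are ours, not a description of the authors' actual pipeline.

```python
import json
from collections import Counter
from pathlib import Path

def count_messages(export_dir: str) -> Counter:
    """Count messages per public channel in a Slack workspace export.

    Assumes the standard export layout: one directory per channel,
    each containing one JSON file (a list of message objects) per day.
    """
    counts = Counter()
    for channel_dir in Path(export_dir).iterdir():
        if not channel_dir.is_dir():
            continue  # skip top-level files like users.json / channels.json
        for day_file in channel_dir.glob("*.json"):
            with open(day_file, encoding="utf-8") as f:
                counts[channel_dir.name] += len(json.load(f))
    return counts
```

Summary statistics such as messages per channel per month then follow directly from these counts.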
We employed exploratory use of the Slack workspace for some of our results. This consisted primarily of the first author using the workspace and exploring channels while taking notes on their contents. For some results, like the types of channels, an open coding scheme was used. Results stemming from exploratory use are not intended to be generalizable or complete; instead, they demonstrate some of the ways Slack can be used at scale.

The formal semi-structured interviews were each 1 hour long. The interviews sought to answer three key questions: how do employees use Slack during the workday; how do employees find information on Slack; and how do employees keep up to date with what is happening at BigCorp (not necessarily using Slack)? We analysed these interviews by transcribing them and identifying interesting and recurring themes.

§ 3.1.1 DATA ANONYMIZATION

Given the private nature of communication tools, we took several measures to preserve employee privacy. We only analysed "public" data that all employees at BigCorp have access to. We also did not analyse archived channels, because people may "archive" a channel in an attempt to "delete" it. Additionally, we have anonymized all employees in the paper and accompanying material by blurring profile images and replacing all names with generated pseudonyms. We carefully examined each message before allowing it to appear in the paper or accompanying video. Any time a channel name would be easily identifiable outside the company, we replace it with a vegetable or fruit name and wrap it in angle brackets (e.g., <eggplant>).
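A minimal sketch of the channel-name replacement step is shown below. The deterministic hashing scheme and the pseudonym pool are our own illustration of how such replacement can be made consistent across a corpus; the study's actual anonymization involved manual review of each message.

```python
import hashlib

# Illustrative pseudonym pool; the study substituted fruit and vegetable
# names wrapped in angle brackets, e.g. <eggplant>.
POOL = ["eggplant", "kumquat", "parsnip", "rutabaga", "dragonfruit"]

def pseudonym(real_name: str) -> str:
    """Deterministically map a name to a pseudonym, so the same channel
    always receives the same replacement across the whole corpus."""
    digest = int(hashlib.sha256(real_name.encode("utf-8")).hexdigest(), 16)
    return "<" + POOL[digest % len(POOL)] + ">"

def scrub_channel_names(text: str, channel_names: list) -> str:
    """Replace identifiable channel names mentioned in a message."""
    for name in channel_names:
        text = text.replace("#" + name, pseudonym(name))
    return text
```

The same hashing idea extends to user names, keeping pseudonyms stable across every figure and quote.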
§ 3.2 ORIGINS OF SLACK AT BIGCORP

BigCorp has been running a company-wide Slack workspace since May 2016 (3.5 years at the time of data analysis). Before that date, there were over 50 fragmented Slack workspaces maintained by individual teams. Responding to the grass-roots demand for Slack, BigCorp decided to officially support Slack, consolidated all activity into a single shared Slack workspace, and encouraged its use as an "approved" communication mechanism.

§ 3.2.1 PUBLIC BY DEFAULT

The vice president at BigCorp who "sponsored" (paid for) the initial consolidation of Slack activity under a single Slack workspace agreed to do so under the condition that it would be run with a "public by default" policy, that is, that all channels would be set to "public", so they could be viewed and joined by everyone in the company, not just those in a particular team. The hope was that this policy would encourage openness and collaboration across the company and break down historic organizational "silos" where communication between separate teams and divisions had been limited. (Note: "public" channels are still only visible to employees of BigCorp, not the general public.)

Though the policy dictates that all channels are open to everyone by default, channels do still have "members" who are subscribed to the channel. For example, a channel for a specific small development team might (and often does) only have members from that team. This can lead to the feeling that a channel is in fact private, even though it is public and anyone in the company could potentially find it. Our raw data analysis identified numerous cases of people using language inappropriate for a corporate environment, suggesting some users might not appreciate how "open" their communication on these Slack channels is. However, participant P3 was well aware of the privacy of their messages and stated: "I'll use [private messages] ... because there's just some things that don't need a public forum."
§ 3.3 ANALYSIS OF SLACK USAGE DATA

At the time of writing, the Slack workspace has a total of 12,084 members and 8,204 public channels. The number of public channels has nearly doubled over the past year (Figure 2). There are also a small number of private channels, limited to channels where confidential HR discussions occur. We do not analyse private channels, to respect employee privacy.

Figure 2: Number of channels over time.

In total, 82 million messages have been sent in the Slack workspace, with 88% of those shared as private direct messages, 11% in public channels, and 1% in private channels. Users are able to use the "direct messaging" functionality of Slack to have private conversations between 2 and 8 people. The relatively high percentage of Slack activity occurring via direct messaging is the combined effect of people using Slack as a one-to-one instant messaging tool and people creating "private DM groups" to essentially circumvent the mandate that all channels be public.

Despite the volume of private direct messages, the 11% of messages sent in public channels amounts to over 9 million public Slack messages that are visible and searchable by all BigCorp workers, serving as a potentially rich source of company information. With such a large number of users and channels, usage patterns are considerably varied.

§ 3.3.1 CHANNELS

In addition to the 8,204 active public channels, there are an additional 6,891 archived public channels whose content (over 3.3 million messages) is still accessible through Slack search. The level of activity of the un-archived, technically "active" channels varies greatly, with the least active 20% of channels generating less than one post per month, while the 90th-percentile channel generates 6.8 messages per month. Further, the most active channels generate over 200 messages per month (Figure 3).
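The quantiles quoted above can be computed with a simple nearest-rank percentile over per-channel monthly counts. This is an illustrative sketch with synthetic counts, not BigCorp data.

```python
def activity_quantiles(monthly_counts):
    """Nearest-rank percentiles of per-channel monthly message counts.

    The input holds one number per channel; the output mirrors the kind of
    statistics quoted in the text (20th, 50th, 90th percentiles).
    """
    ordered = sorted(monthly_counts)
    n = len(ordered)

    def pct(p):
        # nearest-rank: index floor(p/100 * n), clamped to the last item
        return ordered[min(n - 1, int(p / 100 * n))]

    return {"p20": pct(20), "p50": pct(50), "p90": pct(90)}

# Synthetic example (NOT BigCorp data): ten channels' monthly counts.
example = [0, 0, 1, 2, 3, 5, 7, 10, 50, 220]
quantiles = activity_quantiles(example)
```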
Channel membership counts are also widely distributed, with half of all channels having fewer than 13 members, while the 40 most popular channels have over 500 members each, including the three channels to which all employees are automatically subscribed (Figure 4).

Figure 3: The number of messages posted per week, per channel. (Each dot represents one channel.)

Figure 4: Channel membership distributions.
§ 3.3.2 USERS

The 12,084 members of the Slack workspace represent nearly every worker type at BigCorp, ranging from the CEO and VPs to temporary contractors and outside collaborators (with limited access). Unsurprisingly, usage patterns vary among members. There are 9,417 weekly active members (who have read at least one public channel in the past week), and 7,973 members who have posted at least one message in the past week. On the high-usage side, an average of 77 users post more than 100 messages in a week, with the most prolific members posting more than 400 messages in a week (Figure 5, horizontal axis).

Figure 5: Plot of the number of channels subscribed to vs. the average number of messages posted each week. (Note: each dot represents a single user.)

We also see a wide range in how members subscribe to channels (Figure 5, vertical axis), with the median user subscribed to 16 channels. However, there are 168 users subscribed to over 100 channels, and 7 users subscribed to more than 200 channels. For comparison, when using the native Slack client on a 1920x1080 resolution display, only 17 channel names fit before scrolling is required.
§ 3.3.3 REACTIONS

A distinguishing feature of Slack compared to more "traditional" or formal means of communication in corporate environments is the use of reactions to posts. Reactions are a quick way to respond to a message in Slack and take the form of a small emoji-like glyph or animation (Figure 6). In the corpus of public messages, a total of 422,094 reactions have been left on 220,032 unique messages. Of all reactions, the :+1: "thumbs up" emoji is the most frequent, with over 166,000 uses (39% of all reactions), while the popular "party parrot" has been used nearly 22,000 times (5% of all reactions). Each of the 2,774 distinct reactions used is shown in Figure 6, with area scaled proportional to relative usage rate.
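The reaction shares quoted above follow directly from the counts:

```python
# Reaction shares reported in Section 3.3.3: of 422,094 total reactions,
# roughly 166,000 are :+1: and roughly 22,000 are party parrot.
total = 422_094
share_thumbs = 166_000 / total * 100   # thumbs-up share of all reactions
share_parrot = 22_000 / total * 100    # party-parrot share of all reactions
```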
§ 3.4 EMERGENT BEHAVIOUR

During our exploration of the Slack workspace we discovered some interesting behaviours which had emerged, in part to facilitate using Slack at this scale.

§ 3.4.1 NAMING SCHEME

The volunteer Slack administration team has instituted a fairly strict naming scheme. Words are separated by a dash (-) and names should be as short as possible. Non-business channels are prefixed with #fun-, and technology channels, such as #tech-git, are prefixed with #tech-. Most other channels are prefixed by their organizational unit, such as #research- or #hr-.
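This naming convention can be expressed as a simple validation pattern. The prefix list below is illustrative, based only on the examples mentioned in the text.

```python
import re

# Illustrative prefixes drawn from the examples in the text;
# the real scheme likely covers more organizational units.
PREFIXES = ("fun", "tech", "research", "hr")

# Lowercase alphanumeric words separated by dashes, with an optional
# recognised prefix (e.g. "tech-git", "fun-boardgames").
CHANNEL_RE = re.compile(
    r"^(?:(%s)-)?[a-z0-9]+(?:-[a-z0-9]+)*$" % "|".join(PREFIXES)
)

def is_valid(name: str) -> bool:
    """Check a channel name (with or without a leading '#')."""
    return bool(CHANNEL_RE.fullmatch(name.lstrip("#")))
```

Such a check could run in a Slack bot that nudges channel creators toward the convention.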
Figure 6: Emoji-style reactions used on messages in the Slack workspace, scaled by area proportional to usage.

§ 3.4.2 TYPES OF CHANNELS

Channels naturally fell into a series of groupings, which we determined using an open coding scheme during our exploratory use of the Slack workspace.

Q&A channels are very structured channels for providing help. These channels involve a specific format for sending every message and strictly follow rules for creating threads.

News channels typically contain a collection of links and other small resources, and they often receive little engagement.

Management Team channels are composed of the direct reports of a manager and contain messages such as an employee declaring they are working from home.

Project Team channels are used to collect people on a specific project, who may come from different divisions and management chains.

Issue/Event channels are limited-lifetime channels spun up to collect communication about one specific issue. Their names correspond directly to the issue id in the bug management system. These channels have rapid, bursty communication and fall into disuse after the issue is resolved (typically in a matter of days).
§ 3.4.3 REACTIONS

Several norms have formed around the way reactions are used within channels. For instance, in Q&A channels, the organizers often use the "eyes" emoji to indicate that a request is being addressed (essentially assigning the task). After a request is complete, the message is marked with a checkmark (Figure 7).
§ 3.4.4 EVENTS
|
| 156 |
+
|
| 157 |
+
One interesting and effective event that occurs are ask me anything (AMAs) sessions. In these events, upper management blocks off a portion of their time for employees to ask them any questions they would like. Employees engage strongly with these events and even read AMAs from divisions they are not associated with. Some of our interview participants (P2, P3) reported being very interested in hearing what upper management had to say. Other events like product launches also occur on Slack.
|
| 158 |
+
|
| 159 |
+
Bryan Withings $4 : {32}\mathrm{{PM}}$ Hi @agskn, can you add my account to the sg-devops group please? Thanks! ...
|
| 160 |
+
|
| 161 |
+
1 reply Today at 5:06 PM
|
| 162 |
+
|
| 163 |
+
Figure 7: Asking a question in a help-specific channel with reactions.
§ 3.4.5 EMAIL VS SLACK

Although a large proportion of users engage regularly with Slack, there is still a population of people who continue to use email, and even the employees who primarily use Slack still revert to email. Anecdotally, we have found that Slack is often more quickly adopted by newer/younger employees. P1 uses email "when I want to send something a bit more official". P4 preferred email almost exclusively because it is "Easier to search for things". Interview participants also reported using email to communicate with people higher up in the management hierarchy. In fact, we noticed a trend in the data suggesting that upper-management level employees tend to use Slack less often than the rank-and-file. Although there is a need for email, P3 prefers Slack because "email threads [become], you know, untenable, they would grow and they would fork." Both Slack and email have their own niches in which they are useful.
§ 3.5 PRIMARY PROBLEMS

Our case study identified many interesting types of Slack usage in BigCorp; however, it also identified many problems employees encounter in day-to-day use. Two problems were particularly common throughout the case study: finding historical information, and keeping up with all the activity happening on Slack. These issues align strongly with Zhang et al.'s [10] findings from smaller Slack workspaces.

§ 3.5.1 FINDING HISTORICAL INFORMATION

The first common theme throughout the case study was users having difficulty finding historical information in the Slack workspace (P1, P2, P3). So even though the workspace contains lots of potentially useful information, it is not easily accessible. P1 specifically mentioned "I find the search tool in Slack a bit cumbersome, I usually avoid having to use it" and "I might search in Slack [to find information]..., but usually that is not very productive." It is unsurprising that finding historical information is difficult given the millions of messages and thousands of channels. Given this quantity and timespan of messages, an interface that only shows one channel's messages, and only a small sample of messages at a time, inevitably makes finding historical information difficult.
§ 3.5.2 KEEPING UP WITH THE WORKSPACE

The second especially common problem was with people trying to "keep up" with everything happening in the workspace. With so many channels and messages, it can understandably be overwhelming. P1 and P2 both subscribe to a large number of channels, but then rarely check them - leading to hundreds of unread messages. We found that people liked to subscribe to many channels for topics they were interested in but could not practically read all of the messages posted there. A related problem is trying to "catch up" with your channels after a vacation - with some people opting to simply "mark all messages as read" rather than trying to read or skim what they missed.
§ 4 SLACKTIVITY

Slack's focus is to provide a communication tool; however, our case study shows that users need more than just communication. Employees need to be able to access historical information and also keep up with their organization. We designed a system called Slacktivity to augment the use of Slack without hindering the ability to use Slack as a communication tool.

§ 4.1 DESIGN GOALS

The design of Slacktivity was guided by a set of five design goals.
§ D1: MAINTAIN CONSISTENT INTERFACE VOCABULARY WITH SLACK

Our system should share the same interface vocabulary already used in Slack. By reusing concepts users are familiar with, they can transfer knowledge they already have to learn our system faster and more naturally, without confusion. We found this principle especially important when sorting information.
§ D2: ENCOURAGE EXPLORATION

The linear, time-centric nature of Slack makes exploration difficult. Our system should support exploration by breaking out of the linear consumption of messages without breaking time continuity. Encouraging exploration also stands to improve finding historical information, as some information such as entire topics or unknown query terms are not directly searchable.
§ D3: ENABLE AND SUPPORT EMERGENT BEHAVIOURS

Emergent behaviour is difficult to predict, and as such it is impossible to design directly for it. However, our system should enable and support emergent behaviour by allowing for flexibility. By supporting emergent behaviour, we hope to enable events like AMAs in addition to other behaviours like the use of reactions for specific purposes.
§ D4: ENABLE CONSUMING INFORMATION AT VARYING DETAIL LEVELS

Slack's goal as a communication tool necessitates a focus on individual messages; however, users interact with and visualize Slack at a higher level when they are seeking information. As a result, our system should support consumption at varying degrees of detail, including levels not explicitly part of Slack. Supporting varying levels of detail can also assist users in keeping up with the workspace because they do not have to look at individual messages.

§ D5: TAKE ADVANTAGE OF ORGANIZATIONAL DATA

Organizations have other sources of data, such as HR databases. Wherever possible, we aim to take advantage of that information to help contextualize the information found within Slack within these other, existing, sources of data.
§ 4.2 IMPLEMENTATION

The system is implemented as a web application using React combined with D3.js. Because the dataset is so large, we serve aggregate data from a MongoDB database using a Node.js server.
§ 4.3 GALAXY VIEW

The galaxy view is the home page of the visualization, and one of the two main screens (Figure 8). It consists of four sections: clusters, trending, stream, and search. The galaxy view is designed to support exploration of the Slack workspace, as well as keeping up with the rest of the organization. Additionally, it acts as a jumping off point for accessing the messages view, the other main screen.

Figure 8: Galaxy view of Slacktivity.
§ 4.3.1 CLUSTERS (D4)

In the center of the screen are a series of hierarchical clusters (Figure 1). There are three concepts to keep in mind when describing the visualization: channels, prefix groups, and divisions. We will describe the clustering from the bottom upwards, starting at the channels. Each circle is a channel. The radius of the circle is proportional to the square root of the number of members in the channel. The colour of the circle is determined by the division the employees in that channel belong to. If there is no majority (< 50% of members from a single division), then the circle is coloured grey. The data regarding employee division is obtained from an HR data source (D5), as Slack does not provide it. Additionally, the saturation of each circle is determined by the strength of the majority: for example, a pure dark blue circle consists almost entirely of employees from the blue division.
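The channel encoding described above (square-root radius, majority colour, saturation) can be sketched as follows. This is an illustrative Python sketch, not the actual React/D3.js implementation; the division names, palette, and scale factor are hypothetical.

```python
from collections import Counter
from math import sqrt

# Hypothetical palette; the real division-to-colour mapping is internal to BigCorp.
DIVISION_COLOURS = {"blue": "#1f77b4", "green": "#2ca02c", "red": "#d62728"}

def channel_circle(member_divisions, scale=2.0):
    """Compute the visual encoding of one channel circle.

    member_divisions: one division name per channel member.
    Returns (radius, colour, saturation): radius grows with the square
    root of membership; the colour is the majority division's, or grey
    when no division holds > 50% of members; saturation reflects how
    strong the majority is.
    """
    n = len(member_divisions)
    radius = scale * sqrt(n)
    division, count = Counter(member_divisions).most_common(1)[0]
    share = count / n
    if share <= 0.5:                      # no majority -> grey
        return radius, "grey", 0.0
    return radius, DIVISION_COLOURS.get(division, "grey"), share

r, colour, sat = channel_circle(["blue"] * 9 + ["green"])
# 9 of 10 members are from the blue division: a blue circle at 0.9 saturation
```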
The next level of clustering, prefixes, is formed by the prefix of the channel name; for example, the #spg- channels form one prefix group. Large channel prefixes are identified by their label, but smaller prefixes are unlabelled to improve readability.

There is one final level of clustering, divisions. This final level clusters the prefixes by the majority of the channel colours in a prefix. For example, the #tech- channels are mostly grey, therefore that group of channels appears with other Misc channels. This last level of clustering is meant to represent the different divisions in the organization.
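The two grouping levels above can be sketched as a small hierarchy-building step. This is a hedged Python sketch of the idea only; the real system builds a D3.js hierarchy, and the input format (name, majority colour) is a simplifying assumption.

```python
from collections import Counter, defaultdict

def build_hierarchy(channels):
    """Group channels into prefix groups, then prefix groups into divisions.

    channels: list of (name, colour) pairs, where colour is the channel's
    majority-division colour (grey when there is no majority). The prefix
    is the text before the first '-' in the channel name; a prefix group
    lands in the division whose colour the majority of its channels share.
    Returns {division_colour: {prefix: [channel names]}}.
    """
    prefixes = defaultdict(list)
    for name, colour in channels:
        prefix = name.split("-")[0] + "-" if "-" in name else name
        prefixes[prefix].append((name, colour))

    hierarchy = defaultdict(dict)
    for prefix, members in prefixes.items():
        majority_colour = Counter(c for _, c in members).most_common(1)[0][0]
        hierarchy[majority_colour][prefix] = [n for n, _ in members]
    return dict(hierarchy)

tree = build_hierarchy([
    ("#tech-help", "grey"), ("#tech-news", "grey"), ("#tech-ops", "blue"),
    ("#spg-dev", "green"), ("#spg-support", "green"),
])
# #tech- lands in the grey ("Misc") division, #spg- in the green division
```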
The visualization is generated using a circle packing algorithm from D3.js, with the hierarchy levels described here. It also supports zooming to allow users to more carefully select a channel they might be interested in, and to explore some of the smaller channels (D2).

The galaxy visualization reveals interesting features about the organization. For example, the green channels consist of support teams. This explains why there are often green channels inside of other divisions. These channels bridge the gap between support and product teams. Additionally, although most of the grey channels inside of "Misc" such as #tech- or #fun- channels are expressly created for all employees, there are some groups of channels that represent a product of BigCorp, like #<eggplant>- . These products are inter-division products that combine employees from across BigCorp, which is why they are classified as Misc.
§ 4.3.2 THUMBNAILS (D2, D4)

Upon hovering over a channel in the cluster visualization, a thumbnail appears displaying useful information (Figure 9). It shows a line graph of the number of messages sent since the Slack workspace was created, the channel name, the number of members, the total number of messages, and the top reactions sent in this channel. A user can use these features to easily determine whether they would like to join a channel or explore it further. The thumbnail indicates how casual the channel is (e.g., if the top reactions contain a party parrot, the channel is likely to be casual), how active it is, and how many people a user will need to interact with. Thumbnails encourage exploration by allowing users to interactively investigate channels they might find interesting. Furthermore, they provide a level of detail that Slack does not by showing the lifetime of a channel.

Figure 9: Channel thumbnail: the channel name is listed across the top, followed by the current number of members, the total number of messages, a small trend line indicating messages sent over time, and the top 5 used reactions.
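The aggregation behind a thumbnail can be sketched as below. This is an illustrative Python sketch under assumed field names ('name', 'members', 'messages', 'month', 'reactions'); the real system serves these aggregates from MongoDB.

```python
from collections import Counter

def thumbnail_stats(channel):
    """Aggregate the fields shown in a channel thumbnail.

    channel: dict with hypothetical keys 'name', 'members', and
    'messages', where each message carries a 'month' index and a list of
    'reactions'. Returns the name, member count, total message count, a
    per-month trend line, and the top 5 reactions.
    """
    msgs = channel["messages"]
    trend = Counter(m["month"] for m in msgs)          # messages over time
    reactions = Counter(r for m in msgs for r in m["reactions"])
    return {
        "name": channel["name"],
        "members": len(channel["members"]),
        "total_messages": len(msgs),
        "trend": [trend[i] for i in range(max(trend) + 1)] if trend else [],
        "top_reactions": [r for r, _ in reactions.most_common(5)],
    }

stats = thumbnail_stats({
    "name": "#spg-dev", "members": ["a", "b", "c"],
    "messages": [{"month": 0, "reactions": [":eyes:"]},
                 {"month": 0, "reactions": [":party_parrot:", ":eyes:"]},
                 {"month": 2, "reactions": []}],
})
# trend shows an inactive month 1 between two active months
```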
§ 4.3.3 STREAM

On the right side of the screen is a stream of all messages as they come into the Slack workspace (Figure 8). This is a livestream of all messages; no messages are filtered. However, they are grouped into their respective channels, with previous messages kept for context. As a message comes into the stream it is also highlighted in the galaxy view with a sonar effect that slowly dissipates as the message grows older (some sonar effects are visible in the blue division in Figure 8). The clustering design of the visualization makes it possible to see which parts of the organization are more active than others. By seeing which parts of the company are active, an employee can keep up with the news of the company.
§ 4.3.4 TRENDING

On the left side of the screen, a single trending channel is chosen to be displayed (Figure 8, left). We compute an importance score for each message and take the sum of the importance of all messages sent to a channel, weighted inversely by the seconds since each was sent. This means recent messages have a higher weighting. For each individual message we calculate five metrics: the length of the message in characters (capped at 80), the total number of replies the message has received, the total number of reactions, the number of unique reactions, and the number of attachments the message has. Each of these values is normalised by dividing by the largest observed value so that all numbers fall between 0 and 1. We then take the sum of these values:

importance $= \left| \text{replies} \right| + \min\left( \text{len}\left( \text{msg} \right), 80 \right) + \left| \text{reactions}_{\text{unique}} \right| + \left| \text{reactions} \right| + \left| \text{attachments} \right|$
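The per-message score with batch normalisation can be sketched as follows. This is a minimal Python sketch with hypothetical field names ('text', 'replies', 'reactions', 'attachments'); the per-channel recency weighting described above is applied separately and is not shown here.

```python
def importance_scores(messages):
    """Score messages as in the trending computation.

    Each of the five metrics is normalised by the largest value observed
    in the batch so it falls in [0, 1], then the metrics are summed.
    """
    def metrics(m):
        return (
            min(len(m["text"]), 80),          # capped message length
            m["replies"],                     # total replies
            len(m["reactions"]),              # total reactions
            len(set(m["reactions"])),         # unique reactions
            m["attachments"],                 # attachment count
        )
    rows = [metrics(m) for m in messages]
    maxima = [max(col) or 1 for col in zip(*rows)]     # avoid divide-by-zero
    return [sum(v / mx for v, mx in zip(row, maxima)) for row in rows]

scores = importance_scores([
    {"text": "x" * 100, "replies": 2, "reactions": ["a", "a", "b"], "attachments": 0},
    {"text": "hi", "replies": 0, "reactions": [], "attachments": 0},
])
# the long, replied-to, reacted-to message dominates the short quiet one
```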
By surfacing a trending channel, we give employees the ability to keep up with the company, including paying attention to AMAs as they happen.

§ 4.3.5 SEARCH (D2)

The user can also search for a channel name by typing into the text box across the top of the screen (Figure 8). The search results contain the same visualization as the thumbnails used in the cluster visualization (Figure 9). Users can search to discover new channels they might not already be a part of, as well as explore the Slack workspace as a whole.
§ 4.4 MESSAGES VIEW

While the galaxy view gives an overview of the Slack workspace, the messages view (Figure 10) is designed to support detailed exploration and finding historical information. The messages view can be accessed by either searching for a channel from the galaxy view or clicking on a channel in the cluster visualization.

§ 4.4.1 CHANNEL OVERVIEW

Across the top of the page is the channel overview (Figure 10A). This includes the channel name, topic, and description. Additionally, there is a deep link beside the channel name to go directly into the Slack application to that channel (D1).
§ 4.4.2 ALL MESSAGE VISUALIZATION

The primary view of this visualisation is the all message visualisation (Figure 10C). It represents all of the messages visible in the current time slice. Each message is represented as a horizontal bar, where the colour represents the user who sent the message and the length of the bar is proportional to the length of the message. Only the 5 users who have sent the most messages across the history of the channel are coloured, to reduce visual clutter. Threaded replies appear as smaller boxes underneath the message they belong to. Each column of messages represents either a week or a month, depending on the period of time that is currently selected. The vertical position of each message is decided by simply stacking bars upwards until all of the messages have been stacked. The height of each set of messages therefore represents how active the channel was for that time bucket. Hovering over a message displays a thumbnail showing the message and any threaded replies it has.
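The stacking layout for one time-bucket column can be sketched as below. This is an illustrative Python sketch; the bar and reply heights are hypothetical constants, and the real layout is done in D3.js.

```python
def layout_column(messages, bar_height=4, reply_height=2):
    """Stack one time bucket's message bars upwards, bottom to top.

    messages: list of (length, n_replies) pairs for one week/month
    column. Returns (bars, column_height), where each bar records its y
    offset; threaded replies add smaller boxes under their message. The
    final height reflects how active the channel was in this bucket.
    """
    bars, y = [], 0
    for length, n_replies in messages:
        bars.append({"y": y, "width": length, "height": bar_height})
        y += bar_height + n_replies * reply_height
    return bars, y

bars, height = layout_column([(30, 0), (50, 2)])
# the second message starts above the first; its two replies add to the column height
```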
Figure 10: Messages view for a single channel with sections for A) channel overview, B) filters, C) all message visualization, D) timeslice, and E) message pane.
§ 4.4.3 MESSAGE PANE (D1)

Along the right side is the message pane, which is designed to anchor the user back into the Slack workspace (Figure 10E). It is sorted chronologically just like Slack, with the newest messages appearing at the bottom. Each message is deep linked back into Slack. A light grey viewfinder in the channel visualization represents the current scroll position of the message pane. Clicking on a message in the visualization will scroll to that message in the message pane.
§ 4.4.4 TIMESLICE (D4)

The timeslice is chosen using the timeline across the bottom of the page (Figure 10D). Each point in the line graph is a month, with the y-axis representing the total number of messages sent that month. There are two preset buttons for choosing "this month" as well as "this year". The timeslice also limits the number of messages displayed in the message pane.
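Bucketing messages into the per-month points of the timeline can be sketched as follows; a minimal Python sketch, assuming Slack-style epoch-second timestamps (Slack's `ts` field is a string such as "1588888888.000200").

```python
from datetime import datetime

def monthly_counts(timestamps):
    """Bucket message timestamps into per-month totals for the timeline.

    timestamps: iterable of Unix epoch seconds (as strings or numbers).
    Returns a sorted list of ((year, month), count) points, one per
    active month.
    """
    counts = {}
    for ts in timestamps:
        d = datetime.utcfromtimestamp(float(ts))
        key = (d.year, d.month)
        counts[key] = counts.get(key, 0) + 1
    return sorted(counts.items())

points = monthly_counts(["1577836800.0", "1577923200.0", "1580515200.0"])
# two messages in January 2020, one in February 2020
```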
§ 4.4.5 FILTERING

All filtering is done by the filter bar just below the channel overview (Figure 10B and Figure 11). Filters include message sender, important messages, reaction, and exact text matching by message. When a message is filtered out it is removed from the message pane and also given reduced opacity in the visualization. Clicking a user profile image filters to only show messages from that user. This filter also acts as a legend.
Figure 11: Filters for the messages view, allowing a user to explore a Slack channel by reducing the number of messages they need to read.

The slider filters messages using the same importance score used to detect trending channels. Adjusting the slider controls the percentile of messages displayed. This allows users to quickly scan and review a channel by condensing it to only the most relevant messages.
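The percentile behaviour of the slider can be sketched as below. This is an illustrative Python sketch; the tie-breaking and rounding rules here are assumptions, not the paper's exact implementation.

```python
def importance_filter(scores, percentile):
    """Indices of messages kept by the importance slider.

    Keeps the top `percentile` percent of messages by importance score,
    mirroring how the slider condenses a channel to its most relevant
    messages. Filtered-out messages would be drawn at reduced opacity.
    """
    if not scores:
        return []
    k = max(1, round(len(scores) * percentile / 100))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

# Keep the top 25% of 8 messages -> the 2 highest-scoring ones
keep = importance_filter([0.2, 3.1, 0.4, 2.8, 0.1, 0.9, 1.5, 0.3], 25)
```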
Users can also filter by the 5 most common reactions used in the channel. Clicking one of the reactions filters to display only messages with that reaction. Supporting reaction filters allows for emergent behaviour by giving flexible filters (D3). Filtering by exact text matching is done by typing into the search bar on the far right side of the filters.

Figure 12: The messages view displaying a person rather than a channel.
§ 4.4.6 USER MESSAGES VIEW

Although we have presented the messages view as built for channels, it is also possible to invert the relationship so that the view displays a user's messages rather than a channel's messages (Figure 12). Oftentimes when searching for information a person is more important than content [8]. By allowing exploration of the messages sent by a user, we enable exploration of that user (D2, D4).
The user messages view can be entered by clicking on a username or profile picture from any message in Slacktivity, for example the message stream from the galaxy view or the message pane in the messages view.

This view differs from the messages view of a channel only in that the colours in the all message visualization reflect the channel a message was sent in rather than the user who sent it. Additionally, the filters support filtering by channel rather than by message sender.

§ 4.5 EXAMPLE TASKS

Slacktivity can be used in several ways; we present three sample walkthroughs to illustrate some useful tasks.
§ 4.5.1 TASK 1: A NEW EMPLOYEE JOINS AI RESEARCH

Often people transfer teams or join the company as new employees. New and transferring employees have many questions they need to answer, such as understanding how their new team functions, and discovering how their team interacts with the rest of the organization. The galaxy view can give the new employee a broad overview of the scale of the company they have joined, and how big of a niche their team is in. They can also easily compare how their coworkers use Slack by watching for activity from the prefixes their team channel belongs to and contrasting that with other divisions and prefixes.

§ 4.5.2 TASK 2: EXPLORING A TOPIC ACROSS THE ORGANIZATION

Another employee might be interested in exploring work BigCorp has done with a particular machine learning technique. The employee might know there is a research division that researches artificial intelligence, so they begin at the galaxy view by searching for channels that begin with #research. Upon seeing an active channel, they can navigate to the messages view for that channel. The channel has thousands of messages and they do not have time to scroll through them all, so they move the importance slider to show only the top 5% of messages in the channel. From here they scroll the messages pane and read the full text of the remaining messages. Finally, they notice that about half of the important messages were sent by a single person. From this exploration the employee learned about a new channel they could join, found a knowledgeable person in the organization to ask further questions, and learned specific details from reading important chat messages.

§ 4.5.3 TASK 3: KEEPING UP WITH THE ORGANIZATION

Rather than actively exploring Slacktivity, a user can keep a window or tab open in the background to the galaxy view. The user can then go back to the tab occasionally throughout their day and read only the trending channel. Through this, the user can discover if there is an ongoing AMA, or important discussion they can either follow or take part in. They might also see new channels they are interested in. This changes the paradigm of Slack from push communication to pull, allowing a user to keep up with the company by only spending the amount of time they would like.
§ 5 DISCUSSION

In this paper we described a case study of Slack use at BigCorp. Our case study identified several interesting behaviours and pain points with Slack use at scale. To address this, we designed Slacktivity to augment Slack at BigCorp. However, there are still interesting implications and unanswered questions regarding both our case study and Slacktivity, such as generalisability and evaluation.

§ 5.1 INCREASED SLACK USE

We hope that Slacktivity helps employees get value out of using Slack. However, this might have the indirect effect of increasing the amount of time employees spend on Slack. Although increased group chat use might yield many benefits for communicating in the workplace, encouraging its use could have drawbacks. In a study on instant messaging use, Cameron and Webster [29] reported that the use of instant messaging results in more interruptions throughout the day. It is unclear whether maximising the effectiveness of Slack would produce a net benefit when contrasted with increased use.
§ 5.2 SLACKTIVITY'S EFFECT ON PRIVACY

Slacktivity increases the number of ways an employee can view and discover a message, essentially making all messages a little less private. For example, the stream in the galaxy view has the added effect of showing employees how public their messages really are. Our case study revealed that some employees might not understand how private their messages are. Depending on the perspective, helping employees understand how public their messages are could be seen as either a positive or a negative. It is negative from an organizational perspective because users might be discouraged from using public channels; BigCorp explicitly wants employees to participate in public channels to encourage an environment of collaboration and openness. However, from an employee's perspective, having a realistic view of how private their messages are is important to maintain trust in the organization.

§ 5.3 SCALING UP

BigCorp is a large organization, having more than 10,000 employees. However, there exist even larger organizations with over 100,000 employees. It is unclear whether our system scales to such a large organization, or if an organization that large would use Slack in a similar fashion. For example, if the ratio between channels (8,204) and users (12,084) remains constant, then one can expect a workspace with 100,000 employees to have 67,891 channels. So many channels would likely require another form of organization, perhaps another level of hierarchy on top of divisions and prefixes.
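The extrapolation above is a simple ratio; the following one-liner reproduces the reported figure from the workspace counts given in the text.

```python
# Extrapolate channel count assuming the channels-per-user ratio holds
channels, users = 8204, 12084
projected = round(channels / users * 100_000)
# projected channel count for a 100,000-employee workspace
```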
We suspect it is unlikely that the median channel gets larger as an organization grows. Project and team channels are probably the same size whether an organization has 1,000 employees or 100,000 employees. However, some channels are necessarily company wide. These channels already suffer because of the huge population their messages address. Sometimes mistakes happen and somebody notifies the entire channel by erroneously using @channel. This problem might be exacerbated when even more employees are in a channel.
§ 5.4 SCALING DOWN

On the opposite end of the spectrum, it is interesting to consider whether our case study and Slacktivity also generalise to small businesses with 100 employees, or medium businesses with 1,000. The messages view probably provides the same value because it was designed to work with small channels as well as with massive channels. Most of our findings regarding how Slack is used probably scale to medium sized organizations, as managing that many channels will require strict naming schemes and other forms of organization.
§ 5.5 GENERALISABILITY

We have presented a case study of the use of Slack in a large organization. The question stands: do our results generalise to other organizations? This is a difficult question to answer because most organizations are unwilling to publicly disclose the detailed information described in our case study. However, we have some confidence that some aspects do generalise. 65 of the Fortune 100 companies use Slack, indicating that other companies do use Slack at a large scale. Other large organizations like IBM also make channels "public by default" [30]. Beyond these core aspects, other parts of our findings generalise as well: issue-specific channels are used in companies like Intuit [31], and other companies host events using Slack [32], [33]. It stands to reason that many of the other problems and descriptions of Slack use at scale described in this paper generalise to other large organizations.

A limitation of our work is that we do not evaluate Slacktivity with a user study. However, our example walkthroughs make an argument as to how Slacktivity might be used to solve some of the problems in our case study. Furthermore, this paper introduced novel methods of visualising Slack at scale and a detailed case study of how Slack is used at scale today. Future iterations of this work will be deployed at BigCorp, allowing us to study the long term impacts a deployment of Slacktivity might have.

§ 6 CONCLUSION

In this paper we presented a case study of how Slack is used at BigCorp, a large organization with over 10,000 employees. Our case study highlighted interesting behaviour of Slack usage at a large corporation, and problems encountered by employees using Slack. Based on the case study we introduced a novel visualization technique, Slacktivity, designed to augment Slack in the workplace and enable its use at scale.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/3baVPWLmUiO/Initial_manuscript_md/Initial_manuscript.md
# A stricter constraint produces outstanding matching: Learning reliable image matching with improved networks
Anonymous Authors

Author's Affiliation

## Abstract

Image matching is widely used in many applications, such as visual-based localization and 3D reconstruction. Compared with traditional local features (e.g., SIFT) and outlier elimination methods (e.g., RANSAC), learning-based image matching methods, e.g., HardNet and OANet, show promising performance under challenging environments and on large-scale benchmarks. However, the existing learning-based methods suffer from noise in the training data, and existing loss functions, e.g., the hinge loss, do not work well in image matching networks. In this paper, we propose an end-to-end image matching method that uses less training data to obtain more accurate and robust performance. First, a novel data cleaning strategy is proposed to remove the noise in the training dataset. Second, we strengthen the matching constraints by proposing a novel quadratic hinge triplet (QHT) loss function to improve the network. Finally, we apply a stricter OANet in sample judgement to produce more outstanding matching. The proposed method shows state-of-the-art performance on the large-scale and challenging Phototourism dataset, and also took 1st place in the CVPR 2020 Image Matching Challenge Workshop Track 1 under the metric of reconstructed pose accuracy.
Keywords: Image matching, HardNet, OANet, SIFT, large scale, challenging environments, pose accuracy.
## 1 INTRODUCTION

Image matching is a foundational task for several high-level 3D computer vision tasks, such as 3D reconstruction, structure-from-motion, and camera pose estimation. The goal of the task is to solve the correspondence between pixels of the same physical region in two images that share a common visual area [1]. Nowadays, with the widespread adoption of deep learning across many computer vision areas and the need for long-term, large-scale environment tasks, the shortcomings of traditional keypoint-based image matching methods have gradually become apparent. The major difficulty is that local features are usually evaluated through the accuracy of descriptors on small datasets, which only reflects matching performance and is not suitable for integrating and evaluating downstream tasks.
Deep learning based solutions for image matching promise to overcome the disadvantages of traditional keypoint-based methods: many works [2][3] have demonstrated their ability to integrate multi-stage tasks and to optimize and evaluate different metrics on large datasets. Although these methods avoid complex hand-crafted design and offer a convenient pipeline for downstream tasks, they still face challenging conditions such as illumination variation, viewpoint changes, and repeated texture, which are common in outdoor datasets where scene scale and conditions are highly variable. This leads to poor accuracy and robustness.

Figure 1: Our matching method's correspondences under extreme illumination variation and viewpoint changes.
To solve this problem, end-to-end solutions and modified descriptors have been proposed, on the premise that the descriptor or network can learn more reliable features across image pairs. A novel feature representation with log-polar sampling [4] was proposed to achieve scale invariance. Other works [2][3] propose jointly learned feature detection and description to improve image matching performance.
Inspired by these observations, we propose a three-stage pipeline comprising feature extraction, feature matching, and outlier pre-filtering to compute corresponding pixels from image pairs, as shown in Figure 1. At each step, we add constraints that drive the algorithm toward better matching results. Unlike previous methods, our method requires only a lightweight model and does not need to train multiple branches separately.
We show that our method outperforms previous algorithms and achieves state-of-the-art performance on the Phototourism benchmark, which features large-scale environments and challenging conditions. We provide detailed insight into how the improved data processing strategy, the HardNet [8] loss function, and a modified OANet [9] combined with a guided matching algorithm [10] help the

Figure 2: Image matching based pose estimation pipeline, with some popular traditional and learning-based methods. Our method is based on the pipeline with some improvement on the chosen methods (indicated by a green box with a red tick).
pipeline achieve accurate and robust matching results, and we evaluate the method on the pose estimation task.
The main contributions include:
- We construct a new patch dataset from the given Phototourism images and sparse models, similar to the UBC Phototour dataset [11], and pre-train our model on it;
- We propose a novel quadratic hinge triplet (QHT) loss function for the feature descriptor network HardNet, and an improved OANet combined with a guided matching strategy to compute reliable feature matches;
- Through extensive experiments and an ablation study of each module, we show that our algorithm achieves state-of-the-art reconstructed-pose accuracy, ranking first on both the stereo and multi-view tracks of the public Phototourism Image Matching challenge 2020 [7].
## 2 RELATED WORK
Image matching plays an important role in many high-level computer vision tasks [14-15]: it finds the pixels in an image pair that correspond to the same physical region under geometric constraints [1]. Feature extraction, feature matching, and outlier pre-filtering are the most vital components of the image matching pipeline, for which both traditional and deep learning based methods have been developed in recent years. Figure 2 shows the main parts of the image-matching-based pose estimation pipeline, including common methods and the target algorithms we chose to improve.
Feature Extraction. End-to-end feature extraction and matching methods can be classified into detect-then-describe (e.g., SuperPoint [16]), detect-and-describe (e.g., R2D2 [3], D2-Net [2]), and describe-then-detect (e.g., DELF [19]) strategies according to the order in which detection and description are executed, each suiting different needs. However, these methods either perform poorly under challenging conditions and lack keypoint repeatability, or are inefficient in matching and storage. Traditionally, detectors and descriptors are applied separately in the pipeline: SIFT [5] (and RootSIFT [26]) and SURF [21] are the most popular detectors, followed by learned descriptors, among which LogPolar [4] outperforms ContextDesc [24] in relative pose error, while SOSNet [25] and HardNet [8] surpass GIFT [23] on the public validation set.
Outlier pre-filtering. There are a large number of incorrect matches among the candidate matches, which bring noise into the subsequent pose estimation. Traditional outlier removal methods include the ratio test [5], GMS [6], and guided matching [10].
One deep learning approach judges whether a match is an outlier via the pose regressed by a convolutional network, but such networks are usually difficult to train to convergence. Another approach converts the pose into per-match correctness labels through the epipolar constraint, turning the regression problem into a classification problem of inlier versus outlier, which can be learned with the standard binary cross-entropy loss.
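The epipolar labeling step above can be sketched as follows, using the first-order (Sampson) approximation of the epipolar error; the function name and threshold value are illustrative, not the paper's exact implementation:

```python
import numpy as np

def epipolar_labels(pts1, pts2, F, threshold=1e-4):
    """Label putative matches as inlier/outlier via the epipolar constraint.

    pts1, pts2: (N, 2) matched image coordinates; F: 3x3 fundamental
    matrix derived from the known pose. Returns a boolean inlier mask.
    """
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])          # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    Fx1 = x1 @ F.T                        # epipolar lines in image 2
    Ftx2 = x2 @ F                         # epipolar lines in image 1
    num = np.sum(x2 * Fx1, axis=1) ** 2   # (x2^T F x1)^2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    sampson = num / den                   # first-order geometric error
    return sampson < threshold            # True = labeled as inlier
```

These binary labels are what allow the pre-filtering network to be trained with a standard classification loss.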
However, the input matching pairs are unordered, so the network must be permutation-invariant (insensitive) to reorderings of the matches, which rules out plain convolutional or fully connected networks. CNe [12] draws on the idea of PointNet [13] and proposes a weight-sharing multi-layer perceptron to handle this invariance. Because the same perceptron processes each matching pair independently, however, information does not circulate between pairs: the network cannot aggregate the predicted per-match judgments and thus cannot learn useful information from the image pair as a whole. Context normalization [17], an instance normalization technique widely used in image translation [29] and GANs [27], normalizes the features of the matching pairs so that information is exchanged, but this blunt use of mean and variance ignores the complexity of global features and cannot capture correlations between parts. OANet [9] proposed DiffPool and DiffUnpool layers to promote information flow and communication among internal neurons.
To estimate a more accurate pose of the given image pairs or 3D reconstruction through the matching process, we consider improving the pipeline with respect to the following aspects:
- Obtain stable and accurate keypoints. Keypoints should be detected consistently, e.g., a point detected in the first image should also be detected stably in the second image taken under a different viewpoint or illumination.
- Provide reliable and distinguishable descriptors. Under different environmental conditions, the same keypoint should have similar descriptors, so that it can be paired across images by nearest-neighbor matching.
- Hold powerful outlier pre-filtering ability. Filtering out wrong matches while retaining most of the correct pairs helps obtain an accurate final pose estimate.
## 3 Method
In this paper, we aim to learn accurate and reliable image matching under large-scale challenging conditions. In contrast to previous works that use end-to-end solutions with insufficient accuracy at the feature extraction stage, or purely traditional solvers throughout the pipeline, we combine a traditional detector with an improved HardNet and OANet pretrained on a constructed dataset. In particular, we propose a dataset construction method to optimize hyperparameters before training on the large-scale dataset, introduce a new loss function for the description stage based on the HardNet network, and, in the outlier pre-filtering stage, combine guided matching with the proposed stricter OANet to learn more accurate matches.

Figure 3: The whole architecture with the proposed methods. We first use the SIFT feature detector, then apply HardNet to map each 32x32 patch input into a 128-dimensional descriptor; after nearest-neighbor matching and outlier pre-filtering, we obtain the final matches, which are used to compute the pose estimate.
The whole architecture of the proposed method is shown in Figure 3. First, 128-dimensional features are extracted by a basic HardNet from the input ${32} \times {32}$ patches. Then, after the matching stage, the proposed stricter OANet casts the correspondences as $\mathrm{N}$ binary classifications (inlier / outlier), a formulation that is permutation-invariant and feasible with convolutional layers. Finally, the pre-filtered matches are used to compute pose estimates in the stereo and multi-view tasks.
### 3.1 Dataset Construction
#### 3.1.1 UBC Dataset Construction for Pretraining
In order to quickly train our lightweight method and search for hyperparameters to carry over to the pretrained model, we use the UBC Phototour dataset for pretraining; it consists of patch data and is suitable for HardNet training.
#### 3.1.2 Phototourism Dataset Construction for Training
After pretraining on the UBC Phototour dataset for fast parameter selection, we construct a clean dataset with low noise and little redundancy from the Phototourism training scenes for the training period. Removing redundant data greatly shortens training, while reducing label noise helps the gradient-descent optimization of the loss function and improves network performance.
To reduce label noise in the dataset, we set thresholds to discard low-confidence data: the 25% of images with the fewest 3D points, and 3D points whose tracks are shorter than 5. To remove redundancy, for each 3D point tracked more than 5 times we sample only 5 of its observations. The sampling is repeated 10 times, computing the NCC value for each sample (a lower NCC indicates lower similarity between the two images and a harder match), and the sample with the lowest NCC is retained. In addition, we apply data augmentation with random flips and random rotations in both the pretraining and training processes.
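The two filtering rules above can be sketched as follows; the data structures and names are our own simplification, and the NCC-driven selection of which observations to keep is omitted:

```python
def clean_scene(image_point_counts, track_lengths, min_track=5):
    """Simplified sketch of the Sec. 3.1.2 cleaning rules.

    image_point_counts: {image_id: number of visible 3D points};
    track_lengths: {point_id: track length}.
    """
    # Discard the 25% of images observing the fewest 3D points.
    ranked = sorted(image_point_counts, key=image_point_counts.get)
    kept_images = set(ranked[len(ranked) // 4:])
    # Discard 3D points whose tracks are shorter than min_track.
    kept_points = {p for p, t in track_lengths.items() if t >= min_track}
    return kept_images, kept_points
```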
### 3.2 Feature Extraction with Improved HardNet
Feature extraction includes keypoint detection and feature description. The SIFT detector [5] is used first to extract the position and scale of keypoints in the input image. We adopt the OpenCV SIFT implementation with a low detection threshold to generate up to 8000 points with a fixed orientation ${}^{1}$. A single pixel cannot describe the physical information around a keypoint, so a region around each keypoint is cropped at its scale (called a patch) and resized to ${32} \times {32}$ as the input to the HardNet network, which extracts the patch's descriptor.
After the relatively simple 7-layer HardNet, each ${32} \times {32}$ input patch is described by a 128-dimensional feature vector. We retain the network structure and improve its loss function to train the network stably and efficiently on the constructed dataset.
HardNet uses a triplet loss with hard-sample mining so that descriptor distances within a class are small and descriptor distances between classes are large.
To further improve the effectiveness and stability of model learning, we apply a quadratic hinge triplet (QHT) loss, inspired by the first-order similarity loss in SOSNet [25], which squares the triplet term while sharing HardNet's "hardest-within-batch" negative mining strategy. The description loss ${\mathcal{L}}_{\text{des}}$ is expressed as:
$$
{\mathcal{L}}_{\text{des}} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\max {\left( 0,1 + {d}_{i}^{\text{pos}} - {d}_{i}^{\text{neg}}\right) }^{2} \tag{1}
$$

$$
{d}_{i}^{\text{pos}} = d\left( {{a}_{i},{pos}_{i}}\right) \tag{2}
$$

$$
{d}_{i}^{\text{neg}} = \mathop{\min }\limits_{{\forall j \neq i}}\left( {d\left( {{a}_{i},{neg}_{j}}\right) }\right) \tag{3}
$$
where ${d}_{i}^{\text{pos}}$ is the L2 distance between the anchor descriptor and the positive descriptor, and ${d}_{i}^{\text{neg}}$ is the minimum of all distances between the anchor descriptor and the negative descriptors. A positive sample is a patch from a different image corresponding to the same 3D point as the anchor; a negative sample is a patch obtained from a different 3D point.
The quadratic triplet loss weights the gradients with respect to the network parameters by the magnitude of the loss. Compared with HardNet's plain hinge, the QHT gradient is more sensitive to changes in the loss: it scales with the margin violation $1 + {d}_{i}^{\text{pos}} - {d}_{i}^{\text{neg}}$, so hard triplets are emphasized while nearly satisfied triplets contribute vanishing gradients, making learning more effective and stable.
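A minimal NumPy sketch of the QHT loss of Eqs. (1)-(3), under the simplifying assumption that row i of both descriptor arrays forms a positive pair (HardNet's full within-batch mining also considers negatives from the anchor column, which is omitted here):

```python
import numpy as np

def qht_loss(anchors, positives, margin=1.0):
    """Quadratic hinge triplet loss (Eq. 1) with hardest-within-batch mining."""
    # Pairwise L2 distance matrix between anchor and positive descriptors.
    d = np.linalg.norm(anchors[:, None, :] - positives[None, :, :], axis=2)
    d_pos = np.diag(d)                                  # d_i^pos (Eq. 2)
    # Hardest negative: mask the positives on the diagonal, take row minimum.
    d_neg = np.where(np.eye(len(d), dtype=bool), np.inf, d).min(axis=1)  # Eq. 3
    return np.mean(np.maximum(0.0, margin + d_pos - d_neg) ** 2)
```

Squaring the hinge is the only change relative to HardNet's loss; the mining and margin stay the same.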
In addition, the improved model is more sensitive to noise in the data: wrong positive and negative labels would degrade its performance, a risk avoided by the data denoising work proposed above.
---

${}^{1}$ https://github.com/vcg-uvic/image-matching-benchmark-baselines

---
### 3.3 Stricter OANet with Dynamic Guided Matching
OANet adds DiffPool and DiffUnpool layers on top of CNe to promote information circulation and communication among internal neurons. The two network models can be abstracted as in Figure 4: $\mathrm{N}$ matching pairs of dimension $\mathrm{D}$ are processed by the CNe perceptron to produce an output of the same dimension, whereas OANet reduces the input to an $\mathrm{M} \times \mathrm{D}$ matrix through the DiffPool layer, merges information in this bottleneck, and then restores the $\mathrm{N} \times \mathrm{D}$ output through DiffUnpool. DiffPool maps the $\mathrm{N}$ input matches to $\mathrm{M}$ clusters by learning soft-assignment weights that aggregate information, and DiffUnpool redistributes the information back to $\mathrm{N}$ dimensions. The network remains insensitive to permutations of the input matching pairs.
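The DiffPool/DiffUnpool abstraction can be illustrated at the shape level as follows. This is only a sketch of the data flow: in OANet the soft-assignment logits are predicted from the features themselves by learned layers, not supplied as fixed matrices as they are here.

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def diff_pool_unpool(x, pool_logits, unpool_logits):
    """Shape-level sketch of DiffPool/DiffUnpool.

    x: (N, D) match features; pool_logits: (N, M) soft-assignment logits;
    unpool_logits: (M, N) logits for redistributing the pooled features.
    """
    s = softmax(pool_logits, axis=0)     # soft assignment of N matches to M clusters
    pooled = s.T @ x                     # (M, D): aggregated cluster features
    u = softmax(unpool_logits, axis=0)   # distribute clusters back over N matches
    return u.T @ pooled                  # (N, D): same shape as the input
```

The (N, D) → (M, D) → (N, D) round trip is what lets information flow between otherwise independently processed matches.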

Figure 4: Abstraction of the CNe and OANet network models: CNe processes the N x D input with a shared perceptron, while OANet pools it to M x D through DiffPool and restores the N x D output through DiffUnpool.
We convert pose accuracy into matching accuracy for learning, so the accuracy of the match labels affects the learning ability of the model. We made the following improvements to OANet to make positive-sample selection more rigorous and the match labels more accurate, which effectively filters out exceptional points and improves matching and pose estimation performance. We tightened the geometric-error threshold from 1e-4 to 1e-6. In addition, we added a point-to-point constraint on top of OANet's point-to-line constraint: a match is judged a negative sample when its projection distance exceeds 10 pixels.
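The stricter labeling rule amounts to requiring both bounds at once; a sketch (the function name and argument conventions are ours, and the per-match errors are assumed precomputed):

```python
def match_label(geometric_err, reproj_dist_px, geo_th=1e-6, px_th=10.0):
    """Stricter positive-sample rule: the tightened geometric-error bound
    and the added point-to-point reprojection bound must BOTH hold;
    otherwise the match is labeled as a negative sample."""
    return geometric_err < geo_th and reproj_dist_px <= px_th
```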
Generally, too few matches may lead to inaccurate pose estimation. On top of OANet, for image pairs with fewer than 100 matches, we propose dynamic guided matching (DGM) to increase the number of matches. Unlike traditional guided matching [10], a dynamic threshold is applied to the Sampson distance constraint according to the number of matches in the image pair; we argue that fewer matches require a larger threshold. The dynamic threshold ${th}$ is set by the formula:
$$
{th} = t{h}_{\text{init}} - \frac{n}{15} \tag{4}
$$
where $t{h}_{\text{init}}$ is 6 and $n$ is the number of original matches in the image pair. For image pairs with more than 100 matches, we directly apply DEGENSAC [21] to obtain the inliers for submission.
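Eq. (4) is small enough to write out directly; applying it as a filter over precomputed Sampson errors might look like this (the filtering helper is our own illustration):

```python
def dgm_threshold(n_matches, th_init=6.0):
    """Dynamic Sampson-distance threshold of Eq. (4): fewer initial
    matches yield a looser (larger) threshold. Applied only when an
    image pair has fewer than 100 matches."""
    return th_init - n_matches / 15.0

def dynamic_guided_matching(candidates, sampson_errors, n_matches):
    """Keep candidate matches whose Sampson error is below the
    dynamically chosen threshold (illustrative helper)."""
    th = dgm_threshold(n_matches)
    return [m for m, e in zip(candidates, sampson_errors) if e < th]
```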
### 3.4 Implementation Details
We train the proposed model, integrating the improved SIFT, HardNet, and OANet, on the Phototourism dataset [7], with HardNet pretrained on the constructed UBC patch dataset [11]. The whole model is implemented in PyTorch [18] on an NVIDIA Titan X GPU. All models were trained from scratch; no external pre-trained models were used. For the stereo task, the co-visibility threshold is restricted to 0.1, while for the multi-view task the minimum number of 3D points is set to 100 and no more than 25 images are evaluated at a time.
In the description stage, input patches are cropped to ${32} \times {32}$ with random flips and random rotations. Optimization uses the SGD solver [28] with a learning rate of 10 and weight decay of 0.0001; the learning rate is linearly decayed to zero within 15 epochs.
In the outlier pre-filtering stage, we employ a dynamic learning rate schedule for training: the learning rate increases linearly from zero to its maximum during a warm-up period (the first 10,000 steps) and decays step by step after that. Typically, the maximum learning rate is $1\mathrm{e}{-5}$ and the decay rate is $1\mathrm{e}{-8}$. We train the fundamental-matrix estimation model with the parameter geo_loss_margin set to 0.1. Side information from the ratio test [5] with a threshold of 0.8 and a mutual check is also added to the model input. At inference time, the model output undergoes DEGENSAC to further eliminate unreliable matches. The DEGENSAC in dynamic guided matching shares the same configuration: 100,000 iterations, Sampson error type, inlier threshold of 0.5, confidence of 0.999999, and degeneracy check enabled.
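The warm-up-then-decay schedule can be sketched as follows; the exact shape of the post-warm-up decay is our assumption (linear per step with the stated rate), since the text only says "decayed step-by-step":

```python
def prefilter_lr(step, max_lr=1e-5, warmup_steps=10000, decay=1e-8):
    """Learning-rate schedule for the pre-filtering stage: linear warm-up
    from zero over the first 10,000 steps, then a stepwise decay."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max(0.0, max_lr - decay * (step - warmup_steps))
```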
## 4 EXPERIMENTS
In this section, we evaluate the proposed method on the public Phototourism benchmark; the feature results are submitted and the final pose estimation results are computed online. We first measure the performance of several separate modules, then qualitatively and quantitatively analyze the effectiveness of our method.
### 4.1 Experimental Settings
#### 4.1.1 Datasets
The UBC Phototour dataset [11] contains corresponding patches from 3D reconstructions of Liberty, Notre Dame, and Yosemite. Each scene consists of several bitmap images at a resolution of ${1024} \times {1024}$ pixels, each containing a ${16} \times {16}$ array of image patches, where each patch is sampled as ${64} \times {64}$ grayscale. In addition, the matching information (the 3D point index of each patch) and keypoint information (reference image index, orientation, scale, and position) are supplied in separate files.
The Phototourism dataset [7] was collected from multiple sensors at 26 popular tourism landmarks. The 3D reconstruction ground truth is obtained with Structure from Motion (SfM) using COLMAP, and includes poses, co-visibility estimates, and depth maps. The dataset is divided into a training set (16 scenes), a validation set (3 of the training scenes), and a test set (10 scenes). As a large-scale benchmark, its scenes include various challenging conditions, e.g., occlusion, changing viewpoints, different capture times, and repeated textures.
#### 4.1.2 Tasks and Evaluation metrics
We implement and compare our method with different approaches on two downstream tasks, stereo and multi-view reconstruction with SfM; both also evaluate intermediate metrics for further comparison. The two tasks use data subsampled into smaller random subsets of the Phototourism dataset: the stereo task evaluates image pairs, while the multi-view task evaluates $5 \sim {25}$ images with SfM reconstruction. The stereo task uses random sample consensus (RANSAC) [20] to estimate the matches between two images that satisfy motion consistency, thereby decomposing the pose into rotation $\mathrm{R}$ and translation $\mathrm{t}$. The multi-view task recovers the pose $\mathrm{R}$ and $\mathrm{t}$ of each image through 3D reconstruction. Matching performance is measured by the angle computed from the cosine distance between the estimated and ground-truth pose vectors.
Given an image pair with co-visibility constraints in the stereo task, or the 3D reconstruction from multi-view images, we compute the following metrics in different experiments; the final results are compared with the mAA metric.
- Mean average accuracy (%mAA) is computed by integrating the area under the accuracy-versus-threshold curve of the angular error between the ground-truth and estimated pose vectors, up to a maximum threshold.
- Keypoint repeatability (%Rep.) measures the ratio of possible matches to the minimum number of keypoints in the co-visible view.
- Descriptor matching score (%M.S.) is the average ratio of correct matches to the minimum number of keypoints in the co-visible view.
- Mean absolute trajectory error (%mATE) measures the average per-frame deviation from the ground-truth trajectory.
- False positive rate (%FPR) is the ratio between the number of negative events wrongly categorized as positive and the total number of actual negative events.
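The mAA metric above can be sketched as a discretized integration of accuracy over error thresholds; the step size of one degree is our assumption for illustration:

```python
import numpy as np

def mean_average_accuracy(angular_errors_deg, max_threshold=10.0, step=1.0):
    """mAA: average the fraction of poses whose angular error falls under
    each threshold, for thresholds up to the maximum (10 degrees here)."""
    thresholds = np.arange(step, max_threshold + step, step)
    accs = [(np.asarray(angular_errors_deg) <= t).mean() for t in thresholds]
    return float(np.mean(accs))
```

A pose with very small error counts toward every threshold, so accurate methods are rewarded more than ones that barely clear the loosest bound.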
### 4.2 Qualitative and Quantitative Results
The dataset construction statistics for the Phototourism dataset are given in Table 1; the proposed construction method greatly reduces the dataset size, which speeds up training and helps improve model performance. The training loss and FPR95 trends on the UBC Phototour dataset are shown in Figures 5 and 6, indicating the stability of the loss and revealing characteristics of the data. Figure 7 compares matching visualizations on the stereo task, where our proposed method outperforms the traditional RootSIFT-based method with more accurate matches. Figure 8 shows visualizations on the Phototourism dataset for the stereo and multi-view tasks.
To validate the loss across different scenes, Table 2 lists the correspondence evaluation on the UBC Phototour dataset, showing the improvement of our proposed QHT loss over the HT (hinge triplet) loss. Table 3 shows our final submitted results on the Phototourism challenge Track 1, where we rank 1st on both tasks.
Table 1: Dataset construction statistics for two selected scenes and the average over the Phototourism dataset
<table><tr><td>Scenes</td><td>Type</td><td>Images</td><td>3D Points</td><td>Patches</td></tr><tr><td rowspan="2">Buckingham_palace</td><td>original</td><td>1676</td><td>246035</td><td>4352977</td></tr><tr><td>sampled</td><td>1257</td><td>72814</td><td>364070</td></tr><tr><td rowspan="2">Grand_place_Brussels</td><td>original</td><td>1080</td><td>209550</td><td>3206171</td></tr><tr><td>sampled</td><td>810</td><td>78108</td><td>390540</td></tr><tr><td rowspan="2">All scenes (average)</td><td>original</td><td>1183.9</td><td>165124.8</td><td>3567694.5</td></tr><tr><td>sampled</td><td>887.6</td><td>63433.6</td><td>317168</td></tr></table>
Table 2: Patch correspondence evaluation performance with the FPR95 metric. We compare different loss functions on the UBC Phototour dataset with HardNet as the description network (* means the network was trained by our implementation). The best results are in bold.
<table><tr><td>Train</td><td colspan="2">Liberty</td><td colspan="2">NotreDame</td></tr><tr><td>Test</td><td>NotreDame</td><td>Yosemite</td><td>Liberty</td><td>Yosemite</td></tr><tr><td>HardNet [8]</td><td>0.62</td><td>2.14</td><td>1.47</td><td>2.67</td></tr><tr><td>HardNet-HT [8]</td><td>0.53</td><td>1.96</td><td>1.49</td><td>1.84</td></tr><tr><td>HardNet-HT*</td><td>0.50</td><td>1.96</td><td>1.48</td><td>1.61</td></tr><tr><td>HardNet-QHT*</td><td><b>0.45</b></td><td><b>1.83</b></td><td><b>1.228</b></td><td><b>1.52</b></td></tr></table>

Figure 5: Training loss on different scenes of the UBC Phototour dataset. The difference between the losses is small, indicating that the improved loss is stable during model training.

Figure 6: FPR95 trend using different scene combinations of the UBC Phototour dataset as training and validation sets. On the same validation set, FPR95 is higher when training on the Yosemite scene, indicating the presence of false or difficult matches.

Figure 7: Visualization of correspondences on the stereo task for the traditional RootSIFT-based method and our proposed method (green: correct match; yellow: match error within 5 pixels; red: wrong match)
Table 3: Phototourism challenge. Mean average accuracy in pose estimation with an error threshold of ${10}^{ \circ }$. The results are from the online evaluation. (We keep only the highest-ranked result among the submissions from each team.)
<table><tr><td rowspan="2">Method</td><td colspan="2">stereo task</td><td colspan="2">multi-view task</td><td>avg.</td></tr><tr><td>$\mathrm{{mA}}{\mathrm{A}}^{@{10}^{ \circ }}$</td><td>rank</td><td>$\mathrm{{mA}}{\mathrm{A}}^{@{10}^{ \circ }}$</td><td>rank</td><td>rank</td></tr><tr><td>Ours</td><td>0.61081</td><td>1</td><td>0.78288</td><td>2</td><td>1</td></tr><tr><td>Luca et al.</td><td>0.58300</td><td>12</td><td>0.77056</td><td>3</td><td>2</td></tr><tr><td>Jiahui et al.</td><td>0.57826</td><td>17</td><td>0.77056</td><td>5</td><td>3</td></tr><tr><td>Ximin et al.</td><td>0.5887</td><td>5</td><td>0.75127</td><td>14</td><td>4</td></tr></table>
### 4.3 Ablation Study
To fully understand each module of our proposed method, we evaluate the different modules on the Phototourism validation set. In Table 4, our method is compared with several variants to see how data construction and the improved HardNet help improve the descriptor and further enhance performance on both the multi-view and stereo tasks. We evaluate four feature descriptors while keeping the other parts fixed: 1) RootSIFT; 2) the original pretrained HardNet; 3) our proposed improved HardNet; 4) our proposed improved HardNet with the data construction technique. The comparison of the descriptors indicates that adding the loss constraints to HardNet increases model performance substantially, an $8\%$ improvement in average mAA over both tasks; with data construction, the improved HardNet yields the best performance, obtaining an average mAA of 0.7894.
Table 4: Ablation study of proposed HardNet and data construction method on Phototourism validation set
<table><tr><td rowspan="2">Method</td><td colspan="3">mAA@10°</td></tr><tr><td>Stereo</td><td>Multi-view</td><td>Avg</td></tr><tr><td>RootSIFT</td><td>0.6698</td><td>0.7258</td><td>0.6978</td></tr><tr><td>Pretrained HardNet</td><td>0.7317</td><td>0.7924</td><td>0.7621</td></tr><tr><td>Improved HardNet</td><td>0.7285</td><td>0.8159</td><td>0.7722</td></tr><tr><td>Improved HardNet+CleanData</td><td>0.7537</td><td>0.8252</td><td>0.7894</td></tr></table>
To evaluate whether the proposed modified OANet helps improve outlier pre-filtering, Table 5 compares four outlier pre-filtering configurations: 1) RootSIFT as feature descriptor with ratio test and cross check as the outlier filter; 2) pretrained HardNet with ratio test and cross check; 3) our proposed improved HardNet with ratio test and cross check; 4) our proposed improved HardNet with the modified OANet. The comparison indicates that the modified OANet surpasses the ratio test with cross check by $4\%$ in average mAA.
Table 5: Ablation study of the proposed OANet method on the Phototourism validation set.
<table><tr><td rowspan="2">Method</td><td colspan="3">mAA@10°</td></tr><tr><td>Stereo</td><td>Multi-view</td><td>Avg</td></tr><tr><td>RootSIFT+RT+CC</td><td>0.6698</td><td>0.7258</td><td>0.6978</td></tr><tr><td>Pretrained HardNet+RT+CC</td><td>0.7317</td><td>0.7924</td><td>0.7621</td></tr><tr><td>Improved HardNet+RT+CC</td><td>0.7537</td><td>0.8252</td><td>0.7894</td></tr><tr><td>Improved HardNet+OANet</td><td>0.7918</td><td>0.8658</td><td>0.8288</td></tr></table>
## 5 CONCLUSION
This paper presents a novel image matching approach that integrates a data construction method to remove noise and enable more efficient training, an improved HardNet with a QHT loss function that makes the network's gradient descent more responsive to hard samples, and a stricter OANet combined with a guided matching algorithm to reduce the number of outliers that survive pre-filtering. The QHT loss strengthens the distance constraints, and the improved OANet applies a stricter judgment on positive samples; together these stricter constraints produce better matches. The proposed method obtains high-quality correspondences on a large-scale challenging dataset with illumination variations, viewpoint changes, and repeated textures. Our experiments show that this method achieves state-of-the-art performance on the public Phototourism Image Matching challenge.
|
| 226 |
+
|
| 227 |
+

Figure 8: Visualization of correspondences in the stereo task and reconstructed points in the multi-view task (green: correct match; yellow: match error within 5 pixels; red: wrong match; blue: keypoints with correct 3D points).
## REFERENCES
[1] X. L. Dai, J. Lu. An object-based approach to automated image matching[C]//IEEE 1999 International Geoscience and Remote Sensing Symposium (IGARSS'99). IEEE, 1999, 2: 1189-1191.
[2] M. Dusmanu, I. Rocco, T. Pajdla, et al. D2-net: A trainable cnn for joint description and detection of local features[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 8092-8101.
[3] J. Revaud, P. Weinzaepfel, C. De Souza, et al. R2D2: repeatable and reliable detector and descriptor[J]. arXiv preprint arXiv:1906.06195, 2019.
[4] P. Ebel, A. Mishchuk, K. M. Yi, et al. Beyond cartesian representations for local descriptors[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 253-262.
[5] D. G. Lowe. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[6] J. W. Bian, W. Y. Lin, Y. Matsushita, et al. GMS: Grid-based motion statistics for fast, ultra-robust feature correspondence[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4181-4190.
[7] Phototourism Challenge, CVPR 2020 Image Matching Workshop. https://www.cs.ubc.ca/research/image-matching-challenge/.
[8] A. Mishchuk, D. Mishkin, F. Radenovic, et al. Working hard to know your neighbor's margins: Local descriptor learning loss[J]. arXiv preprint arXiv:1705.10872, 2017.
[9] J. Zhang, D. Sun, Z. Luo, et al. Learning two-view correspondences and geometry using order-aware network[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 5845-5854.
[10] A. M. Andrew. Multiple view geometry in computer vision[J]. Kybernetes, 2001.
[11] M. Brown, D. G. Lowe. Automatic panoramic image stitching using invariant features[J]. International Journal of Computer Vision, 2007, 74(1): 59-73.
[12] K. M. Yi, E. Trulls, Y. Ono, et al. Learning to find good correspondences[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 2666-2674.
[13] C. R. Qi, H. Su, K. Mo, et al. PointNet: Deep learning on point sets for 3d classification and segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 652-660.
[14] M. H. Mughal, M. J. Khokhar, M. Shahzad. Assisting UAV localization via deep contextual image matching[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 2445-2457.
[15] T. Song, L. B. Cao, M. F. Zhao, et al. Image tracking and matching algorithm of semi-dense optical flow method[J]. International Journal of Wireless and Mobile Computing, 2021, 20(1): 93-98.
[16] D. DeTone, T. Malisiewicz, A. Rabinovich. Superpoint: Self-supervised interest point detection and description[C]//Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2018: 224-236.
[17] A. Ortiz, C. Robinson, D. Morris, et al. Local context normalization: Revisiting local normalization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 11276-11285.
[18] A. Paszke, S. Gross, S. Chintala, et al. Automatic differentiation in pytorch[J]. 2017.
[19] H. Noh, A. Araujo, J. Sim, et al. Large-scale image retrieval with attentive deep local features[C]//Proceedings of the IEEE international conference on computer vision. 2017: 3456-3465.
[20] M. A. Fischler, R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM, 1981, 24(6): 381-395.
[21] H. Bay, T. Tuytelaars, L. Van Gool. Surf: Speeded up robust features[C]//European conference on computer vision. Springer, Berlin, Heidelberg, 2006: 404-417.
[22] O. Chum, T. Werner, J. Matas. Two-view geometry estimation unaffected by a dominant plane[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). IEEE, 2005, 1: 772-779.
[23] Y. Liu, Z. Shen, Z. Lin, et al. Gift: Learning transformation-invariant dense visual descriptors via group cnns[J]. arXiv preprint arXiv:1911.05932, 2019.
[24] Z. Luo, T. Shen, L. Zhou, et al. Contextdesc: Local descriptor augmentation with cross-modality context[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 2527-2536.
[25] Y. Tian, X. Yu, B. Fan, et al. Sosnet: Second order similarity regularization for local descriptor learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 11016-11025.
[26] R. Arandjelović, A. Zisserman. Three things everyone should know to improve object retrieval[C]//2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012: 2911-2918.
[27] Y. Wang, Y. C. Chen, X. Zhang, et al. Attentive normalization for conditional image generation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 5094-5103.
[28] L. Bottou. Large-scale machine learning with stochastic gradient descent[M]//Proceedings of COMPSTAT'2010. Physica-Verlag HD, 2010: 177-186.
[29] X. Huang, S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 1501-1510.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/3baVPWLmUiO/Initial_manuscript_tex/Initial_manuscript.tex
§ A STRICTER CONSTRAINT PRODUCES OUTSTANDING MATCHING: LEARNING RELIABLE IMAGE MATCHING WITH IMPROVED NETWORKS
Anonymous Authors
Author's Affiliation
§ ABSTRACT
Image matching is widely used in many applications, such as visual-based localization and 3D reconstruction. Compared with traditional local features (e.g., SIFT) and outlier elimination methods (e.g., RANSAC), learning-based image matching methods such as HardNet and OANet show promising performance under challenging environments and on large-scale benchmarks. However, existing learning-based methods suffer from noise in the training data, and existing loss functions, e.g., the hinge loss, do not work well in image matching networks. In this paper, we propose an end-to-end image matching method that requires less training data to obtain more accurate and robust performance. First, a novel data cleaning strategy is proposed to remove noise from the training dataset. Second, we strengthen the matching constraints by proposing a novel quadratic hinge triplet (QHT) loss function to improve the network. Finally, we apply a stricter OANet in sample judgment to produce more outstanding matching. The proposed method shows state-of-the-art performance on the large-scale and challenging Phototourism dataset, and took 1st place in the CVPR 2020 Image Matching Challenge Workshop Track 1 under the reconstructed pose accuracy metric.
Keywords: Image matching, HardNet, OANet, SIFT, large scale, challenging environments, pose accuracy.
§ 1 INTRODUCTION
Image matching is a foundational task for high-level 3D computer vision tasks such as 3D reconstruction, structure-from-motion, and camera pose estimation. Its goal is to recover the pixel correspondences of the same physical region across two images that share a common field of view [1]. With the widespread adoption of deep learning across computer vision and the demands of long-term, large-scale tasks, the shortcomings of traditional keypoint-based image matching have become apparent: local features are usually evaluated through descriptor accuracy on small datasets, which only reflects matching performance and is not suitable for integrating and evaluating downstream tasks.
Deep learning based solutions for image matching promise to overcome these disadvantages of traditional keypoint-based methods; many works [2][3] have demonstrated their ability to integrate multi-stage tasks and to optimize and evaluate different metrics on large datasets. Although they avoid complex hand-crafted engineering and offer a convenient pipeline for further tasks, these methods struggle under challenging conditions such as illumination variation, viewpoint changes, and repeated textures, which are common in outdoor datasets where scene scales and conditions are highly variable. This degrades both accuracy and robustness.
Figure 1: Correspondences produced by our matching method under extreme illumination variation and viewpoint changes.
To address this problem, further end-to-end solutions and modified descriptors have been proposed, on the premise that the descriptor or network can learn more reliable features across image pairs. A novel feature representation based on log-polar sampling [4] achieves scale invariance, and some works [2][3] jointly learn feature detection and description to improve image matching performance.
Inspired by this prior work, we propose a three-stage pipeline consisting of feature extraction, feature matching, and outlier pre-filtering to compute pixel correspondences from image pairs, as shown in Figure 2. At each stage, we add constraints that push the algorithm toward better matching results. Unlike previous methods, our approach only requires a light-weight model and does not need to be trained on multiple branches separately.
We show that our method outperforms previous algorithms and achieves state-of-the-art performance on the Phototourism benchmark with its large-scale environments and challenging conditions. We provide detailed insight into how the improved data processing strategy, the HardNet [8] loss function, and the modified OANet [9] combined with a guided matching algorithm [10] help the
Figure 2: Image matching based pose estimation pipeline, with some popular traditional and learning-based methods. Our method is based on the pipeline with some improvement on the chosen methods (indicated by a green box with a red tick).
pipeline to achieve accurate and robust matching results, and evaluate the method on the pose estimation task.
The main contributions include:
* We construct a new patch dataset based on the given Phototourism images and sparse models, similar to the UBC Phototour dataset [11], and pre-train our model on it;
* We propose a novel quadratic hinge triplet (QHT) loss function for the feature descriptor network HardNet, and an improved OANet combined with a guided matching strategy to compute reliable feature matches;
* Through extensive experiments and an ablation study for each module, we show that our algorithm achieves state-of-the-art pose reconstruction performance, ranking first on both the stereo and multi-view tracks of the public Phototourism Image Matching Challenge 2020 [7].
§ 2 RELATED WORK
Image matching, which finds the pixel correspondences of the same physical region across an image pair under geometrical constraints [1], plays an important role in many high-level computer vision tasks [14][15]. Feature extraction, feature matching, and outlier pre-filtering are the most vital components of the image matching pipeline, and both traditional and deep learning based methods have been developed for them in recent years. Figure 2 shows the main parts of the image matching based pose estimation pipeline, including some common methods and the target algorithms we chose to improve.
Feature Extraction. End-to-end feature extraction and matching methods are classified into detect-then-describe (e.g., SuperPoint [16]), detect-and-describe (e.g., R2D2 [3], D2-Net [2]), and describe-then-detect (e.g., DELF [19]) strategies according to the order in which detection and description are executed, each suiting different needs. However, they either perform poorly under challenging conditions and lack keypoint repeatability, or suffer from low matching and storage efficiency. Traditionally, detectors and descriptors are applied separately in the pipeline: SIFT [5] (and RootSIFT [26]) and SURF [21] are the most popular detectors, followed by various descriptors, among which LogPolar [4] outperforms ContextDesc [24] in relative pose error, while SOSNet [25] and HardNet [8] surpass GIFT [23] on the public validation set.
Outlier pre-filtering. There are a large number of incorrect matches in the matching pair, which will bring noise to the subsequent pose estimation. Traditional outlier removal methods include ratio-test [5], GMS [6], guided-match [10] methods, etc.
One deep learning based approach judges whether a match is an outlier through the pose relationship regressed by a convolutional network, but such training is usually difficult to converge. Another way is to convert the pose into a per-match correctness label through the epipolar constraint; the regression problem then becomes a classification problem of deciding whether each match is an inlier or an outlier, and the standard binary cross-entropy loss can be used to train the model.
However, the input matching pairs are unordered, so the network needs to be permutation-invariant to reorderings of the matches, which rules out plain convolutional or fully connected networks. CNe [12] draws on the idea of PointNet [13] and proposes a multi-layer weight-sharing perceptron to solve this permutation problem. Each input matching pair is processed by the same perceptron, but this also means each pair is processed independently, lacking information exchange; the network cannot integrate the predicted judgments across pairs and thus cannot learn from the image pair's matches as a whole. Context normalization [17] is an instance normalization method widely used in style transfer [29] and GANs [27]; it normalizes the intermediate representations of the matching pairs so that information is exchanged among them. But this blunt use of mean and variance ignores the complexity of global features and cannot capture correlations between parts. OANet [9] proposed DiffPool and DiffUnpool layers to promote the information flow and communication of internal neurons.
To estimate a more accurate pose of the given image pairs or 3D reconstruction through the matching process, we consider improving the pipeline with respect to the following aspects:
* Obtain stable and accurate keypoints. The keypoints should appear invariably, e.g., a certain point in the first image could also appear stably in the second image taken under a different viewpoint or illumination change.
* Provide reliable and distinguishable descriptors. Under different environmental conditions, the same keypoint should have similar descriptor, so that the same keypoint in different images can be paired using nearest neighbor matching.
* Hold powerful outlier pre-filtering ability. Filtering the wrong matches to get most of the correct matching pairs could help to get an accurate pose estimation result finally.
§ 3 METHOD
In this paper, we aim to learn accurate and reliable image matching under large-scale challenging conditions. In contrast with previous works that use end-to-end solutions of insufficient accuracy at the feature extraction stage, or purely traditional solvers throughout the image matching pipeline, we combine a traditional detector with an improved HardNet and OANet, pretrained on a constructed dataset. In particular, we propose a dataset construction method to optimize hyperparameters before training on the large-scale dataset, propose a new loss function for the HardNet-based description stage, and, in the outlier pre-filtering stage, combine guided matching with the proposed stricter OANet to learn more accurate matches.
Figure 3: The overall architecture with the proposed methods. The SIFT detector is applied first; HardNet then maps each ${32} \times {32}$ patch to a 128-dimensional descriptor. After nearest neighbor matching and outlier pre-filtering, we obtain the final matches, which are used to compute the pose estimation.
The whole architecture of the proposed method is shown in Figure 3. First, a 128-dimensional descriptor is extracted by a basic HardNet from each ${32} \times {32}$ input patch. Then, after the matching stage, the proposed stricter OANet casts the correspondences into $\mathrm{N}$ binary classifications (inlier / outlier), a formulation that is permutation-invariant and feasible with convolutional layers. Finally, the pre-filtered matches are used to compute pose estimates in the stereo and multi-view tasks.
§ 3.1 DATASET CONSTRUCTION
§ 3.1.1 UBC DATASET CONSTRUCTION FOR PRETRAINING
To quickly train our lightweight model and search for hyperparameters to carry over to the pretrained model, we use the UBC Phototour dataset for pretraining; its patch format is directly suitable for HardNet training.
§ 3.1.2 PHOTOTOURISM DATASET CONSTRUCTION FOR TRAINING
After pretraining on the UBC Phototour dataset for fast parameter selection, we construct a clean dataset with low noise and little redundancy from the Phototourism training scenes for the training period. Removing redundant data largely shortens training, while reducing label noise helps the gradient descent optimization of the loss and improves network performance.
To reduce label noise, we discard low-confidence data: the 25% of images with the fewest 3D points, and 3D points whose tracks contain fewer than 5 observations. To remove redundancy, for each 3D point tracked more than 5 times we subsample its track down to 5 observations. The sampling is repeated 10 times, the NCC value is computed for each sample (a lower NCC indicates lower similarity between the two images and thus larger differences between the matches), and the sample with the lowest NCC is retained. In addition, we apply data augmentation with random flips and random rotations in both pretraining and training.
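The image- and track-level filtering rules above can be sketched as follows; the data layout and function name are illustrative, and the NCC-based track subsampling is omitted for brevity.

```python
def clean_dataset(image_point_counts, point_tracks, min_track=5, drop_frac=0.25):
    """Sketch of the noise-reduction rules (names are ours, not released code).

    image_point_counts: {image_id: number of observed 3D points}
    point_tracks:       {point_id: list of observing image_ids}
    Drops the 25% of images with the fewest 3D points, then drops
    3D points whose remaining track length is below `min_track`.
    """
    # Rank images by how many 3D points they observe; drop the weakest quarter.
    ranked = sorted(image_point_counts, key=image_point_counts.get)
    n_drop = int(len(ranked) * drop_frac)
    kept_images = set(ranked[n_drop:])
    # Restrict each track to surviving images.
    trimmed = {
        pid: [im for im in track if im in kept_images]
        for pid, track in point_tracks.items()
    }
    # Keep only points still tracked at least `min_track` times.
    kept_points = {p: t for p, t in trimmed.items() if len(t) >= min_track}
    return kept_images, kept_points
```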
§ 3.2 FEATURE EXTRACTION WITH IMPROVED HARDNET
Feature extraction consists of keypoint detection and feature description. The SIFT detector [5] is applied first to extract the position and scale of each keypoint in the input image; we adopt the OpenCV implementation of SIFT with a low detection threshold to generate up to 8000 keypoints with a fixed orientation ${}^{1}$ . A single pixel cannot describe the physical content around a keypoint, so the region around each keypoint is cropped at its scale (a patch) and resized to ${32} \times {32}$ as the input from which the HardNet network extracts the patch's descriptor.
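The crop-and-resize step can be sketched as follows; the magnification factor `mag` and the nearest-neighbour resampling are illustrative stand-ins, not values or choices stated in the paper.

```python
import numpy as np

def extract_patch(image, x, y, scale, out_size=32, mag=12.0):
    """Crop a square patch around a keypoint and resize it to 32x32.

    `mag` sets the patch side as mag * scale (an assumed magnification);
    nearest-neighbour resampling stands in for proper interpolation.
    """
    half = max(1, int(round(mag * scale / 2)))
    h, w = image.shape[:2]
    # Clamp the crop window to the image borders.
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    crop = image[y0:y1, x0:x1]
    # Nearest-neighbour resample to the fixed network input size.
    ys = np.linspace(0, crop.shape[0] - 1, out_size).round().astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, out_size).round().astype(int)
    return crop[np.ix_(ys, xs)]
```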
After a relatively simple 7-layer HardNet, each ${32} \times {32}$ input patch is described by a 128-dimensional feature vector. We retain the network structure and improve its loss function so that the network trains stably and efficiently on the constructed dataset.
HardNet uses a triplet loss with hard sample mining so that descriptor distances within a class are small and descriptor distances between classes are large.
To further improve the effectiveness and stability of model learning, we apply a quadratic hinge triplet (QHT) loss that squares the hinge term of the triplet loss, inspired by the first-order similarity loss in SOSNet [25]. It shares the same mining strategy as HardNet, taking the "hardest-within-batch" negatives. The description loss function ${\mathcal{L}}_{\text{ des }}$ is expressed as:
$$
{\mathcal{L}}_{\text{ des }} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}\max {\left( 0,1 + {d}_{i}^{\text{ pos }} - {d}_{i}^{\text{ neg }}\right) }^{2} \tag{1}
$$

$$
{d}_{i}^{pos} = d\left( {{a}_{i},{pos}_{i}}\right) \tag{2}
$$

$$
{d}_{i}^{neg} = \mathop{\min }\limits_{{\forall j \neq i}}\left( {d\left( {{a}_{i},{neg}_{j}}\right) }\right) \tag{3}
$$
where ${d}_{i}^{\text{ pos }}$ is the L2 distance between the anchor descriptor and its positive descriptor, and ${d}_{i}^{\text{ neg }}$ is the minimum distance between the anchor descriptor and any negative descriptor. A positive sample is a patch from a different image corresponding to the same $3\mathrm{D}$ point as the anchor in the real world; a negative sample is a patch obtained from a different $3\mathrm{D}$ point.
The quadratic triplet loss weights the gradients with respect to the network parameters by the magnitude of the loss, so the QHT gradient is more sensitive to the size of the violation than HardNet's linear hinge: a smaller violation ${d}_{i}^{\text{ pos }} - {d}_{i}^{\text{ neg }}$ yields a smaller loss gradient, making model learning more effective and stable.
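Eqs. (1)-(3) can be sketched compactly in NumPy; here the batch of aligned positives doubles as the negative pool, mirroring the hardest-within-batch mining, and the function and variable names are ours rather than the released code.

```python
import numpy as np

def qht_loss(anchors, positives, margin=1.0):
    """Quadratic hinge triplet (QHT) loss of Eqs. (1)-(3), sketched.

    anchors, positives: (N, D) descriptor arrays where row i of each
    comes from the same 3D point; every other row acts as a negative.
    """
    n = anchors.shape[0]
    # d[i, j] = L2 distance between anchor i and positive j.
    d = np.linalg.norm(anchors[:, None, :] - positives[None, :, :], axis=2)
    d_pos = np.diag(d)                 # Eq. (2): matching pairs
    off = d + np.eye(n) * 1e6          # mask the diagonal
    d_neg = off.min(axis=1)            # Eq. (3): hardest in-batch negative
    # Squared hinge: the gradient scales with the size of the violation.
    return np.mean(np.maximum(0.0, margin + d_pos - d_neg) ** 2)  # Eq. (1)
```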
In addition, the improved model is more sensitive to noise in the data: wrong positive and negative labels would degrade its performance, a risk mitigated by the data denoising described above.
${}^{1}$ https://github.com/vcg-uvic/image-matching-benchmark-baselines
§ 3.3 STRICTER OANET WITH DYNAMIC GUIDED MATCHING
OANet adds DiffPool and DiffUnpool layers on top of CNe to promote information circulation among internal neurons. The two networks can be abstracted as in Figure 4: $\mathrm{N}$ matching pairs of dimension $\mathrm{D}$ are processed by the CNe perceptron to produce an output of the same dimension, whereas OANet reduces the input to an $\mathrm{M} \times \mathrm{D}$ matrix through the DiffPool layer, restores an $\mathrm{N} \times \mathrm{D}$ output through DiffUnpool, and merges information in between. DiffPool maps the $\mathrm{N}$ input matches to $\mathrm{M}$ clusters by learning soft assignment weights that aggregate information, and DiffUnpool redistributes the information back to $\mathrm{N}$ dimensions. The network is insensitive to permutations of the input matching pairs.
Figure 4: Abstracted structures of the CNe and OANet models.
We convert pose accuracy into matching accuracy for learning, so the accuracy of the match labels affects the learning ability of the model. We made the following improvements to OANet to make its judgment of positive samples more rigorous and its match labels more accurate, which effectively filters anomalous points and improves matching and pose estimation performance. We tightened the threshold of the geometric error constraint from 1e-4 to 1e-6. In addition, we added a point-to-point constraint on top of OANet's point-to-line constraint: a point is judged a negative sample when its projection distance exceeds 10 pixels.
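As a sketch, the two checks can be combined as follows: `sampson_error` implements the standard first-order epipolar error for a fundamental matrix, and the thresholds are the ones stated above. The helper names are ours, not OANet's code.

```python
import numpy as np

def sampson_error(F, x1, x2):
    """First-order (Sampson) approximation of the epipolar error for a
    fundamental matrix F and homogeneous points x1, x2 (3-vectors)."""
    Fx1 = F @ x1
    Ftx2 = F.T @ x2
    num = float(x2 @ F @ x1) ** 2
    den = Fx1[0] ** 2 + Fx1[1] ** 2 + Ftx2[0] ** 2 + Ftx2[1] ** 2
    return num / den

def is_positive_match(samp_err, reproj_px, geo_thresh=1e-6, px_thresh=10.0):
    """Stricter positive-sample test: the tightened geometric (Sampson)
    threshold AND the added point-to-point reprojection threshold."""
    return samp_err < geo_thresh and reproj_px < px_thresh
```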
Generally, too few matches may lead to inaccurate pose estimation. On top of OANet, for image pairs with fewer than 100 matches we propose dynamic guided matching (DGM) to increase the number of matches. Unlike traditional guided matching [10], a dynamic threshold is applied to the Sampson distance constraint according to the number of matches of an image pair; we argue that fewer matches require a larger threshold. The dynamic threshold ${th}$ is set as:
$$
{th} = t{h}_{\text{ init }} - \frac{n}{15} \tag{4}
$$
where $t{h}_{\text{ init }}$ is 6 and $n$ is the number of original matches of the image pair. For image pairs with more than 100 matches, we directly apply DEGENSAC [22] to obtain the inliers for submission.
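In code form, the dynamic gate of Eq. (4) is just the following (helper name is illustrative):

```python
def dynamic_threshold(n_matches, th_init=6.0):
    """Dynamic Sampson-distance threshold of Eq. (4): fewer initial
    matches loosen the guided-matching gate."""
    return th_init - n_matches / 15.0
```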
§ 3.4 IMPLEMENTATION DETAILS
We train the proposed model, integrating the improved SIFT, HardNet, and OANet, on the Phototourism dataset [7], with HardNet pretrained on the constructed UBC patch dataset [11]. The whole model is implemented in Pytorch [18] on an NVIDIA Titan X GPU; all models are trained from scratch, without externally pre-trained weights. For the stereo task, the co-visibility threshold is restricted to 0.1, while for the multi-view task, the minimum number of 3D points is set to 100 and no more than 25 images are evaluated at a time.
In the description stage, the input patches are cropped to ${32} \times {32}$ with random flips and random rotations. Optimization uses the SGD solver [28] with a learning rate of 10 and a weight decay of 0.0001; the learning rate is linearly decayed to zero over 15 epochs.
In the outlier pre-filtering stage, training employs a dynamic learning rate schedule: the learning rate increases linearly from zero to its maximum during the warm-up period (the first 10,000 steps) and decays step-by-step after that. Typically, the maximum learning rate is $1\mathrm{e} - 5$ and the decay rate is $1\mathrm{e} - 8$ . Besides, we train the fundamental-matrix estimation model with the parameter geo_loss_margin set to 0.1 . The side information of the ratio test [5] with a threshold of 0.8 and a mutual check is also added to the model input. At inference time, the model output undergoes DEGENSAC to further eliminate unreliable matches; the DEGENSAC in dynamic guided matching shares the same configuration, with 100,000 iterations, Sampson error, an inlier threshold of 0.5, a confidence of 0.999999, and degeneracy check enabled.
§ 4 EXPERIMENTS
In this section, we evaluate the proposed method on the public Phototourism benchmark using the extracted feature results; the final pose estimation results are computed online. We first measure the performance of several separate modules, then qualitatively and quantitatively analyze the effectiveness of our method.
§ 4.1 EXPERIMENTAL SETTINGS
|
| 148 |
+
|
| 149 |
+
§ 4.1.1 DATASETS
|
| 150 |
+
|
| 151 |
+
The UBC Phototour dataset [11] contains corresponding patches from 3D reconstructions of Liberty, Notre Dame, and Yosemite. Every tour consists of several bitmap images at a resolution of ${1024} \times {1024}$ pixels; each image contains patches arranged as a ${16} \times {16}$ array, in which each patch is sampled as a ${64} \times {64}$ grayscale image. In addition, the matching information (the sampled 3D point index of each patch) and the keypoint information (reference image index, orientation, scale, and position) are supplied in separate files.
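The patch layout described above can be illustrated with a small sketch (the function name is ours; a synthetic array stands in for one bitmap):

```python
import numpy as np

def split_patches(bitmap):
    """Split one 1024x1024 UBC Phototour bitmap into its 16x16 grid of
    64x64 grayscale patches, row by row."""
    assert bitmap.shape == (1024, 1024)
    return [bitmap[r:r + 64, c:c + 64]
            for r in range(0, 1024, 64)
            for c in range(0, 1024, 64)]

# 16 x 16 = 256 patches per bitmap.
patches = split_patches(np.zeros((1024, 1024), dtype=np.uint8))
```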
The Phototourism dataset [7] was collected from multiple sensors at 26 popular tourism landmarks. The 3D reconstruction ground truth is obtained with Structure from Motion (SfM) using COLMAP, and includes poses, co-visibility estimates, and depth maps. The dataset is divided into a training set (16 scenes), a validation set (3 scenes of the training set), and a test set (10 scenes). As a large-scale benchmark, the dataset includes scenes with various challenging conditions, e.g., occlusions, changing viewpoints, different capture times, and repeated textures.
§ 4.1.2 TASKS AND EVALUATION METRICS
We implement and compare our method with different approaches on two downstream tasks, stereo and multi-view reconstruction with SfM, and both evaluate intermediate metrics for further comparisons. The two tasks evaluate different subsets randomly subsampled from the Phototourism dataset: the stereo task evaluates image pairs, while the multi-view task evaluates $5 \sim {25}$ images with SfM reconstruction. The stereo task uses random sample consensus (RANSAC) [20] to estimate the matches between two images that meet the motion-consistency constraint, thereby decomposing the pose into rotation $\mathrm{R}$ and translation $\mathrm{t}$. The multi-view task recovers the pose $\mathrm{R}$ and $\mathrm{t}$ of each image through 3D reconstruction. By computing the cosine distance between the estimated and ground-truth pose vectors, the performance of image matching is measured by the size of the resulting angle.
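The pose-error measure above can be sketched as follows (a minimal illustration of the angle obtained from the cosine similarity between pose vectors; the helper name is ours):

```python
import numpy as np

def angular_error_deg(v_est, v_gt):
    """Angle (degrees) between an estimated and a ground-truth pose vector,
    computed from their cosine similarity."""
    v_est = np.asarray(v_est, dtype=float)
    v_gt = np.asarray(v_gt, dtype=float)
    cos = np.dot(v_est, v_gt) / (np.linalg.norm(v_est) * np.linalg.norm(v_gt))
    # Clip to guard against floating-point drift outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```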
Given an image pair with co-visibility constraints in the stereo task, or the 3D reconstruction from multi-view images, we compute the following metrics in different experiments; the final results are compared with the mAA metric.
* Mean average accuracy (%mAA) is computed by integrating the area under the accuracy curve up to a maximum threshold on the angle between the ground-truth and estimated pose vectors.

* Keypoint repeatability (%Rep.) measures the ratio between the number of possible matches and the minimum number of keypoints in the co-visible view.

* Descriptor matching score (%M.S.) is the average ratio between the number of correct matches and the minimum number of keypoints in the co-visible view.

* Mean absolute trajectory error (%mATE) measures the average per-frame deviation from the ground-truth trajectory.

* False positive rate (%FPR) is the ratio between the number of negative events wrongly categorized as positive and the total number of actual negative events.
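The mAA metric above can be sketched as follows (integer-degree thresholds are an assumed discretization; the benchmark's exact binning may differ):

```python
import numpy as np

def mean_average_accuracy(errors_deg, max_threshold=10):
    """mAA: average, over thresholds 1..max_threshold degrees, of the
    fraction of poses whose angular error is within each threshold --
    a discrete approximation of the area under the accuracy curve."""
    errors = np.asarray(errors_deg, dtype=float)
    thresholds = np.arange(1, max_threshold + 1)
    return float(np.mean([(errors <= t).mean() for t in thresholds]))
```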
§ 4.2 QUALITATIVE AND QUANTITATIVE RESULTS
The dataset construction information on the Phototourism dataset is given in Table 1; the proposed dataset construction method largely reduces the dataset size, which speeds up training and helps improve model performance. The training loss and the FPR95 metric trends on the UBC Phototour dataset are shown in Figure 5 and Figure 6, which indicate the stability of the loss and help characterize the data. The comparison of matching visualization performance on the stereo task is shown in Figure 7, where our proposed method outperforms the traditional RootSIFT-based method with more accurate matches. Figure 8 shows visualizations on the Phototourism dataset for the stereo task and the multi-view task.
To validate the performance of the loss across different scenes, Table 2 lists the correspondence evaluation on the UBC Phototour dataset, which indicates the effects and improvements of our proposed QHT loss compared to the HT (hinge triplet) loss. Table 3 shows our submitted final results on the Phototourism challenge Track 1, where we rank 1st on both tasks.
Table 1: Dataset construction information for two selected scenes and the average over the Phototourism dataset
| Scenes | Type | Images | 3D Points | Patches |
|---|---|---|---|---|
| Buckingham palace | original | 1676 | 246035 | 4352977 |
| Buckingham palace | sampled | 1257 | 72814 | 364070 |
| Grand_place Brussels | original | 1080 | 209550 | 3206171 |
| Grand_place Brussels | sampled | 810 | 78108 | 390540 |
| All scenes (average) | original | 1183.9 | 165124.8 | 3567694.5 |
| All scenes (average) | sampled | 887.6 | 63433.6 | 317168 |
Table 2: Patch correspondence evaluation performance with the metric of FPR95. We compare different loss functions on the UBC Phototour dataset with HardNet as the description network (* means the network is trained by our implementation). The best results are in bold.

| Train | Liberty | Liberty | NotreDame | NotreDame |
|---|---|---|---|---|
| Test | NotreDame | Yosemite | Liberty | Yosemite |
| HardNet [8] | 0.62 | 2.14 | 1.47 | 2.67 |
| HardNet-HT [8] | 0.53 | 1.96 | 1.49 | 1.84 |
| HardNet-HT* | 0.50 | 1.96 | 1.48 | 1.61 |
| HardNet-QHT* | **0.45** | **1.83** | **1.228** | **1.52** |
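The FPR95 metric used in Table 2 can be sketched as follows (a standard formulation of the metric; the helper name and sample values are ours):

```python
import numpy as np

def fpr_at_95_recall(pos_dists, neg_dists):
    """FPR95: fraction of negative (non-matching) pairs whose descriptor
    distance falls below the threshold at which 95% of positive
    (matching) pairs are accepted."""
    threshold = np.percentile(np.asarray(pos_dists, dtype=float), 95)
    return float(np.mean(np.asarray(neg_dists, dtype=float) <= threshold))

# Illustrative values only: 19 easy positives, one hard positive.
example_fpr = fpr_at_95_recall([0.1] * 19 + [0.5], [0.05, 0.2, 0.3, 0.4])
```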
Figure 5: Training loss in different scenes of the UBC Phototour dataset. The differences between the loss curves are small, indicating that the improved loss is stable during model training.
Figure 6: FPR95 metric trend, using different scene combinations of the UBC Phototour dataset as the training and validation sets. The results show that, on the same validation set, the FPR95 of models trained on the Yosemite scene is higher, indicating that it contains false or difficult matches.
(a) RootSIFT method (b) our proposed method
Figure 7: Visualization of correspondences on the stereo task for the traditional RootSIFT-based method and our proposed method (green represents a correct match, yellow encodes a match error within 5 pixels, red shows a wrong match)
Table 3: Phototourism challenge. Mean average accuracy in pose estimation with an error threshold of ${10}^{ \circ }$. The results are submitted for online evaluation. (We only retain the highest-ranked result among the submissions from each team.)
| Method | Stereo mAA@10° | Stereo rank | Multi-view mAA@10° | Multi-view rank | Avg. rank |
|---|---|---|---|---|---|
| Ours | 0.61081 | 1 | 0.78288 | 2 | 1 |
| Luca et al. | 0.58300 | 12 | 0.77056 | 3 | 2 |
| Jiahui et al. | 0.57826 | 17 | 0.77056 | 5 | 3 |
| Ximin et al. | 0.5887 | 5 | 0.75127 | 14 | 4 |
§ 4.3 ABLATION STUDY
To fully understand each module in our proposed method, we evaluate the different modules on the Phototourism validation set. In Table 4, our method is compared with several variants to see how data construction and the improved HardNet help improve the descriptor and, in turn, the performance on both the multi-view and stereo tasks. We evaluate four variants of the feature descriptor while keeping the other parts the same: 1) RootSIFT; 2) the original pretrained HardNet; 3) our proposed improved HardNet; 4) our proposed improved HardNet with the data construction technique. The comparison of different feature descriptors indicates that the model performance increases substantially when adding the loss constraints to HardNet: it shows an $8\%$ improvement in the average mAA of both tasks; with data construction, the improved HardNet yields the best performance, obtaining an average mAA of 0.7894.
Table 4: Ablation study of proposed HardNet and data construction method on Phototourism validation set
| Method | Stereo mAA@10° | Multi-view mAA@10° | Avg mAA@10° |
|---|---|---|---|
| RootSIFT | 0.6698 | 0.7258 | 0.6978 |
| Pretrained HardNet | 0.7317 | 0.7924 | 0.7621 |
| Improved HardNet | 0.7285 | 0.8159 | 0.7722 |
| Improved HardNet+CleanData | 0.7537 | 0.8252 | 0.7894 |
To evaluate whether the proposed modified OANet helps improve outlier pre-filtering, Table 5 lists the performance of four different outlier pre-filtering settings: 1) RootSIFT as the feature descriptor with ratio test and cross check as the outlier filter; 2) pretrained HardNet with ratio test and cross check; 3) our proposed improved HardNet with ratio test and cross check; 4) our proposed improved HardNet with the modified OANet. The comparison of the different outlier removal methods indicates that the modified OANet surpasses the ratio test with cross check by $4\%$ in average mAA.
Table 5: Ablation study of the proposed OANet method on the Phototourism validation set.
| Method | Stereo mAA@10° | Multi-view mAA@10° | Avg mAA@10° |
|---|---|---|---|
| RootSIFT+RT+CC | 0.6698 | 0.7258 | 0.6978 |
| Pretrained HardNet+RT+CC | 0.7317 | 0.7924 | 0.7621 |
| Improved HardNet+RT+CC | 0.7537 | 0.8252 | 0.7894 |
| Improved HardNet+OANet | 0.7918 | 0.8658 | 0.8288 |
§ 5 CONCLUSION
This paper presents a novel image matching approach that integrates a data construction method to remove noise and ensure more efficient training, an improved HardNet with the QHT loss function to make the network more sensitive to noisy gradients, and a stricter OANet combined with a guided matching algorithm to reduce the recall of outliers in pre-filtering. The QHT loss strengthens the distance constraints, and the improved OANet imposes a stricter judgment on positive samples; together, these stricter constraints produce more accurate matching. The proposed method is crucial for obtaining high-quality correspondences on a large-scale challenging dataset with illumination variations, viewpoint changes, and repeated textures. Our experiments show that this method achieves state-of-the-art performance on the public Phototourism Image Matching challenge.
Figure 8: Visualization of correspondences on the stereo task and reconstructed points in the multi-view task (green represents a correct match, yellow encodes a match error within 5 pixels, red shows a wrong match, blue shows keypoints with correct 3D points)
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/4j3avB-mrk/Initial_manuscript_md/Initial_manuscript.md
# Service-based Analysis and Abstraction for Content Moderation of Digital Images
Online Submission ID: 2

## Abstract
This paper presents a service-based approach towards content moderation of digital visual media while browsing web pages. It enables the automatic analysis and classification of possibly offensive content, such as images of violence, nudity, or surgery, and applies common image abstraction techniques at different levels of abstraction to lower their affective impact. The system is implemented using a microservice architecture that is accessible via a browser extension, which can be installed in most modern web browsers. It can be used to facilitate content moderation of digital visual media such as digital images or to enable parental control for child protection.
Index Terms: Computer systems organization-Client-server architectures; Computing methodologies-Image processing; Information systems-Content analysis and feature selection; Information systems-Browsers; Human-centered computing-Web-based interaction; Human-centered computing-Graphical user interfaces;

## 1 INTRODUCTION
This work's main objective is to support and facilitate human-driven moderation of digital visual media such as digital images, which are a dominant category in the domain of user-generated content, i.e., content that is acquired and uploaded to content providers. Content moderation usually requires humans who view content and decide whether it is appropriate, e.g., regarding violence, racism, nudity, or privacy; these aspects may have serious impacts in various directions, including legal and psychological issues.

To cope with negative impacts on the viewers' psychology and to alleviate legal issues, we propose a combination of content analysis and classification together with suitable image abstraction techniques that first detect inappropriate content and subsequently disguise and obfuscate content depictions or specific portions (Fig. 1).
### 1.1 Problem Statement and Objectives
From a technical perspective, the implementation of such an approach needs to be independent of operating systems and processing hardware. Thus, we decided to use a service-based approach to detect and classify visual media content and to perform respective abstraction techniques depending on the detection results. Deep-learning approaches are used that allow for defining what "offensive" means. Such functionality can be integrated into web browsers using a browser extension based on a World Wide Web Consortium (W3C) draft standard. This way, the content moderation functionality can be applied and integrated into professional IT solutions or used by means of end-user apps.
Facilitate Content Moderation (Objective-1): Today, content moderation becomes more and more crucial for digital content providers (e.g., Facebook or YouTube) to fulfill their responsibilities and to implement ethical content handling [7]. Moderation comprises manual examination for the detection and classification of critical or inappropriate content. For some types of visual media content, detection and classification can already be performed semi-automatically using machine-learning approaches. However, since automated moderation is often limited (see, e.g., [13]), the final moderation decisions are often made by humans, who are required to consume the unfiltered content.

Figure 1: Application example for the combination of service-based analysis and image abstraction used for content moderation functionality provided by a browser extension (a). It enables the classification of digital input images (left) displayed on web sites according to different categories (rows) and their respective disguising using adjustable image abstraction techniques (right), such as pixelation (c), cartoon stylization (e), or blur (g)

Figure 2: Example of using image content analysis in combination with image abstraction techniques to disguise possibly inappropriate content for subsequent manual moderation: (a) input image, (b) results of image segmentation, (c) completely abstracted image, (d) partial abstraction techniques applied to the child only.
Recent studies indicate that workers concerned with these tasks are often subject to severe mental health damage due to traumatic experiences or monotonous duties [10, 11, 29]. Interestingly, non-photo-realistic rendering of these stimuli could potentially reduce their negative impacts [1, 2, 12]. Therefore, these negative consequences could be mitigated by reducing the affective responses that arise from consuming the unfiltered content, using a combination of automatic analysis and abstraction techniques as follows (Fig. 2): (i) visual media content is analyzed, e.g., to detect, classify, and possibly perform a semantic segmentation; (ii) abstraction techniques are used to partially or completely disguise possibly offensive content prior to (iii) the interactive visual examination.
Service-oriented Architectures (Objective-2): To implement an approach for Objective-1 and to make it available to a wide range of applications and users, we set out to provide a prototypical micro-app (e.g., a browser extension) based on a microservice infrastructure. For this, two separate microservices, for analysis and abstraction respectively, are orchestrated by a content moderator service. This enables a use-case specific exchange, replacement, or extension of specific functionality or complete services without risking the overall functionality. By using a Hyper Text Markup Language (HTML) User Interface (UI) integrated into a W3C browser extension, the abstracted content can be interpolated/blended with the original one interactively.
In current state-of-the-art systems, analysis and abstraction of images and videos are mostly performed using on-device computation. Thus, these systems' processing capabilities are limited by the device's hardware (Central Processing Unit (CPU) and Graphics Processing Unit (GPU)) and software (Operating System (OS)). Being subject to high heterogeneity (device ecosystem), this has major drawbacks concerning the applicability, maintainability, and provisioning of content-moderation applications. In particular, the software development process and integration into ${3}^{\mathrm{{rd}}}$-party applications is aggravated by (i) different operating systems (e.g., Windows, Linux, macOS, iOS, Android), (ii) heterogeneous hardware configurations of varying efficiency and Application Programming Interfaces (APIs) (e.g., OpenGL, Vulkan, Metal, DirectX), as well as (iii) display sizes and formats. Further, on-device processing often does not scale with respect to increasing input complexity (e.g., number of images, increasing spatial resolution of camera sensors), which especially poses problems for mobile devices (e.g., battery drain or overheating).
### 1.2 Approach and Contributions
Concerning the aforementioned drawbacks of on-device processing, the proposed combination of standardized technology for micro-apps together with service-oriented architectures and infrastructures offers a variety of advantages. In particular: (i) implementation and testing of specific analysis and abstraction techniques need to be performed only once (controlling the system's software and hardware due to virtualization); (ii) functionality is offered to a wide range of web-based applications using standardized protocols, and can be easily integrated into ${3}^{\text{rd }}$-party applications and extended rapidly. This way, (iii) the proposed service-based approach can automatically scale service instances with respect to input data complexity and required computing power. Thus, software up-to-dateness and exchangeability can be easily achieved. Further, the software development process of web-based thin clients is less complex compared to rich clients. Together with the upcoming $5\mathrm{G}$ telecommunication standard featuring (among others) high data rates, reduced latency, and energy saving, the presented approach seems feasible in stationary as well as mobile contexts. Finally, the intellectual property of service providers can be effectively protected by not shipping the respective software components to customers, thus impeding the possibility of reverse engineering.
## 2 BACKGROUND AND RELATED WORK

### 2.1 Visual Content Analysis using Neural Networks
Image analysis can be performed according to different tasks. Image classification, e.g., with the ResNet Convolutional Neural Network (CNN) architecture [9], determines how likely an image belongs to one or more specific categories. With object recognition, the goal is to identify objects displayed in images together with their respective bounding boxes; CNN architectures such as YOLOv3 [23] and the Single Shot Detector (SSD) [17] can be applied. Another task is image segmentation, where objects and regions of certain semantics are identified on a per-pixel basis, which can be performed with the R-CNN approach of Girshick et al. [8]. These kinds of CNNs need to be trained with datasets consisting of images labeled with categories, objects, or image regions. There are public datasets available to train and to benchmark the performance of different CNN architectures. Popular examples are the Pascal VOC [4] and ADE20K [38] datasets; they contain labeled image data for image classification, object recognition, and even semantic segmentation. For the task of content moderation, the objects and categories are quite general; mostly everyday objects are contained. Another dataset is the Google Open Images dataset [14]. It contains about 9 million images labeled with object information; the object categories are hierarchically organized and span over 6000 categories. Another approach, which spans even more object categories, is YOLO9000 [22], a variation of the YOLOv3 [23] architecture that was trained on a dataset with more than 9000 different objects. However, this dataset is not publicly available.
Further, there are approaches for image analysis that are more directed towards content moderation. These are mostly presented in the form of RESTful APIs that allow for sending images and receiving analysis results. The Google Cloud Vision API [26] assigns scores to images depending on how likely they represent the categories adult, violence, medical, and spoof. The API of Sightengine [28] analyzes images for the occurrence of categories such as nudity, weapons, alcohol, drugs, scams, and other offensive content. Valossa [18] reviews cloud-based vendors supporting the classification of unsafe content and describes the difficulty of defining what inappropriate content actually is. They conclude that content analysis models must be able to understand the context of objects and depicted situations in order to decide whether images contain unsafe content. They offer an evaluation dataset with images in 16 different content categories and benchmark several online RESTful APIs on it. However, these approaches do not offer any public datasets or specify which machine learning models they use exactly. In contrast, Yahoo [19] offers a trained CNN model that can be used free of charge. The Yahoo Open Not Safe For Work (NSFW) model is basically a ResNet [9] that was fine-tuned on a dataset of NSFW images depicting nudity and other offensive content. For a given image, it determines a score for how likely it contains unsafe content.

Figure 3: Conceptual overview of the microservice architecture of the presented approach. The individual service components (blue) are communicating via RESTful APIs and are used by a browser extension (orange) that integrates into standard web technologies (gray).
### 2.2 Service-based Image Processing
Several software architectural patterns are feasible to implement service-based image processing. However, one prominent style of building a web-based processing system for any data is the service-oriented architecture [31]. It enables server developers to set up various processing endpoints, each providing a specific functionality and covering a different use case. These endpoints are accessible as a single entity to the client, i.e., the implementation is hidden for the requesting clients, but can be implemented through an arbitrary number of self-contained services.
Since web services are usually designed to maximize their reusability, their functionality should be simple and atomic. Therefore, the composition of services is critical for fulfilling more complex use cases [15]. The two most prominent patterns for implementing such composition are choreography and orchestration. The choreography pattern describes decentralized collaboration directly between modules without a central component. The orchestration pattern describes collaboration through a central module, which requests the different web services and passes the intermediate results between them.
In the field of image analysis, Wursch et al. [36] present a web-based tool that enables users to perform various image analysis methods, such as text-line extraction, binarization, and layout analysis. It is implemented using a number of Representational State Transfer (REST) web services. Application examples include multiple web-based applications for different use cases. Further, the viability of implementing image-processing web services using REST has been demonstrated by Winkler et al. [34], including the ease of combination of endpoints. Another example for service-based image processing is Leadtools (https://www.leadtools.com), which provides a fixed set of approx. 200 image-processing functions with a fixed configuration set via a web API.
In this work, a similar approach using REST is chosen, however, with a different focus in terms of granularity of services. The advantages of using microservices are (i) increased scalability of the components, (ii) easy deployment and maintainability, as well as (iii) the possibility to introduce various technologies into one system [32]. For our work, we extend a microservice platform for cloud-based visual analysis and processing that was first presented by Richter et al. [25]. Based on that platform, Wegen et al. [33] present an approach for performing service-based image processing using software rendering to balance the cost-performance relation.
In the field of geodata, the Open Geospatial Consortium (OGC) set standards for a complete server-client ecosystem. As part of this specification, different web services for geodata are introduced [20]. Each web service is defined through specific input and output data and the ability to self-describe its functionality. In contrast, in the domain of general image-processing there is no such standardization yet. However, it is possible to transfer concepts from the OGC standard, such as unified data models. These data models are implemented using a platform-independent effect format. In the future, it is possible to transfer even more concepts set by the OGC to the general image-processing domain, such as the standardized self-description of services.
## 3 METHOD
With respect to Objective-2, we choose to implement our approach using microservices, which are described as follows.
### 3.1 System Overview
Fig. 3 shows a conceptual overview of the components as well as their data and control flow. The system comprises the following components, which communicate via RESTful APIs:
Moderation Browser-Extension: A client device running a web browser that (i) hosts the moderation browser-extension and (ii) an arbitrary website with visual media content. The website's visual media content is hosted by a content provider and referenced by a Uniform Resource Locator (URL). The browser extension accesses the URLs via a content script and uses them to query the RESTful API of the CMS asynchronously.
Content Moderation Service (CMS): The CMS orchestrates the interplay between instances of the analysis and abstraction services, which encapsulate respective techniques. Upon request, it downloads the image from the given URL and forwards its content to an analysis service instance by querying the analysis RESTful API. Depending on the analysis response, it uses the configuration data to map analysis results to specific parameter values that are used to query the image abstraction service. The response is subsequently forwarded to the browser extension that replaces a placeholder content with the abstracted content.
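The orchestration flow of the CMS can be sketched as follows. The callables, category names, threshold, and strength presets are hypothetical stand-ins for the RESTful calls and the configuration-based mapping, which the paper does not specify:

```python
def moderate_image(image_bytes, analyze, abstract, threshold=0.5):
    """Orchestrate one moderation request: analyze the image, map the
    analysis result to an abstraction preset, and return the response
    that would be forwarded to the browser extension.

    `analyze` and `abstract` stand in for the CAS and IAS RESTful calls;
    they are plain callables here so the flow is self-contained."""
    scores = analyze(image_bytes)               # e.g. {"violence": 0.9}
    offensive = {c: s for c, s in scores.items() if s >= threshold}
    if not offensive:
        return {"offensive": False, "image": image_bytes}
    # Hypothetical mapping from the strongest score to a preset strength.
    strength = "strong" if max(offensive.values()) > 0.8 else "medium"
    return {"offensive": True, "image": abstract(image_bytes, strength)}
```

With stub services, an image scored `{"violence": 0.9}` would come back flagged and abstracted with the "strong" preset, while a low-scoring image would be returned unchanged.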
Content Analysis Service (CAS): The CAS identifies whether an image contains offensive content and therefore has to be filtered using image abstraction techniques. It receives image data from the CMS and performs image analysis with different image recognition models as well as multiple image classification and object recognition CNNs. It then returns the results of the different analysis models in a unified and structured way.
Image Abstraction Service (IAS): The IAS provides an interface for applying various image abstraction techniques (e.g., blur, pixelation, or more specific operations such as cartoon stylization, etc.) with presets of different strength to images that are identified by the CAS to possibly contain offensive content.
### 3.2 Browser-Extension for Moderation Client

The browser extension traverses the Document Object Model (DOM) tree and utilizes a MutationObserver object to detect changes in the respective image and picture tags; a DOM MutationObserver is provided by the web browser and is intended to watch for changes being made to the DOM tree. As soon as an image is detected, it is initially blurred using Cascading Style Sheets (CSS) image filters. This prevents users from seeing possibly disturbing image content while the CMS's RESTful API is queried and the image's URL is transmitted to it. The response received from the CMS contains information on whether or not an image contains disturbing content and therefore needs to be disguised. If an image is categorized as offensive, the response also includes a processed version of the image.
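For illustration, the detect-and-blur step described above could be sketched as follows; the CSS filter strength, the function names, and the response shape are illustrative assumptions rather than the extension's actual code.

```javascript
// Sketch of the extension's detect-and-blur flow (illustrative names).
const PRELIMINARY_BLUR = "blur(16px)"; // CSS filter applied until the CMS responds

// Pure decision helper: given a CMS response, decide which source to display.
// response: { offensive: boolean, processedImage?: string (e.g., a data URL) }
function resolveDisplayedSource(response, originalUrl) {
  return response.offensive ? response.processedImage : originalUrl;
}

// Browser-only wiring: watch the DOM for newly added <img> elements,
// blur them immediately, then hand them to a callback that queries the CMS.
function observeImages(document, onImage) {
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of m.addedNodes) {
        if (node.tagName === "IMG") {
          node.style.filter = PRELIMINARY_BLUR; // hide content while analyzing
          onImage(node);
        }
      }
    }
  });
  observer.observe(document.body, { childList: true, subtree: true });
  return observer;
}
```

The preliminary blur is applied synchronously on detection, so the only window in which unfiltered content could be visible is the time between DOM insertion and the observer callback.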


Figure 4: Filtered image with an overlay shown on mouse-over.
Upon receiving the response, the local CSS blur filter is removed. If the image does contain suggestive content, the original image is replaced by the processed one; otherwise, the original image is displayed. Finally, an overlay is added for every image (Fig. 4) that provides buttons for (i) allowing users to report misclassified images and (ii) toggling between the original and the processed image. If a user does not agree with the determined content classification, they can propose a more suitable one. A modal (Fig. 5) appears where the user can select whether another category is more suitable, the image is disturbing in a different way, or it should not be filtered.
### 3.3 Content Moderation Service

The CMS moderates communication and interactions between the browser extension, the analysis service, and the image abstraction service. Clients use it to initiate the analysis process, which consists of the following steps. First, the image is downloaded using the URL specified in the analysis request. Then, the image analyzer is queried to detect whether the image contains offensive content. Subsequently, the CMS maps the image analysis results to an image abstraction technique and forwards the image to the abstraction service for processing. Finally, it notifies the client whether the image contains offensive content and, if it does, attaches the processed image to the response. If a user decides to send feedback via the feedback modal (e.g., because the chosen scenario is unsuitable), a request to a feedback route is sent and the feedback is stored for further use.
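The steps above can be sketched as a small orchestration function; the service clients are injected so the pipeline logic stays independent of the actual RESTful endpoints, and all names here are illustrative assumptions.

```javascript
// Sketch of the CMS request pipeline (illustrative, not the actual service code).
// services: { fetchImage, analyze, abstract } - injected async clients.
// mapping: maps analysis results to a scenario with technique/preset, or null.
async function moderate(url, services, mapping) {
  const image = await services.fetchImage(url);       // 1. download the image
  const analysis = await services.analyze(image);     // 2. query the CAS
  const scenario = mapping(analysis);                 // 3. map results to a technique/preset
  if (!scenario) {
    return { offensive: false };                      // nothing to disguise
  }
  const processed = await services.abstract(image, scenario); // 4. query the IAS
  return { offensive: true, scenario: scenario.name, processedImage: processed };
}
```

Injecting the clients also makes the orchestration testable with stubbed services, without running the CAS or IAS.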


Figure 5: A feedback pop-up enables the correction of misclassifications and feeds this information back to the server.
### 3.4 Content Analysis & Abstraction Service

The CAS provides an interface for clients to analyze images for the presence of certain objects and categories. Clients send the image data and receive analysis results represented as tags with associated scores and metadata. Tags can comprise objects displayed in an image or categories that can be associated with an image. The score describes how likely an object or category is present. The metadata could include an Axis-aligned Bounding Box (AABB) that describes the estimated position of an object within the image.
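The unified result format could look as follows; both the raw detector output and the normalized field names are illustrative assumptions, not the service's documented schema.

```javascript
// Sketch: normalize a model-specific detection into the unified tag format
// (tag, score, optional metadata such as an axis-aligned bounding box).
function normalizeDetection(raw) {
  // raw: a hypothetical object-detector output, e.g.
  // { label: "rifle", confidence: 0.87, box: [x1, y1, x2, y2] }
  return {
    tag: raw.label,
    score: raw.confidence,                   // likelihood the object is present
    meta: raw.box ? { aabb: raw.box } : {},  // estimated position, if available
  };
}
```

A classification model without localization simply omits the `box` field, so classifiers and object detectors share one result format.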
The image analysis is performed with different machine learning models, in our case CNNs. The output, which is specific to each model used, must be transformed into the unified analysis result format. This allows extending the analysis service with additional Machine Learning (ML) models. The results of the image analysis are used by the content moderator to decide what kind of content an image shows and whether an image abstraction is applied.
If offensive content was detected in an image, it is disguised by applying an image abstraction technique to it. The IAS provides an endpoint to apply specific techniques, such as pixelation or blur, to an image. For every technique, different presets and parameters are provided to indicate different degrees and styles of image abstraction. To query the abstraction endpoint, the IAS requires the image's data as well as the mapped abstraction technique and its preset (Sec. 4). The CMS performs this mapping by taking the analyzed scenario and score into account. In response, the processed image is returned by the IAS.
## 4 MAPPING OF ANALYSIS RESULTS

The analysis result for an image is a set of tags with scores. The tags describe objects that can be displayed in, or categories that can be associated with, an image. The scores describe how likely these tags are actually present in an image. An image abstraction, in the sense of content moderation, is the processing of a user-generated image with an image abstraction technique and a specific parameter preset. To process images with the goal of reducing explicit content, one has to define a mapping from analysis results to an image abstraction technique with a specific parameter preset.
In the proposed system, each tag is manually associated with a scenario. A scenario is a type of content that should be moderated; for this system, the scenarios nudity, violence, and medical are used. Each scenario is specified by: (i) a name, (ii) a set of tag names and a score threshold, (iii) an image abstraction technique, and (iv) three effect presets, sorted by degree of abstraction (low, medium, strong). A scenario matches an analysis result if any of the scenario's tag names is contained in the received analysis' tags and the received score is equal to or higher than the scenario's tag score threshold. Because an analysis result could match multiple scenarios, the scenarios are prioritized and the matching scenario with the highest priority is chosen. If no scenario matches, no image abstraction is required. Otherwise, the user-defined preset is selected and used for the subsequent image abstraction step.
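The matching rule above can be stated compactly in code. The scenario definitions below (tag lists, thresholds, priorities) are illustrative placeholders; only the matching logic reflects the described approach.

```javascript
// Illustrative scenario table; tag sets, thresholds, and priorities are assumptions.
const SCENARIOS = [
  { name: "violence", priority: 3, tags: ["rifle", "knife", "tank"], threshold: 0.6,
    technique: "pixelate", presets: ["low", "medium", "strong"] },
  { name: "nudity", priority: 2, tags: ["nsfw"], threshold: 0.5,
    technique: "blur", presets: ["low", "medium", "strong"] },
  { name: "medical", priority: 1, tags: ["blood", "surgery"], threshold: 0.7,
    technique: "cartoon", presets: ["low", "medium", "strong"] },
];

// A scenario matches if any of its tag names appears in the analysis result
// with a score at or above the scenario's threshold; among all matches, the
// scenario with the highest priority wins. Returns null if nothing matches.
function matchScenario(analysisTags, scenarios = SCENARIOS) {
  const matches = scenarios.filter((s) =>
    analysisTags.some((t) => s.tags.includes(t.tag) && t.score >= s.threshold));
  if (matches.length === 0) return null; // no image abstraction required
  return matches.reduce((a, b) => (a.priority >= b.priority ? a : b));
}
```

For example, an analysis result containing both a high-scoring `nsfw` tag and a high-scoring `rifle` tag would resolve to the violence scenario under this (assumed) priority order.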


Figure 6: Images with similar content (left) but different mappings (right) based on how likely they are rated as showing violent content.
Fig. 6 shows four images with similar content but different mappings. All four images depict scenes with weapons and are categorized as showing violent content by the CAS. The mappings are chosen according to the score that indicates how likely these images show violent content. Fig. 6(a) is disguised using a Gaussian blur preset with a large kernel size (Fig. 6(b)). Fig. 6(d) shows an applied cartoon filter using a black & white preset with thin edges to remove color information. Fig. 6(f) uses a cartoon filter with thick edges. Fig. 6(h) shows the result of a pixelation abstraction technique for aggressive disguise. The effect that is mapped to an image is arbitrary and can be customized, but different abstraction techniques are more or less suitable for certain scenarios. In particular, this work uses a pixelation technique for images showing violent content, a cartoon filter for medical content based on the work of Winnemöller et al. [35] and as suggested by the studies of Besançon et al. [1, 2], and a Gaussian blur for images that depict nudity.
## 5 IMPLEMENTATION ASPECTS

In a service-based architecture such as the one used in the content moderation scenario, messages must be exchanged between the individual services. Therefore, each service provides a RESTful API and queries other services correspondingly. The browser extension is implemented using JavaScript (JS) and utilizes the browser's API to access and alter a website's DOM tree, to administer local storage, and to react to changes made to the website. For sending requests to other services, the Fetch API is used. Filter options allow users to customize whether and to what extent suggestive images should be abstracted, and all three scenarios can be customized individually. Users can choose among three levels of abstraction or can switch off single scenarios completely.
The CMS is implemented using Node.js and provides a RESTful API with two endpoints: one for requesting an image to be analyzed and categorized, and one for sending user feedback. A request sent to the analyze endpoint needs to include the URL of the image that should be analyzed and options that represent the filter settings made by the user. A feedback request consists of three pieces of information: the data concerning the assessed image, the category proposed by the image analyzer service, and the category included in the user's feedback. This data is stored and can be used as a training set for machine learning algorithms during image classification. The implementation of the IAS also relies on Node.js. It provides an endpoint that accepts requests containing the data of the image to be abstracted and the operation that should be applied to the image. A preset related to the desired effect can also be sent to this endpoint as an optional parameter. The CAS is implemented using Python and Flask. It provides a RESTful API with an endpoint to start the analysis process.
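The two request bodies might be shaped as follows; all field names are assumptions for illustration, not the services' documented schema.

```javascript
// Illustrative shapes of the two CMS requests described above.
function buildAnalyzeRequest(imageUrl, settings) {
  return { url: imageUrl, options: settings }; // filter settings chosen by the user
}

function buildFeedbackRequest(imageUrl, predicted, proposed) {
  return {
    image: { url: imageUrl },      // data concerning the assessed image
    predictedCategory: predicted,  // category returned by the analysis service
    proposedCategory: proposed,    // category from the user's feedback
  };
}
```

Keeping both the predicted and the proposed category in the feedback payload is what makes the stored data usable as labeled training examples later.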
For the basic functionality of the CMS, two different kinds of neural networks are used: (i) a Single Shot MultiBox (SSD) model [17] and (ii) the Yahoo Open NSFW model [19]. Single Shot MultiBox is a CNN architecture that performs object recognition on images. For a given image, it returns AABBs, each with a scenario and a confidence score (0 to 1); the confidence score indicates to what extent a scenario is detected in the image. An existing implementation for PyTorch was used, together with a classification model that was initially trained on the Pascal VOC dataset [4], albeit with very general object classes such as aeroplane, car, cow, dog, or TV monitor. To this end, we also trained an SSD model on military-like classes of the Google Open Images Dataset [14] (such as rifle, tank, knife, missile) to be able to make predictions on somewhat realistic explicit content. The Yahoo Open NSFW model returns an NSFW score for a given image (0: safe, no nudity detected; 1: not safe, nudity detected); an implementation and a trained model for TensorFlow are available, but a proper threshold for this score is required. Such a threshold may differ according to the use case of the system and must be chosen carefully.
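Thresholding the continuous NSFW score could be sketched as follows; the cut-off values are illustrative placeholders, not recommended settings, since a suitable threshold depends on the use case.

```javascript
// Sketch: map the Open NSFW score in [0, 1] to an abstraction preset,
// or to null ("treated as safe"). Cut-offs are illustrative assumptions.
function nsfwPreset(score, cutoffs = { low: 0.3, medium: 0.6, strong: 0.8 }) {
  if (score >= cutoffs.strong) return "strong";
  if (score >= cutoffs.medium) return "medium";
  if (score >= cutoffs.low) return "low";
  return null; // below all cut-offs: no abstraction applied
}
```

Lowering the cut-offs trades more false positives (over-blurring) for fewer missed images, which is exactly the use-case-specific choice mentioned above.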
## 6 RESULTS AND DISCUSSION

### 6.1 Applications

The system described in this work offers advantages for the consumption, use, and eventually moderation of graphic content in different areas, which we now detail. First of all, it could assist in medical and surgical education. A primary use case would be to facilitate the education of nurses and medical students (not all destined to be surgeons) by reducing affect and aversion when looking at images showing blood and medical acts [1, 2]. Similarly, another use case concerns the communication between surgeons and patients [1, 2]. Patients are usually informed and prepared before the planning of future surgeries, and the explanations can be facilitated by the use of images. Yet, laypeople often find looking at images depicting surgery or blood extremely difficult [6, 27]. Communication between patients and doctors could therefore be improved with such automated image processing tools.
Furthermore, the system can be used to moderate internet forums and social networks. Nowadays, a lot of digital media, including images and videos, are shared through social media platforms such as Facebook or Instagram. While graphic content sometimes adheres to the Terms of Service (ToS) of these platforms [5, 30], many graphic media are not accepted for ethical or legal reasons. The filtering between authorized and non-authorized content is performed by a combination of algorithms and people, or by people alone (content moderators), depending on the platform. The software system presented in this paper could facilitate a content moderator's work and help to prevent mental issues that can be a consequence of looking at disturbing pictures all day [21]. The system could also alleviate the toll on volunteer moderators of platforms such as Wikipedia or Reddit. In a similar fashion, journalists and news editors might have to browse through hundreds of shocking images to illustrate their articles or to better understand the case they are reporting on (e.g., war zones, disasters, accidents). Our automated tool could also be useful in this specific context.


Figure 7: Distribution of request times per day (a) and hour (b). The center line of each box shows the mean, the box shows the interquartile range, and the whiskers show the rest of the distribution without obvious outliers.


Figure 8: Mean request times according to different tasks.
Finally, since exposure to graphic or pornographic content has been shown to be particularly detrimental to children [16], our tool could be particularly interesting in this regard. While blocking software could be used to limit access to nudity or pornography, such software tends to also limit access to useful information (e.g., online sex education) [24], is rarely maintained or even used [3], and is unlikely to block access to content on some social media platforms. Finally, blocking software rarely targets all sorts of graphic content. With respect to this issue, we hope that our tool can help limit the impact of unwanted graphic content, rather than eliminate it completely along with potentially useful information.
### 6.2 Performance Evaluation

We focus here only on system performance, as the potential reduction of affective responses through image abstraction techniques has already been studied [1, 2, 37]. The system's performance was evaluated by timing the different tasks involved in the process of content moderation and recording metadata of the requests and processed images (time stamp, image resolution, image transformation). The tasks timed on the CMS were (i) fetching the image, (ii) performing the image analysis, and (iii) performing the image transformation by abstraction. The CMS, CAS, and IAS are hosted on a single dedicated GPU server without significant network overhead. Thus, the network request times between clients and the services are assumed to be independent of our system and are not considered. To evaluate the performance in a way that considers all required tasks equally, only requests that led to an image transformation (an unsafe image was detected) were considered. The dedicated GPU server is equipped with a Xeon E5-2637 v4 3.5 GHz processor (8 cores), 64 GB RAM, and an NVIDIA Quadro M6000 GPU with 24 GB VRAM.
Over a week of testing the extension, about 35,000 requests involving image transformation were logged in total. Fig. 7(a) and Fig. 7(b) show the distribution of total request times per day and hour. These are mostly independent of the respective day and hour, with slight variances that could be explained by a varying server load or a differing number of requests arriving in a short period of time. Fig. 8 shows the mean time required for each task per day. The image analysis requires $\approx 75\%$ of the mean request time, with some outliers on day 6, where fetching the images suddenly takes as much time on average as analyzing them. Fig. 9(a) shows the resolution of images over the time required for analysis.
We further tested whether images of high resolution impact analysis performance. The documentation of the used CNNs describes that image data is propagated through the networks at a constant resolution, i.e., a downsampling is required before propagation through the CNNs. Fig. 9(a) shows that images have similar analysis times independently of their resolution and the required downsampling step. A further question is whether very small images (such as icons) cause a performance overhead if they appear on websites very often. Images smaller than $32 \times 32$ pixels are highlighted in red in Fig. 9(a). One can see that they need inference times similar to all other images. Further analysis of the statistics also indicates that they account for only $\approx 5\%$ of all images and less than one percent of the total request time. A similar analysis was performed for the image transformation task. Relating image resolution to transformation time (Fig. 9(b)) shows a linear dependency between image resolution (as the number of total pixels) and transformation time. However, this does not severely impact the overall performance, as image analysis is slower by a factor of about 10. Small images are highlighted again; they account for $\approx 5\%$ of the count and only about $\approx 4\%$ of the total time.


Figure 9: Relating the resolution of input images with analysis inference time (a) and image transformation time (b). Images smaller than $32 \times 32$ pixels are highlighted in red.


Figure 10: Relating requests per minute and the total request time.
Finally, we evaluated whether a high load (possibly caused by a high number of requests in a short time) causes higher total request times. To this end, each request is plotted as a point relating the number of requests in that minute to its required time (Fig. 10). The plot does not show any strong correlation between the number of requests per minute and the total request time; possibly, the load generated by the requests was not yet high enough.
### 6.3 Limitations

Applying specific filters to weaken affect when looking at medical images might work differently, or not at all, depending on the user and the specific image at hand. More generally, it is impossible to find a definition of offensive content that fits all users. What is perceived as "graphic" highly depends on the perception, age, views, cultural background, and personal history of the user, which results in many different potential use cases for each individual user. Even if a clear definition were possible, modern computer vision approaches are not able to correctly recognize offensive content all of the time. Additionally, detecting objects that are known to be offensive is not enough: the context in which these objects appear, as well as the overall image composition, could completely change the meaning.
We do not have the capacity to train accurate neural networks for real use cases because there are no acceptable, publicly available datasets of offensive content. Even with a proper dataset, it is unlikely that one could train a perfect neural network that classifies all images correctly and detects offensive content every time.
Regarding the browser extension, it is difficult to even detect all the images on websites, since there are a number of different ways to integrate an image into a web page, e.g., through custom HTML elements or extensive JS usage. If images are loaded asynchronously via JS and many images change simultaneously, the extension is not able to react quickly enough to all of the changes, resulting in unobfuscated or non-abstracted images. Moreover, JS is executed in a single thread in all established web browsers, which increases the occurrence of time-related problems. Additionally, a very low bandwidth could slow down the processing of images; the impact would be comparably small, though, because not much additional data is sent and the client still downloads each image only once.
## 7 CONCLUSIONS AND FUTURE WORK

This paper presents a service-based approach to facilitate the consumption of digital graphic images online. To achieve this, an automatic analysis and classification of possibly offensive content is performed using services and, based on the results, image abstraction techniques are applied with varying levels of abstraction. This functionality is accessed and configured via a browser extension that is supported by most modern web browsers. The presented content moderation approach has various applications, such as reducing affective responses during medical education, allowing less distressing browsing of social media, or enabling safer browsing for child protection.
Regarding future work, the user experience of the extension could be improved as follows. Modern object recognition algorithms are not only able to detect certain objects in images but are also able to locate them. With respect to this, image abstraction techniques can be applied to segments of the image to maintain the context and only abstract the sensitive image regions. This might support the user in identifying whether an image was classified correctly. As an alternative, the image abstraction techniques could be applied to the complete image, but the user could be given the option to interactively reveal parts of the image using lens-based interaction metaphors.
[1] L. Besançon, A. Semmo, D. J. Biau, B. Frachet, V. Pineau, E. H. Sariali, M. Soubeyrand, R. Taouachi, T. Isenberg, and P. Dragicevic. Reducing Affective Responses to Surgical Images and Videos Through Stylization. Computer Graphics Forum, 39(1):462-483, Jan. 2020. doi: 10.1111/cgf.13886

[2] L. Besançon, A. Semmo, D. J. Biau, B. Frachet, V. Pineau, E. H. Sariali, R. Taouachi, T. Isenberg, and P. Dragicevic. Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization. In Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering, pp. 4:1-4:13. ACM Press, Victoria, Canada, Aug. 2018. doi: 10.1145/3229147.3229158

[3] A. Davis, C. Wright, M. Curtis, M. Hellard, M. Lim, and M. Temple-Smith. 'Not my child': parenting, pornography, and views on education. Journal of Family Studies, pp. 1-16, 2019. doi: 10.1080/13229400.2019.1657929

[4] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision, 88(2):303-338, June 2010.

[5] Facebook. Community standards. https://www.facebook.com/communitystandards/objectionable_content. Accessed: 2021-04-07.

[6] P. T. Gilchrist and B. Ditto. The effects of blood-draw and injection stimuli on the vasovagal response. Psychophysiology, 49(6):815-820, 2012. doi: 10.1111/j.1469-8986.2012.01359.x

[7] T. Gillespie. Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press, 2018. doi: 10.12987/9780300235029

[8] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524, 2013.

[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.

[10] E. A. Holman, D. R. Garfin, and R. C. Silver. Media's role in broadcasting acute stress following the Boston Marathon bombings. Proceedings of the National Academy of Sciences, 111(1):93-98, 2014. doi: 10.1073/pnas.1316265110

[11] T. L. Hopwood and N. S. Schutte. Psychological outcomes in reaction to media exposure to disasters and large-scale violence: A meta-analysis. Psychology of Violence, 7(2):316, 2017. doi: 10.1037/vio0000056

[12] T. Isenberg. Interactive NPAR: What type of tools should we create? In Proceedings of Expressive, pp. 89-96, 2016. doi: 10.2312/exp.20161067

[13] S. Jhaver, I. Birman, E. Gilbert, and A. Bruckman. Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator. ACM Trans. Comput.-Hum. Interact., 26(5), July 2019. doi: 10.1145/3338243

[14] I. Krasin, T. Duerig, N. Alldrin, A. Veit, S. Abu-El-Haija, S. Belongie, D. Cai, Z. Feng, V. Ferrari, V. Gomes, A. Gupta, D. Narayanan, C. Sun, G. Chechik, and K. Murphy. OpenImages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2016.

[15] A. L. Lemos, F. Daniel, and B. Benatallah. Web service composition: A survey of techniques and tools. ACM Computing Surveys, 48(3):33:1-33:41, 2015. doi: 10.1145/2831270

[16] M. S. C. Lim, E. R. Carrotte, and M. E. Hellard. The impact of pornography on gender-based violence, sexual health and well-being: what do we know? Journal of Epidemiology & Community Health, 70(1):3-5, 2016. doi: 10.1136/jech-2015-205453

[17] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. E. Reed, C. Fu, and A. C. Berg. SSD: Single shot multibox detector. CoRR, abs/1512.02325, 2015.

[18] V. L. Ltd. Automatic visual content moderation - reviewing the state-of-the-art vendors in cognitive cloud services to spot unwanted content. https://valossa.com/automatic-visual-content-moderation/. Accessed 2021-04-07.

[19] J. Mahadeokar and G. Pesavento. Open sourcing a deep learning solution for detecting NSFW images. https://yahooeng.tumblr.com/post/151148689421/open-sourcing-a-deep-learning-solution-for. Accessed 2021-04-07.

[20] M. Mueller and B. Pross. OGC WPS 2.0.2 Interface Standard. Open Geospatial Consortium, 2015. http://docs.opengeospatial.org/is/14-065/14-065.html.

[21] C. Newton. The trauma floor. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona, The Verge, Feb 2019. Accessed: 2021-04-07.

[22] J. Redmon and A. Farhadi. YOLO9000: Better, faster, stronger. arXiv preprint arXiv:1612.08242, 2016.

[23] J. Redmon and A. Farhadi. YOLOv3: An incremental improvement. arXiv, 2018.

[24] C. R. Richardson, P. J. Resnick, D. L. Hansen, H. A. Derry, and V. J. Rideout. Does Pornography-Blocking Software Block Access to Health Information on the Internet? JAMA, 288(22):2887-2894, 2002. doi: 10.1001/jama.288.22.2887

[25] M. Richter, M. Söchting, A. Semmo, J. Döllner, and M. Trapp. Service-based Processing and Provisioning of Image-Abstraction Techniques. In Proceedings International Conference on Computer Graphics, Visualization and Computer Vision (WSCG), pp. 97-106. Computer Science Research Notes (CSRN), Plzen, Czech Republic, 2018. doi: 10.24132/CSRN.2018.2802.13

[26] S. Robinson. Filtering inappropriate content with the Cloud Vision API, Aug 2016.

[27] C. N. Sawchuk, J. M. Lohr, D. H. Westendorf, S. A. Meunier, and D. F. Tolin. Emotional responding to fearful and disgusting stimuli in specific phobics. Behaviour Research and Therapy, 40(9):1031-1046, 2002. doi: 10.1016/S0005-7967(01)00093-6

[28] Sightengine. Sightengine online API. https://sightengine.com/demo. Accessed 2021-04-07.

[29] R. R. Thompson, N. M. Jones, E. A. Holman, and R. C. Silver. Media exposure to mass violence events can fuel a cycle of distress. Science Advances, 5(4), 2019. doi: 10.1126/sciadv.aav3502

[30] Twitter. Twitter's sensitive media policy. https://help.twitter.com/en/rules-and-policies/media-policy. Accessed: 2021-04-07.

[31] M.-F. Vaida, V. Todica, and M. Cremene. Service oriented architecture for medical image processing. International Journal of Computer Assisted Radiology and Surgery, 3(3):363-369, 2008. doi: 10.1007/s11548-008-0231-8

[32] M. Viggiato, R. Terra, H. Rocha, M. T. Valente, and E. Figueiredo. Microservices in practice: A survey study. CoRR, abs/1808.04836, 2018.

[33] O. Wegen, M. Trapp, J. Döllner, and S. Pasewaldt. Performance Evaluation and Comparison of Service-based Image Processing based on Software Rendering. In Proceedings International Conference on Computer Graphics, Visualization and Computer Vision (WSCG), pp. 127-136. Computer Science Research Notes (CSRN), Plzen, Czech Republic, 2019. doi: 10.24132/csrn.2019.2901.1.15

[34] R. P. Winkler and C. Schlesiger. Image processing REST web services. Technical Report ARL-TR-6393, Army Research Laboratory, Adelphi, MD 20783-119, 2013.

[35] H. Winnemöller, S. C. Olsen, and B. Gooch. Real-time Video Abstraction. ACM Transactions on Graphics, 25(3):1221-1226, 2006.

[36] M. Würsch, R. Ingold, and M. Liwicki. DIVAServices - a RESTful web service for document image analysis methods. Digital Scholarship in the Humanities, 32(1):i150-i156, 2017. doi: 10.1093/llc/fqw051

[37] H. Yang, J. Han, and K. Min. EEG-based estimation on the reduction of negative emotions for illustrated surgical images. Sensors, 20(24), 2020. doi: 10.3390/s20247103

[38] B. Zhou, H. Zhao, X. Puig, T. Xiao, S. Fidler, A. Barriuso, and A. Torralba. Semantic Understanding of Scenes through ADE20K Dataset. International Journal on Computer Vision, 127(3):302-321, 2019.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/4j3avB-mrk/Initial_manuscript_tex/Initial_manuscript.tex
| 1 |
+
§ SERVICE-BASED ANALYSIS AND ABSTRACTION FOR CONTENT MODERATION OF DIGITAL IMAGES
|
| 2 |
+
|
| 3 |
+
Online Submission ID: 2
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
This paper presents a service-based approach to content moderation of digital visual media while browsing web pages. It enables the automatic analysis and classification of possibly offensive content, such as images of violence, nudity, or surgery, and applies common image abstraction techniques at different levels of abstraction to lower the content's affective impact. The system is implemented using a microservice architecture that is accessible via a browser extension, which can be installed in most modern web browsers. It can be used to facilitate content moderation of digital visual media such as digital images or to enable parental control for child protection.

Index Terms: Computer systems organization-Client-server architectures; Computing methodologies-Image processing; Information systems-Content analysis and feature selection; Information systems-Browsers; Human-centered computing-Web-based interaction; Human-centered computing-Graphical user interfaces;
§ 1 INTRODUCTION
This work's main objective is to support and facilitate human-driven moderation of digital visual media such as digital images, which are a dominant category in the domain of user-generated content, i.e., content that is acquired and uploaded to content providers. Content moderation usually requires humans who view content and decide whether it is appropriate or not, e.g., regarding violence, racism, nudity, or privacy; these aspects may have serious impacts in various directions, including legal and psychological issues.

To cope with negative impacts on the viewer's psychology and to alleviate legal issues, we propose a combination of content analysis and classification together with suitable image abstraction techniques that first detect inappropriate content and subsequently disguise and obfuscate content depictions or specific portions thereof (Fig. 1).

§ 1.1 PROBLEM STATEMENT AND OBJECTIVES
From a technical perspective, the implementation of such an approach needs to be independent of operating systems and processing hardware. Thus, we use a service-based approach to detect and classify visual media content and to perform the respective abstraction techniques depending on the detection results. Deep-learning approaches are used that allow for defining what "offensive" means. Such functionality can be integrated into web browsers using a browser extension based on a World Wide Web Consortium (W3C) draft standard. This way, the content moderation functionality can be applied and integrated into professional IT solutions or used by means of end-user apps.

Facilitate Content Moderation (Objective-1): Today, content moderation is becoming more and more crucial for digital content providers (e.g., Facebook or YouTube) to fulfill their responsibilities and to implement ethical content handling [7]. Moderation comprises manual examination for the detection and classification of critical or inappropriate content. For some types of visual media content, detection and classification can already be performed semi-automatically using machine-learning approaches. However, since automated moderation is often limited (see, e.g., [13]), the final moderation decisions are typically made by humans, who are required to consume the unfiltered content.

Figure 1: Application example for the combination of service-based analysis and image abstraction used for the content moderation functionality provided by a browser extension (a). It enables the classification of digital input images (left) displayed on web sites according to different categories (rows) and their respective disguising using adjustable image abstraction techniques (right), such as pixelation (c), cartoon stylization (e), or blur (g).

Figure 2: Example of using image content analysis in combination with image abstraction techniques to disguise possibly inappropriate content for subsequent manual moderation: (a) input image, (b) results of image segmentation, (c) completely abstracted image, (d) partial abstraction techniques applied to the child only.
Recent studies indicate that workers concerned with these tasks are often subject to severe mental health damage due to traumatic experiences or monotonous duties [10, 11, 29]. Interestingly, non-photo-realistic rendering of these stimuli could potentially reduce their negative impacts [1, 2, 12]. Therefore, these negative consequences could be mitigated by reducing the affective responses that arise from consuming the unfiltered content, using a combination of automatic analysis and abstraction techniques as follows (Fig. 2): (i) visual media content is analyzed, e.g., to detect, classify, and possibly perform a semantic segmentation; (ii) abstraction techniques are used to partially or completely disguise possibly offensive content prior to (iii) the interactive visual examination.

Service-oriented Architectures (Objective-2): To implement an approach for Objective-1 and to make it available to a wide range of applications and users, we set out to provide a prototypical micro-app (e.g., a browser extension) based on a microservice infrastructure. For this, two separate microservices, for analysis and abstraction respectively, are orchestrated by a content moderator service. This enables a use-case specific exchange, replacement, or extension of specific functionality or complete services without risking the overall functionality. By using a Hyper Text Markup Language (HTML) User Interface (UI) integrated into a W3C browser extension, the abstracted content can be interpolated/blended with the original one interactively.
In current state-of-the-art systems, analysis and abstraction of images and videos are mostly performed using on-device computation. Thus, these systems' processing capabilities are limited by the device's hardware (Central Processing Unit (CPU) and Graphics Processing Unit (GPU)) and software (Operating System (OS)). Being subject to high heterogeneity (device ecosystem), this has major drawbacks concerning the applicability, maintainability, and provisioning of content-moderation applications. In particular, the software development process and integration into 3rd-party applications is aggravated by (i) different operating systems (e.g., Windows, Linux, macOS, iOS, Android), (ii) heterogeneous hardware configurations of varying efficiency and Application Programming Interfaces (APIs) (e.g., OpenGL, Vulkan, Metal, DirectX), as well as (iii) display sizes and formats. Further, on-device processing often does not scale with respect to increasing input complexity (e.g., number of images, increasing spatial resolution of camera sensors), which poses problems especially for mobile devices (e.g., battery drain or overheating).

§ 1.2 APPROACH AND CONTRIBUTIONS
Concerning the aforementioned drawbacks of on-device processing, the proposed combination of standardized technology for micro-apps together with service-oriented architectures and infrastructures offers a variety of advantages. In particular, (i) implementation and testing of specific analysis and abstraction techniques need to be performed only once (controlling the system's software and hardware due to virtualization), and (ii) functionality is offered to a wide range of web-based applications using standardized protocols, so it can be easily integrated into 3rd-party applications and extended rapidly. This way, (iii) the proposed service-based approach can automatically scale service instances with respect to input-data complexity and the computing power required. Thus, software up-to-dateness and exchangeability can be easily achieved. Further, the software development process of web-based thin clients is less complex compared to rich clients. Together with the upcoming 5G telecommunication standard featuring (among others) high data rates, reduced latency, and energy saving, the presented approach seems feasible in stationary as well as mobile contexts. Finally, the intellectual property of the service providers can be effectively protected by not shipping the respective software components to customers, thus impeding the possibility of reverse engineering.

§ 2 BACKGROUND AND RELATED WORK
§ 2.1 VISUAL CONTENT ANALYSIS USING NEURAL NETWORKS
Image analysis can be performed according to different tasks. Image classification, e.g., using the ResNet Convolutional Neural Network (CNN) architecture [9], determines how likely an image belongs to one or more specific categories. With object recognition, the goal is to identify objects displayed in images together with their respective bounding boxes; CNN architectures such as YOLOv3 [23] and Single Shot Detector (SSD) [17] can be applied here. Another task is image segmentation, where objects and regions of certain semantics are identified on a per-pixel basis. This can be performed with the R-CNN approach of Girshick et al. [8]. These different kinds of CNNs need to be trained with datasets consisting of images labeled with categories, objects, or image regions. There are public datasets available to train and to benchmark the performance of different CNN architectures. Popular examples are the Pascal VOC [4] and ADE20K [38] datasets. They contain labeled image data for image classification, object recognition, and even semantic segmentation. For the task of content moderation, their objects and categories are quite general; mostly everyday objects are contained. Another dataset is the Google Open Images dataset [14]. It contains about 9 million images labeled with object information. The object categories are hierarchically organized and span over 6000 categories. Another approach, which spans even more object categories, is YOLO9000 [22], a variation of the YOLOv3 [23] architecture that was trained on a dataset with more than 9000 different objects. However, this dataset is not publicly available.

Further, there are approaches for image analysis that are more directed towards content moderation. These are mostly provided in the form of RESTful APIs that allow for sending images and receiving analysis results. The Google Cloud Vision API [26] assigns scores to images depending on how likely they represent the categories adult, violence, medical, and spoof. The API of Sightengine [28] analyzes images for the occurrence of categories such as nudity, weapons, alcohol, drugs, scams, and other offensive content. Valossa [18] reviews cloud-based vendors supporting the classification of unsafe content and describes the difficulties of defining what inappropriate content actually is. They conclude that content analysis models must be able to understand the context of objects and depicted situations in order to decide whether images contain unsafe content. They offer an evaluation dataset with images in 16 different content categories and benchmark several online RESTful APIs on it. However, these approaches do not offer any public datasets or specify which machine learning models they use exactly. In contrast, Yahoo [19] offers a trained CNN model that can be used free of charge. The Yahoo Open Not Safe For Work (NSFW) model is basically a ResNet [9] that was fine-tuned on a dataset of NSFW images depicting nudity and other offensive content. For a given image, it determines a score indicating how likely the image contains unsafe content.

Figure 3: Conceptual overview of the microservice architecture of the presented approach. The individual service components (blue) are communicating via RESTful APIs and are used by a browser extension (orange) that integrates into standard web technologies (gray).
§ 2.2 SERVICE-BASED IMAGE PROCESSING
Several software architectural patterns are feasible for implementing service-based image processing. One prominent style of building a web-based processing system for arbitrary data is the service-oriented architecture [31]. It enables server developers to set up various processing endpoints, each providing a specific functionality and covering a different use case. These endpoints are accessible as a single entity to the client, i.e., the implementation is hidden from the requesting clients but can be realized through an arbitrary number of self-contained services.

Since web services are usually designed to maximize their reusability, their functionality should be simple and atomic. Therefore, the composition of services is critical for fulfilling more complex use cases [15]. The two most prominent patterns for implementing such composition are choreography and orchestration. The choreography pattern describes decentralized collaboration directly between modules without a central component. The orchestration pattern describes collaboration through a central module, which requests the different web services and passes the intermediate results between them.
In the field of image analysis, Wursch et al. [36] present a web-based tool that enables users to perform various image analysis methods, such as text-line extraction, binarization, and layout analysis. It is implemented using a number of Representational State Transfer (REST) web services. Application examples include multiple web-based applications for different use cases. Further, the viability of implementing image-processing web services using REST has been demonstrated by Winkler et al. [34], including the ease of combination of endpoints. Another example for service-based image processing is Leadtools (https://www.leadtools.com), which provides a fixed set of approx. 200 image-processing functions with a fixed configuration set via a web API.
In this work, a similar approach using REST is chosen, however, with a different focus in terms of the granularity of services. The advantages of using microservices are (i) increased scalability of the components, (ii) easy deployment and maintainability, as well as (iii) the possibility to introduce various technologies into one system [32]. For our work, we extend a microservice platform for cloud-based visual analysis and processing that was first presented by Richter et al. [25]. Building on this platform, Wegen et al. [33] present an approach for performing service-based image processing using software rendering to balance the cost-performance relation.

In the field of geodata, the Open Geospatial Consortium (OGC) set standards for a complete server-client ecosystem. As part of this specification, different web services for geodata are introduced [20]. Each web service is defined through specific input and output data and the ability to self-describe its functionality. In contrast, in the domain of general image-processing there is no such standardization yet. However, it is possible to transfer concepts from the OGC standard, such as unified data models. These data models are implemented using a platform-independent effect format. In the future, it is possible to transfer even more concepts set by the OGC to the general image-processing domain, such as the standardized self-description of services.
§ 3 METHOD
With respect to Objective-2, we choose to implement our approach using microservices, which are described as follows.
§ 3.1 SYSTEM OVERVIEW
Fig. 3 shows a conceptual overview of the components as well as their data and control flow. The system comprises the following components, which communicate via RESTful APIs:

Moderation Browser-Extension: A client device running a web browser that (i) hosts the moderation browser-extension and (ii) displays an arbitrary website with visual media content. The website's visual media content is hosted by a content provider and referenced by a Uniform Resource Locator (URL). The browser extension accesses the URLs via a content script and uses them to query the RESTful API of the CMS asynchronously.

Content Moderation Service (CMS): The CMS orchestrates the interplay between instances of the analysis and abstraction services, which encapsulate the respective techniques. Upon request, it downloads the image from the given URL and forwards its content to an analysis service instance by querying the analysis RESTful API. Depending on the analysis response, it uses the configuration data to map the analysis results to specific parameter values that are used to query the image abstraction service. The response is subsequently forwarded to the browser extension, which replaces the placeholder content with the abstracted content.

Content Analysis Service (CAS): The CAS determines whether an image contains offensive content and therefore has to be filtered using image abstraction techniques. It receives image data from the CMS and performs image analysis with different image recognition models, comprising multiple image classification and object recognition CNNs. It then returns the results of the different analysis models in a unified and structured way.

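The unified, structured result format can be sketched as a small data model; the field names (`name`, `score`, `bbox`) and the SSD-style input tuples are illustrative assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Tag:
    """One analysis result: an object or category with a confidence score."""
    name: str
    score: float  # likelihood in [0, 1]
    bbox: Optional[Tuple[int, int, int, int]] = None  # AABB (x, y, w, h), if any

def normalize_detector_output(raw: list) -> List[Tag]:
    """Map a model-specific output (here: hypothetical detector tuples of
    (label, confidence, box)) to the unified tag format."""
    return [Tag(name=label, score=conf, bbox=box) for label, conf, box in raw]

tags = normalize_detector_output([("rifle", 0.87, (10, 20, 64, 32))])
```

Each additional ML model only needs its own small adapter to this format, which is what makes the CAS extensible.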
Image Abstraction Service (IAS): The IAS provides an interface for applying various image abstraction techniques (e.g., blur, pixelation, or more specific operations such as cartoon stylization, etc.) with presets of different strength to images that are identified by the CAS to possibly contain offensive content.
§ 3.2 BROWSER-EXTENSION FOR MODERATION CLIENT
The browser extension traverses the Document Object Model (DOM) tree and utilizes a MutationObserver object to detect changes in the respective image and picture tags; a DOM MutationObserver is provided by the corresponding web browser and is intended to watch for changes made to the DOM tree. As soon as an image is detected, it is initially blurred using Cascading Style Sheets (CSS) image filters. This prevents users from seeing possibly disturbing image content while the CMS's RESTful API is queried and the image's URL is transmitted to it. The response received from the CMS contains information on whether or not an image contains disturbing content and therefore needs to be disguised. In case an image is categorized as offensive, the response also includes a processed version of the image.

Figure 4: Filtered image with an overlay shown on mouse-over.
Subsequent to receiving the response, the local CSS blur filter is removed. If the image was classified as containing suggestive content, the original image is replaced by the processed one; otherwise, the original image is displayed. Finally, an overlay is added for every image (Fig. 4) that provides buttons for (i) allowing users to report misclassified images and (ii) toggling between the original and the processed image. If a user does not agree with the determined content classification, they can propose a more suitable one. A modal (Fig. 5) appears where the user can select whether another category is more suitable, the image is disturbing in a different way, or it should not be filtered.

§ 3.3 CONTENT MODERATION SERVICE
The CMS moderates communication and interactions between the browser extension, the analysis service, and the image abstraction service. Clients use it to initiate the analysis process that consists of the following steps. First, the image is downloaded using the URL specified in the analysis request. Then, the image analyzer is queried to detect if the image contains offensive content. Subsequently, the CMS maps the image analysis results to an image abstraction technique and forwards it to the abstraction service for application. Finally, it notifies the client whether the image contains offensive content and, if it does, attaches the processed image to the response. If a user decides to send feedback via the feedback modal (e.g., the chosen scenario is unsuitable), a request to a feedback route is sent and the feedback is stored for further use.
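The orchestration steps above can be sketched as follows. This is an illustrative Python sketch, not the actual Node.js implementation; the callable names and the shape of the return value are assumptions:

```python
def moderate(url, fetch_image, analyze, map_to_abstraction, abstract):
    """Orchestrate one moderation request (the CMS role). The four callables
    stand in for the download step and the CAS/IAS RESTful calls."""
    image = fetch_image(url)              # 1. download from the content provider
    result = analyze(image)               # 2. query the Content Analysis Service
    mapping = map_to_abstraction(result)  # 3. map analysis result to technique/preset
    if mapping is None:                   # no scenario matched: nothing to disguise
        return {"offensive": False}
    processed = abstract(image, mapping)  # 4. query the Image Abstraction Service
    return {"offensive": True, "image": processed}

# Usage with stand-in service stubs:
response = moderate(
    "https://example.org/img.png",
    fetch_image=lambda u: "IMAGE_BYTES",
    analyze=lambda img: [("rifle", 0.9)],
    map_to_abstraction=lambda r: {"technique": "pixelation", "preset": "strong"},
    abstract=lambda img, m: "PROCESSED_BYTES",
)
```

Injecting the service calls as parameters mirrors the orchestration pattern: the central module owns the control flow while each service stays independently replaceable.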
Figure 5: A feedback pop-up enables the correction of misclassifications and feeds this information back to the server.

§ 3.4 CONTENT ANALYSIS & ABSTRACTION SERVICE
The CAS provides an interface for clients to analyze images for the presence of certain objects and categories. Clients send the image data and receive analysis results represented in the form of tags with associated scores and metadata. Tags can comprise objects displayed in an image or categories that can be associated with an image. The score describes how likely an object or category is present. The metadata can include an Axis-Aligned Bounding Box (AABB) that describes the estimated position of an object within the image.

The image analysis is performed with different machine learning models, in our case CNNs. The output, specific to each model used, must be transformed into the unified analysis result format. This allows extending the analysis service with additional Machine Learning (ML) models. The results of the image analysis are used by the content moderator to decide what kind of content an image shows and whether an image abstraction is applied.

If offensive content is detected in an image, it is disguised by applying an image abstraction technique to it. The IAS provides an endpoint to apply specific techniques, such as pixelation or blur, to an image. For every technique, different presets and parameters are provided to indicate different degrees and styles of image abstraction. To query the abstraction endpoint, the IAS requires the image's data as well as a mapped abstraction technique and its preset (Sec. 4). The CMS performs this mapping by taking the analyzed scenario and score into account. In response, the processed image is returned by the IAS.

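As an illustration of one such abstraction technique, pixelation can be realized as block-wise averaging. This minimal sketch operates on a grayscale image given as a nested list and is not the service's actual implementation; stronger presets would simply use a larger block size:

```python
def pixelate(pixels, block):
    """Replace each block x block region of a 2D grayscale image
    with the average value of that region."""
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            avg = sum(pixels[y][x] for y in ys for x in xs) // (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    out[y][x] = avg
    return out
```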
§ 4 MAPPING OF ANALYSIS RESULTS
The analysis result for an image is a set of tags with scores. The tags describe objects that can be displayed in an image or categories that can be associated with it. The scores describe how likely these tags are actually present in an image. In the sense of content moderation, an image abstraction means processing a user-generated image with an image abstraction technique using a specific parameter preset. To process images with the goal of reducing explicit content, one therefore has to define a mapping from analysis results to an image abstraction technique with a specific parameter preset.

In the proposed system, each tag is manually associated with a scenario. A scenario is a type of content that should be moderated. For this system, the scenarios nudity, violence, and medical are used. Each scenario is specified by (i) a name, (ii) a set of tag names and a score threshold, (iii) an image abstraction technique, and (iv) three effect presets, sorted by degree of abstraction (low, medium, strong). A scenario matches an analysis result if any of the scenario's tag names are contained in the received analysis tags and the received score is equal to or higher than the scenario's score threshold. Because an analysis result could match multiple scenarios, the scenarios are prioritized and the matching scenario with the highest priority is chosen. If no scenario matches, no image abstraction is required. Otherwise, the user-defined preset is selected and used for the subsequent image abstraction step.

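Under these matching rules, scenario selection reduces to a threshold test followed by a priority pick. The tag names, thresholds, and priority values in this sketch are illustrative assumptions, not the configuration used in the paper:

```python
# Each scenario: name, set of tag names, score threshold, priority (lower = higher).
SCENARIOS = [
    {"name": "nudity",   "tags": {"nudity"},                 "threshold": 0.6, "priority": 0},
    {"name": "violence", "tags": {"rifle", "knife", "tank"}, "threshold": 0.5, "priority": 1},
    {"name": "medical",  "tags": {"surgery", "blood"},       "threshold": 0.5, "priority": 2},
]

def match_scenario(analysis_tags, scenarios=SCENARIOS):
    """Return the highest-priority scenario matched by any (tag name, score)
    pair, or None if no image abstraction is required."""
    matched = [s for s in scenarios
               if any(name in s["tags"] and score >= s["threshold"]
                      for name, score in analysis_tags)]
    return min(matched, key=lambda s: s["priority"]) if matched else None
```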
Figure 6: Images with similar content (left) but different mappings (right) based on how likely they are rated as showing violent content.
Fig. 6 shows four images with similar content but different mappings. All four images depict scenes with weapons and are categorized as showing violent content by the CAS. The mappings are chosen according to the score that indicates how likely these images show violent content. Fig. 6(a) is disguised using a Gaussian blur preset with a large kernel size (Fig. 6(b)). Fig. 6(d) shows an applied cartoon filter using a black & white preset with thin edges to remove color information. Fig. 6(f) uses a cartoon filter with thick edges. Fig. 6(h) shows the result of a pixelation abstraction technique for aggressive disguise. The effect that is mapped to an image is arbitrary and can be customized, but different abstraction techniques are more or less suitable for certain scenarios. In particular, this work uses a pixelation technique for images showing violent content, a cartoon filter for medical content based on the work of Winnemöller et al. [35] and as suggested by the studies of Besançon et al. [1, 2], and a Gaussian blur for images that depict nudity.

§ 5 IMPLEMENTATION ASPECTS
In a service-based architecture like the one used in the content moderation scenario, messages need to be exchanged between the individual services. Therefore, each service provides a RESTful API and queries other services correspondingly. The browser extension is implemented using JavaScript (JS) and utilizes the browser's API to access and alter a website's DOM tree, to administer local storage, and to react to changes made to the website. For sending requests to other services, the fetch API is used. Filter options allow users to customize whether and to what extent suggestive images should be abstracted, and all three scenarios can be customized individually. Users can choose among three levels of abstraction or can switch off single scenarios completely.

The CMS is implemented using Node.js and provides a RESTful API with two endpoints: one for requesting an image to be analyzed and categorized and one for sending user feedback. A request sent to the analyze endpoint needs to include the URL of the image that should be analyzed and options that represent the filter settings made by the user. A feedback request consists of three different pieces of information: the data concerning the assessed image, the category proposed by the image analyzer service, and the category included in the user's feedback. This data is stored and used as a training set for machine learning algorithms that can be used during image classification. The implementation of the IAS also relies on Node.js. It provides an endpoint that accepts requests that need to include the data of the image to be abstracted and an operation that should be applied to the image. A preset, related to the desired effect, can also be sent to this endpoint as an optional parameter. The CAS is implemented using Python and flask. It provides a RESTful API with an endpoint to start the analysis process.
For the basic functionality of the CMS, two different kinds of neural networks are used: (i) a Single Shot MultiBox model [17] and (ii) the Yahoo Open NSFW model [19]. Single Shot MultiBox is a CNN architecture that performs object recognition on images. For a given image, it returns AABBs with a scenario and a confidence score (0 to 1). The confidence score indicates to what extent a scenario is detected in the image. An existing implementation for PyTorch was used, as well as a classification model that had initially been trained on the Pascal VOC dataset [4], albeit with very general object classes such as aeroplane, car, cow, dog, or TV monitor. To this end, we also trained an SSD model on military-like classes of the Google Open Images dataset [14] (such as rifle, tank, knife, missile) to be able to make predictions on somewhat realistic explicit content. The Yahoo Open NSFW model returns an NSFW score for a given image (0 - safe, no nudity detected; 1 - not safe, nudity detected); an implementation and trained model for TensorFlow are available, but a proper threshold for this score is required. Such a threshold might differ according to the use case of the system and must be chosen carefully.

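Choosing such a threshold, and a degree of abstraction from the score, can be as simple as the following sketch; the cut-off values here are illustrative assumptions, not values from the paper:

```python
def preset_for_nsfw_score(score, threshold=0.5):
    """Map an NSFW score in [0, 1] to one of the three abstraction presets,
    or None if the image is considered safe. The threshold must be tuned to
    the use case (e.g., stricter for parental control)."""
    if score < threshold:
        return None
    if score < 0.7:
        return "low"
    if score < 0.9:
        return "medium"
    return "strong"
```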
§ 6 RESULTS AND DISCUSSION
§ 6.1 APPLICATIONS
The system described in this work offers advantages for the consumption, use, and eventually moderation of graphic content in different areas, which we now detail. First of all, it could assist in medical and surgical education. A primary use case would be to facilitate the education of nurses and medical students (not all destined to be surgeons) by reducing affective responses and aversion when looking at images showing blood and medical acts [1, 2]. Similarly, another use case concerns the communication between surgeons and patients [1, 2]. Patients are usually informed and prepared before the planning of future surgeries, and the explanations can be facilitated by the use of images. Yet, laypeople often find looking at images depicting surgery or blood extremely difficult [6, 27]. Communication between patients and doctors could therefore be improved with such automated image processing tools.

Furthermore, the system can be used to moderate internet forums and social networks. Nowadays, a lot of digital media, including images and videos, is shared through social media platforms such as Facebook or Instagram. While graphic content sometimes adheres to the Terms of Service (TOS) of these platforms [5, 30], many graphic media are not accepted for ethical or legal reasons. The filtering between authorized and non-authorized content is performed by a combination of algorithms and people, or by people alone (content moderators), depending on the platform. The software system presented in this paper could facilitate a content moderator's work and help to prevent the mental issues that can be a consequence of looking at disturbing pictures all day [21]. The system could also alleviate the toll on volunteer moderators of platforms such as Wikipedia or Reddit. In a similar fashion, journalists and news editors might have to browse through hundreds of pieces of shocking content to illustrate their articles or better understand the case they are reporting on (e.g., war zones, disasters, accidents). Our automated tool could also be useful in this specific context.

Figure 7: Distribution of request times per day (a) and hour (b). The center line of each box shows the mean, the boxes show the quartiles, and the whiskers show the rest of the distribution without obvious outliers.

Figure 8: Mean request times according to different tasks.
Finally, since exposure to graphic or pornographic content has been shown to be particularly detrimental to children [16], our tool could be particularly interesting in this regard. While blocking software can be used to limit access to nudity or pornography, such software tends to also limit access to useful information (e.g., online sex education) [24], is rarely maintained or even used [3], and is unlikely to block access to content on some social media platforms. Moreover, blocking software rarely targets all sorts of graphic content. With respect to this issue, we hope that our tool can help limit the impact of unwanted graphic content, rather than eliminating it completely along with potentially useful information.

§ 6.2 PERFORMANCE EVALUATION
We only focus here on system performance as the potential reduction of affective responses through image abstraction techniques has already been studied $\left\lbrack {1,2,{37}}\right\rbrack$ . The system’s performance was evaluated by timing different tasks involved in the process of content moderation and comparing these to assemble a metric of the requests and processed images (time stamp, image resolution, image transformation). Timed tasks measured on the CMS were(i)fetching image, (ii) performing image analysis, and (iii) performing image transformation by abstraction. The CMS, CAS, and IAS are hosted on a single dedicated GPU-server without a significant network overhead. Thus, the network request times between clients and the services are assumed to be independent of our system and are not considered. To evaluate the performance in a way that considers all required tasks equally, only requests that led to an image transformation (unsafe image was detected) were considered. The dedicated GPU-server is equipped with Xeon E5-2637 v4, 3.5 GHz processor (8 cores), 64 GB RAM, NVIDIA Quadro M6000 GPU with 24 GB VRAM.
Over a week of testing the extension, about 35,000 requests involving an image transformation were logged in total. Fig. 7(a) and Fig. 7(b) show the distribution of total request times per day and per hour. These are mostly independent of the day and hour, with slight variances that could be explained by a varying server load or a different number of requests arriving in a short period of time. Fig. 8 shows the mean time required for each task per day. The image analysis requires $\approx 75\%$ of the mean request time, with some outliers on day 6, where fetching the images suddenly takes, on average, as much time as analyzing them. Fig. 9(a) shows the resolution of images over the time required for analysis.
We further tested whether images of high resolution impact analysis performance. The documentation of the used CNNs describes that image data is propagated through the networks at a constant resolution, i.e., downsampling is required before propagation through the CNNs. Fig. 9(a) shows that images have similar analysis times independently of their resolution and the required downsampling step. A further question is whether very small images (such as icons) cause a performance overhead if they appear on websites very often. Images smaller than $32 \times 32$ pixels are highlighted in red in Fig. 9(a). One can see that they need inference times similar to all other images. Further analysis of the statistics also indicates that they make up only $\approx 5\%$ of all images by count and less than one percent of the total request time. A similar analysis was performed for the image transformation task. Relating the image resolution to the image transformation time (Fig. 9(b)) shows a linear dependency between image resolution (as the number of total pixels) and transformation time. However, this does not severely impact the overall performance, as image analysis is slower by a factor of about 10. Small images have again been highlighted; they make up $\approx 5\%$ by count and only about $\approx 4\%$ of the total time.
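The per-category shares quoted above (count share and time share of small images) can be computed from the request log in a few lines. A sketch under the assumption of a simple record format with `pixels` and `time` fields (names are ours, not from the actual implementation):

```python
def small_image_share(records, threshold=32 * 32):
    """Share of small images (below `threshold` total pixels) in the log,
    both by number of images and by the total time they consumed."""
    small = [r for r in records if r["pixels"] < threshold]
    count_share = len(small) / len(records)
    time_share = sum(r["time"] for r in small) / sum(r["time"] for r in records)
    return count_share, time_share
```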
Figure 9: Relating resolution of input images with analysis inference time (a) and image transformation time (b). Images smaller than $32 \times 32$ pixels are highlighted in red.
Figure 10: Relating requests per minute and the total request time.
Finally, we evaluated whether a high load (possibly caused by a high number of requests in a short time) causes higher total request times. To this end, each request is plotted as a point with the number of requests in that minute and its required time (Fig. 10). This does not show any strong correlation between the number of requests per minute and the total request time; possibly, the load generated by the requests was not yet high enough.
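One simple way to quantify this visual impression is a Pearson correlation coefficient over the logged (requests-per-minute, request-time) pairs; a sketch, not part of the original evaluation pipeline:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A coefficient near zero would support the reading that load and request time are uncorrelated in the logged data.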
§ 6.3 LIMITATIONS
Applying specific filters to weaken affect when looking at medical images might work differently, or not at all, depending on the user and the specific image at hand. More generally, it is impossible to find a definition of offensive content that fits all users. What is perceived as "graphic" highly depends on the perception, age, views, cultural background, and personal history of the user, which results in many different potential use cases for each individual user. Even if a clear definition were possible, modern computer vision approaches are not able to correctly recognize offensive content all of the time. Additionally, detecting objects that are known to be offensive is not enough: the context in which these objects appear, as well as the overall image composition, can completely change their meaning.
We do not have the capacity to train accurate neural networks for real use cases because there are no acceptable, publicly available datasets of offensive content. Even with a proper dataset, it is unlikely that one could train a perfect neural network that classifies all images correctly and detects offensive content every time.
Regarding the browser extension, it is difficult to even detect all the images on websites, since there are a number of different ways to integrate an image into a web page, e.g., through custom HTML elements or extensive JS usage. If images are loaded asynchronously via JS and many images change simultaneously, the extension is not able to react quickly enough to all of the changes, resulting in "unobfuscated" or "non-abstracted" images. Moreover, JS is executed in a single thread in all established web browsers, which increases the occurrence of such timing-related problems. Additionally, a very low bandwidth could slow down the processing of images. The impact would be comparatively small, however, because not much additional data is sent and the client still downloads each image only once.
§ 7 CONCLUSIONS AND FUTURE WORK
This paper presents a service-based approach to facilitate consumption of digital graphic images online. To achieve this, an automatic analysis and classification of possibly offensive content is performed using services, and, based on the results, image abstraction techniques are applied with varying levels of abstraction. This functionality is accessed and configured via a browser extension that is supported by most modern web browsers. The presented content moderation approach has various applications such as reducing affective responses during medical education, allowing less distressing browsing of social media, or enabling safer browsing for child protection.
Regarding future work, the user experience of the extension could be improved as follows. Modern object recognition algorithms are not only able to detect certain objects in images but can also locate them. With this, image abstraction techniques could be applied to segments of the image to maintain the context and abstract only the sensitive image regions. This might help the user identify whether an image was classified correctly. As an alternative, the image abstraction techniques could be applied to the complete image, and the user could be given the option to interactively reveal parts of the image using lens-based interaction metaphors.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/DQHaCvN9xd/Initial_manuscript_md/Initial_manuscript.md
## Visual-Interactive Neural Machine Translation
Category: Research

Figure 1: The main view of our neural machine translation system: (A) Document View with sentences of the document for the current filtering settings, (B) Metrics View with sentences of the filtering result highlighted, and (C) Keyphrase View with a set of rare words that may be mistranslated. The Document View initially contains all sentences automatically translated with the NMT model. After filtering with the Metrics View and Keyphrase View, a smaller selection of sentences is shown. Each entry in the Document View provides information about metrics, the correction state, and functionality for modification (on the right side next to each sentence). The Metrics View represents each sentence as one path and shows values for different metrics (e.g., correlation, coverage penalty, sentence length). Green paths correspond to sentences of the current filtering. One sentence is highlighted (yellow) in both the Metrics View and the Document View.
## Abstract
We introduce a novel visual analytics approach for analyzing, understanding, and correcting neural machine translation. Our system supports users in automatically translating documents using neural machine translation and in identifying and correcting possibly erroneous translations. User corrections can then be used to fine-tune the neural machine translation model and automatically improve the whole document. While translation results of neural machine translation can be impressive, there are still many challenges, such as over- and under-translation, domain-specific terminology, and handling long sentences, making it necessary for users to verify translation results; our system aims at supporting users in this task. Our visual analytics approach combines several visualization techniques in an interactive system. A parallel coordinates plot with multiple metrics related to translation quality can be used to find, filter, and select translations that might contain errors. An interactive beam search visualization and a graph visualization for attention weights can be used for post-editing and understanding machine-generated translations. The machine translation model is updated with user corrections to improve the translation quality of the whole document. We designed our approach for an LSTM-based translation model and extended it to also include the Transformer architecture. We show possible mistranslations for representative examples and how to use our system to deal with them. A user study revealed that many participants favor such a system over manual text-based translation, especially for translating large documents.
Index Terms: Human-centered computing-Visualization-Visualization application domains-Visual analytics; Human-centered computing-Visualization-Visualization systems and tools; Computing methodologies-Artificial intelligence-Natural language processing-Machine translation
## 1 INTRODUCTION
Machine learning and especially deep learning are popular and rapidly growing fields in many research areas. The results created with machine learning models are often impressive but sometimes still problematic. Currently, much research is performed to better understand, explain, and interact with these models. In this context, visualization and visual analytics methods are suitable and increasingly used to explore different aspects of these models. Available techniques for visual analytics in deep learning were examined by Hohman et al. [16]. While there is a large amount of work available on explainability in computer vision, less work exists for machine translation.
As it becomes increasingly important to communicate in different languages, and since information should be available to a huge range of people from different countries, many texts have to be translated. Doing this manually takes much effort. Nowadays, online translation systems like Google Translate [13] or deepL [10] support humans in translating texts. However, the translations generated this way are often not as expected, or not as someone familiar with both languages would translate them. They may also not express someone's translation style or use the correct terminology of a specific domain or occasion. Often, more background knowledge about the text is required to translate documents appropriately.
With the introduction of deep learning methods, the translation quality of machine translation models has improved considerably in the last years. However, there are still difficulties that need to be addressed. Common problems of neural machine translation (NMT) models are, for instance, over- and under-translation [35], where words are translated repeatedly or not at all. Handling rare words [20], which might occur in specific documents, and handling long sentences are also issues. Domain adaptation [20] is another challenge; especially documents from specific domains such as medicine, law, or science require high-quality translations [7]. As many NMT models are trained on general data sets, their translation performance is worse for domain-specific texts.

Figure 2: The detailed view for a selected sentence consists of the Sentence View (A), the Attention View (B), and the Beam Search View (C). The Sentence View allows text-based modifications of the translation. The Attention View shows the attention weights (represented by the lines connecting source words with their translation) for the translation. The Beam Search View provides an interactive visualization that shows different translation possibilities and allows exploration and correction of the translation. All three areas are linked.
If high-quality translations for large texts are required, it is insufficient to use machine translation models alone. These models are computationally efficient and able to translate large documents with low time effort, but they may create erroneous or inappropriate translations. Humans are very slow compared to these models, but they can detect and correct mistranslations when familiar with the languages and the domain terminology. In a visual analytics system, both of these capabilities can be combined. Such a system should provide the translations from an NMT model and possibilities for users to visually explore translation results to find mistranslated sentences, correct them, and steer the machine learning model.
We have developed a visual analytics approach to reach the goals outlined above. First, our system performs an automatic translation of a whole, possibly large, document and shows the result in the Document View (Figure 1). Users can then explore and modify the document in different views [28] (Figure 2) to improve translations and use these corrections to fine-tune the NMT model. We support different NMT architectures and use both an LSTM-based and a Transformer architecture.
So far, visual analytics systems for deep learning have mostly been available for computer vision and some text-related areas, focusing on smaller parts of machine translation [22, 27] or intended for domain experts to gain insight into the models or to debug them [32, 33]. This work contributes to visualization research by introducing the application domain of NMT using a user-oriented visual analytics approach. In our system, we employ different visualization techniques adapted for usage with NMT. Our parallel coordinates plot (Figure 1 (B)) supports the visualization of different metrics related to text quality. The interaction techniques in our graph-based visualization for attention (Figure 2 (B)) and tree-based visualization for beam search (Figure 2 (C)) are specifically designed for text exploration and modification. They have a strong coupling to the underlying model. Furthermore, our system has a fast feedback loop and allows interaction in real time. We demonstrate our system's features in a video and will provide the source code of our system with the published paper.
## 2 RELATED WORK
This section first discusses visualization and visual analytics approaches for language translation in general and then visual analytics of deep learning for text. Afterward, we provide an overview of work that combines both areas in the context of NMT.
Many visualization techniques and visual analytics systems exist for text; see Kucher and Kerren [21] for an overview. However, there is little work on exploring and modifying translation results. An interactive system to explore and correct translations was introduced by Albrecht et al. [1]. While the translation was created by machine translation, their system did not use deep learning. Collins et al. [9] used lattice structures with uncertainty to visualize machine translation. They created a lattice structure from beam search in which the path for the best translation result is highlighted and can be corrected. We also use a visualization for beam search, but ours is based on a tree structure.
Recently, much research has been done on visualizing deep learning models to understand them better. Multiple surveys [6, 12, 16, 23, 45] are available that summarize existing visual analytics systems. It is noticeable that not much work exists related to text-based domains. One of the few examples is RNN-Vis [24], a visual analytics system designed to understand and compare models for natural language processing by considering hidden state units. Karpathy et al. [18] explore the predictions of Long Short-Term Memory (LSTM) models by visualizing activations on text. Heatmaps are used by Hermann et al. [15] to visualize attention for machine-reading tasks. To explore the training process and to better understand how the network is learning, RNNbow [4] can be used to visualize the gradient flow during backpropagation training in Recurrent Neural Networks (RNNs).
While the previous systems support the analysis of deep learning models for text domains in general, approaches exist to specifically explore and understand NMT. The first to introduce visualizations for attention were Bahdanau et al. [2]; they showed the contribution of source words to translated words within a sentence using an attention weight matrix. Later, Rikters et al. [27] introduced multiple ways to visualize attention and implemented exploration of a whole document. They visualize attention weights with a matrix and a graph-based visualization connecting source words and translated words by lines whose thickness represents the attention weight. Bar charts give an overview of a whole document for multiple attention-based metrics that are supposed to correlate with the translation quality. Interactive ordering of these metrics and sentence selection are possible. However, for large documents, it is difficult to compare the different metrics, as each bar chart is horizontally too large to be shown entirely on a display. The only connection between different bar charts is that the bars for the currently selected sentence are highlighted. Our system also uses such a metrics approach, but instead of bar charts, a parallel coordinates plot was chosen for better scalability, interaction, and filtering.
An interactive visualization approach for beam search is provided by Lee et al. [22]. The interaction techniques supported by their tree structure are quite limited. It is possible to expand the structure and to change attention weights. However, it is not possible to add unknown words, and no sub-word units are considered. Furthermore, the exploration is limited to single sentences instead of a whole document.
With LSTMVis, Strobelt et al. [33] introduced a system to explore LSTM networks by showing hidden state dynamics. Among other application areas, their approach is also suitable for NMT. While our approach is rather intended for end-users, LSTMVis has the goal of debugging models by researchers and machine learning developers. With Seq2Seq-Vis, Strobelt et al. [32] present a system that uses an attention view similar to ours, and they also provide an interactive beam search visualization. However, their system is designed to translate single sentences, and no model adaption is possible for improved translation quality. Their system was designed for debugging and for gaining insight into the models.
Since there are different architectures available for generating translations [43], specific visualization approaches may be required. Often, LSTM-based architectures are used. Recently, the Transformer architecture [36] gained popularity; Vig [37, 38] visually explores its self-attention layers, and Rikters et al. [26] extended their previous document-debugging approach to Transformer-based systems.
All these systems provide different, possibly interactive, visualizations. However, their goal is rather to debug NMT models instead of supporting users in translating entire documents, or they are limited to small aspects of the model. Additionally, they are usually designed for one specific translation model. None of these approaches provide extended interaction techniques for beam search or interactive approaches to iteratively improve the translation quality of a whole document.
## 3 VISUAL ANALYTICS APPROACH
Our visual analytics approach allows the automatic translation, exploration, and correction of documents. Its components can be split into multiple parts. First, a document is automatically translated from one language into another one, then mistranslated sentences in the document are identified by users, and individual sentences can be explored and corrected. Finally, a model can be fine-tuned and the document retranslated.
Our approach has a strong link to machine data processing and follows the visual analytics process presented by Keim et al. [19]. We use visualizations for different aspects of NMT models, and users can interact with the provided information.
### 3.1 Requirements
For the development of our system, we followed the nested model by Munzner [25]. The main focus was on the outer parts of the model, including identifying domain issues, feature implementation design, and visualization and interaction implementation. Additionally, we used a similar process as Sedlmair et al. [29], especially focusing on the core phases. Design decisions were made in close cooperation with deep learning and NMT experts, who are also co-authors of this paper. The visual analytics system was implemented in a formative process that included these experts. Our system went through an iterative development that included multiple meetings with our domain experts. Together, we identified the requirements listed in Table 1. After implementing the basic prototype of the system, we demonstrated it to further domain experts. At a later stage, we performed a small user study with experts for visualization and machine translation. For our current prototype, we added recommended functionality from these experts.
Table 1: Requirements for our visual analytics system and their implementations in our approach.
<table><tr><td>$\mathbf{R1}$</td><td>Automatic translation - A document is translated automatically by an NMT model.</td></tr><tr><td>$\mathbf{R2}$</td><td>Overview - The user can see the whole document as a list of all source sentences and their translations (Figure 1 (A)). Additionally, an overview of the translation quality is provided in the Metrics View, which reveals statistics about different metrics encoded as a parallel coordinates plot (Figure 1 (B)) showing an overall quality distribution.</td></tr><tr><td>$\mathbf{R3}$</td><td>Find, filter, and select relevant sentences - Interaction in the parallel coordinates allows filtering according to different metrics and selecting specific sentences. It is also possible to select one sentence and order the other sentences of the document by similarity to verify whether similar sentences contain similar errors. Additionally, our Keyphrase View (Figure 1 (C)) supports selecting sentences containing specific keywords that might be domain-specific and rarely used in general documents.</td></tr><tr><td>$\mathbf{R4}$</td><td>Visualize and modify sentences - For each sentence, a beam search and attention visualization (Figure 2) can be used to interactively explore and adapt the translation result in order to correct erroneous sentences and explore how a translation failed. It is also possible to explore alternative translations.</td></tr><tr><td>$\mathbf{R5}$</td><td>Update model and translation - The model can be fine-tuned using the user inputs from translation corrections; this is especially useful for domain adaptation. Afterward, the document is retranslated with the updated model in order to improve the translation result (the result is visualized similarly to Figure 9).</td></tr><tr><td>$\mathbf{R6}$</td><td>Generalizability and extensibility - While we initially designed our visualization system for one translation model, we soon noticed that our approach should handle data from other translation models as well. Therefore, our approach should be easily adaptable for new models to cope with the dynamic development of new deep learning architectures. Our general translation and correction process is kept quite agnostic so that it can be applied to a variety of models. Only model-specific visualizations have limitations and need to be adapted or exchanged when using a different translation architecture.</td></tr><tr><td>$\mathbf{R7}$</td><td>Target group - The target group for our system is quite broad and includes professional translators or students who need to translate documents. However, the system should also be usable by other people interested in correcting and possibly better understanding the results of automated translation.</td></tr></table>
### 3.2 Neural Machine Translation
The goal of machine translation is to translate a sequence of words from a source language into a sequence of words in a target language. Different approaches exist to achieve this goal [34,44].
Usually, neural networks for machine translation are based on an encoder-decoder architecture. The encoder is responsible for transforming the source sequence into a fixed-length representation known as the context vector. Based on the context vector, the decoder generates an output sequence, producing for each element a probability distribution over the target vocabulary. These probabilities are then used to determine the target sequence; a common method to achieve this is beam search decoding [14].
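Beam search keeps the `k` highest-scoring partial translations at each decoding step instead of committing to the single most probable token. A minimal sketch; the `step_fn` interface is our own simplification of an NMT decoder's softmax output, not the system's actual decoder:

```python
import math

def beam_search(step_fn, start_token, end_token, beam_width=3, max_len=10):
    """Keep the `beam_width` highest-scoring partial sequences per step.
    `step_fn(prefix)` returns a dict mapping each candidate next token to
    its probability given the prefix."""
    beams = [([start_token], 0.0)]        # (token sequence, log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, prob in step_fn(seq).items():
                candidates.append((seq + [token], score + math.log(prob)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            # hypotheses that emitted the end token leave the active beam
            target = finished if seq[-1] == end_token else beams
            target.append((seq, score))
        if not beams:                     # every surviving hypothesis ended
            break
    return max(finished + beams, key=lambda c: c[1])
```

The tree of hypotheses this search expands and prunes is exactly the structure the Beam Search View visualizes.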
Although different NMT models vary in their architecture, the previously described encoder-decoder design should apply to a wide range of architectures and new approaches that may be developed in the future (R6). In this work, we explored an LSTM architecture with attention and extended our approach to include the Transformer architecture, thus verifying its ability to generalize.
One of the first neural network architectures for machine translation consists of two RNNs with LSTM units [5]. To handle long sentences, the attention mechanism for NMT [2] was introduced. It allows sequence-to-sequence models to attend to different parts of the source sequence while predicting the next element of the target sequence by giving the decoder access to the encoder's weighted hidden states. During decoding, the hidden states of the encoder, together with the hidden state of the decoder for the current step, are used to compute the attention scores. Finally, the context vector for the current step is computed as a sum of the encoder hidden states, weighted by the attention scores. The attention weights can be easily visualized and used to explain why a neural network model predicted a certain output. Furthermore, attention weights can be understood as a soft alignment between source and target sequences: for each translated word, the weight distribution over the source sequence signifies which source words were most important for predicting this target word. The Transformer architecture was recently introduced by Vaswani et al. [36] and gained much popularity. It uses a more complex attention mechanism with multi-head attention layers; in particular, self-attention plays an important role in the translation process. We verify its applicability to our approach and visualize only the part of the attention information that shows an alignment between source and target sentences comparable to the LSTM model.
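The attention step described above (scores from encoder and decoder states, softmax normalization, context vector as a weighted sum) can be sketched as follows. This sketch uses dot-product scoring for brevity; the LSTM model in the paper uses a learned scoring function, so treat the scoring choice as an assumption:

```python
import math

def attention_step(decoder_state, encoder_states):
    """One attention step: score each encoder hidden state against the
    current decoder state, softmax the scores into weights, and build the
    context vector as the weighted sum of encoder states."""
    scores = [sum(d * h for d, h in zip(decoder_state, state))
              for state in encoder_states]
    peak = max(scores)                               # for numerical stability
    exps = [math.exp(s - peak) for s in scores]
    weights = [e / sum(exps) for e in exps]          # attention weights sum to 1
    dim = len(encoder_states[0])
    context = [sum(w * state[i] for w, state in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context
```

The `weights` list is the per-source-word distribution that the Attention View draws as lines between source and target words.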
### 3.3 Exploration of Documents
After uploading a document to our system, it is translated by an NMT model (R1). The main view of our approach then shows information about the whole document (R2). This includes a list of all sentences in the Document View (Figure 1 (A)) and an overview of the translation quality in the Metrics View (Figure 1 (B)). Using the Metrics View and Keyphrase View (Figure 1 (C)), sentences can be filtered to detect possible mistranslated sentences that can be flagged by the user (R3). Once a mistranslated sentence is found, it is also possible to filter for sentences containing similar errors (R3).
## Metrics View
In the Metrics View, a parallel coordinates plot (Figure 1 (B)) is used to detect possible mistranslated sentences by filtering sentences according to different metrics (R3). For instance, it is possible to find sentences that have low translation confidence.
Multiple metrics exist that are relevant to identify translations with low quality; we use the following metrics in our approach:
- Confidence: A metric that considers attention distribution for input and output tokens; it was suggested by Rikters et al. [27]. Here, a higher value is usually better.
- Coverage Penalty: This metric by Wu et al. [42] can be used to detect sentences where words did not get enough attention. Here, a lower value is usually better.
- Sentence length: The sentence length (the number of words in a source sentence) can be used to filter very short or long sentences. For example, long sentences might be more likely to contain errors.
- Keyphrases: This metric can be used to filter for sentences containing domain-specific words. As these words are rare in the training data, the initial translation of sentences containing them is likely erroneous. The values used for this metric are the number of occurrences of keyphrases in a sentence weighted by the frequency of the keyphrases in the whole document.
- Sentence similarity: Optionally, for a given sentence, the similarity to all other sentences can be determined using cosine similarity. This helps to find sentences with similar errors to a detected mistranslated sentence.
- Document index: The document index allows the user to sort sentences according to their original order in the document, which can be especially important for correcting translations where the context of sentences is relevant. Furthermore, this metric might also show trends like consecutive sentences with low confidence.
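The keyphrase and similarity metrics above can be sketched in a few lines. This is an illustrative reading (bag-of-words token counts, simple document-frequency weighting), not the paper's exact implementation:

```python
import math
from collections import Counter

def keyphrase_score(sentence_tokens, keyphrase_counts):
    """Occurrences of keyphrases in a sentence, weighted by each
    keyphrase's frequency in the whole document (one plausible
    reading of the metric; the paper gives no exact formula)."""
    return sum(keyphrase_counts[tok] for tok in sentence_tokens
               if tok in keyphrase_counts)

def cosine_similarity(a_tokens, b_tokens):
    """Cosine similarity between two sentences, here over plain
    bag-of-words count vectors."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

In the system, identical sentences would score 1.0 under this similarity, and sentences sharing no vocabulary score 0.0, which is what makes the metric usable as a ranking axis.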
In contrast to Rikters et al. [27], who use bar charts to visualize different metrics, we chose a parallel coordinates plot [17]. Each sentence is mapped to one line in such a plot, so different metrics can be compared easily. These plots are well suited to providing an overview of multiple metrics and to detecting outliers and trends. Interactions with the metrics, such as highlighting lines or choosing filtering ranges, are supported. Sentences filtered for both low confidence and high coverage penalty can be expected to be poorly translated more often than sentences falling into only one of these categories.
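Brushing on a parallel coordinates plot amounts to a conjunction of range filters over the metric axes. A minimal sketch (the metric names are illustrative):

```python
def brush_filter(sentences, ranges):
    """Keep sentences whose metric values fall inside every brushed
    range -- the conjunction that brushing on a parallel coordinates
    plot expresses. `sentences` is a list of dicts of metric values;
    `ranges` maps metric name to an (lo, hi) interval."""
    return [s for s in sentences
            if all(lo <= s[m] <= hi for m, (lo, hi) in ranges.items())]

# e.g. sentences with low confidence AND high coverage penalty
suspects = brush_filter(
    [{"confidence": 0.2, "coverage_penalty": 3.1},
     {"confidence": 0.9, "coverage_penalty": 0.4}],
    {"confidence": (0.0, 0.5), "coverage_penalty": (2.0, 10.0)})
```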
## Keyphrase View
It is possible to search for sentences according to keyphrases by selecting them in the Keyphrase View (Figure 1 (C)) (R3). This can be visualized as shown in Figure 4. Keyphrases are domain-specific words that rarely occur in the training data used for our model. As the model lacks knowledge of how to deal with these words, it is important to verify whether the respective sentences were translated correctly. In addition to automatically determined keyphrases, users can manually specify further keyphrases for sentence filtering.
## Document View
A list of all the source sentences in a document and a list of their translations are shown in the Document View (Figure 1 (A)) (R2). Each entry in this list can be marked as correct or flagged (Figure 4) for later correction. A small histogram shows an overview of the previously mentioned metrics. If a sentence is modified, either through user-correction or retranslation by the fine-tuned model, changes in the sentences are highlighted (Figure 9). Both the Metrics View and the Keyphrase View are connected via brushing and linking [39] to allow filtering for sentences that are likely to be mistranslated and should be examined and possibly corrected. Additionally, sentences can be sorted into a list by similarity to a user-selected reference sentence. In this list, sentences can be selected for further exploration and correction in more detailed sentence-based views.
### 3.4 Exploration and Correction of Sentences
After filtering and selection, a sentence can be analyzed further with the Sentence, Attention, and Beam Search Views (Figure 2) and subsequently corrected (R4). These views are shown simultaneously to allow interactive exploration and modification of translations.
Note that, on the sentence level, we use subword units to handle the problem of rare words, which occur frequently in domain-specific documents, and to avoid unknown words. We use the Byte Pair Encoding (BPE) method proposed by Sennrich et al. [31], which compresses text by recursively joining frequent pairs of characters into new subwords. This means that, instead of choosing whole words to build the source and target vocabulary, words are split into subword units consisting of possibly multiple characters. This method reduces model size, complexity, and training time. Additionally, the model can handle unknown words by splitting them into their subword units. As these subword units are known beforehand, they do not require the introduction of an "unknown" token for translation. Thus, we can adapt the NMT model to any new domain, including those with vocabulary not seen at training time.
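The merge-learning step of BPE can be sketched as follows, a simplified word-level variant of Sennrich et al.'s scheme: count adjacent symbol pairs weighted by word frequency, then greedily merge the most frequent pair.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge operations: repeatedly join the most frequent
    adjacent symbol pair across the word list (simplified sketch)."""
    vocab = Counter(tuple(w) for w in words)  # word as symbol tuple
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for sym, freq in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # apply the merge to every word in the vocabulary
        new_vocab = Counter()
        for sym, freq in vocab.items():
            out, i = [], 0
            while i < len(sym):
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1]); i += 2
                else:
                    out.append(sym[i]); i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges
```

Applying the learned merges to an unseen word then splits it into known subword units instead of mapping it to an "unknown" token.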

Figure 3: Attention visualization: (top) when hovering over a source word (here: 'verarbeiten'), the translated words it influences are highlighted; (bottom) when hovering over a translated word (here: 'process'), the source words influencing the translation are highlighted according to attention weights.
## Sentence View
Similar to common translation systems, the Sentence View (Figure 2 (A)) shows the source sentence and the current translation. It is possible to manually modify the translation, which in turn updates the content in the other sentence-based views. After adding a new word in the text area, the translation with the highest score is used for the remainder of the sentence. This supports a quick text-based modification of a translation without explicit use of visualizations.
## Attention View
The Attention View depends on the underlying NMT model. It is intended to visualize the relationship between words of the source sentence and the current translation as a weighted graph (Figure 2 (B)); such a technique was also used by Strobelt et al. [32]. Both source and translated words are represented by nodes; links between such words show the attention weights, encoded by the thickness of the connecting lines (we use a predefined threshold to hide lines for very low attention). These weights correlate with the importance of source words for the translated words. Hovering over a source word highlights the lines connecting it to translated words. In addition, the translated words are highlighted by transparency according to the attention weights (Figure 3 top). While this shows how a source word contributes to the translation, it is also possible to show, for translated words, how source words contribute to the translation (Figure 3 bottom). This interactive visualization supports users in understanding how translations are generated from the source sentence words. On the one hand, such a visualization helps gain insight into the NMT model; on the other hand, it helps detect issues in generated translations. The links between source sentence and translation can be explored to identify anomalies such as under- or over-translation. Missing attention weights can be an indication of under-translation, and links to multiple translated words of over-translation. In our case study in Section 4, examples of these cases are presented. While this technique specifically employs information of the attention-based LSTM model, we use it in an adapted form for the Transformer architecture (see Section 4.4). A visualization more tailored to Transformers, also including self-attention and attention scores from multiple decoder layers, could provide additional information.
Further models may need different visualizations for a generalized use of our approach, employing model-specific information.
## Beam Search View
While the Attention View can be used to identify positions with mistranslations, the Beam Search View supports users in interactively modifying and correcting translations. The Beam Search View visualizes multiple translations created by the beam search decoding as a hierarchical structure (see Figure 2 (C)). This interactive visualization can be used for post-editing the translations.
The simplest way of predicting a target sequence is greedy decoding, where at every time step, the token with the highest output probability is chosen as the next predicted token and fed to the decoder in the next step. This is an efficient, straightforward way of generating an output sequence. However, another translation may be better overall, despite having lower probabilities for the first words. Beam search decoding [14] is a compromise between exhaustive search and greedy decoding, often used for generating the final translation. A fixed number $k$ of hypotheses is considered at each time step. For each hypothesis considered, the NMT model outputs a probability distribution over the target vocabulary for the next token. These hypotheses are sorted by the probability of the latest token, and up to $k$ hypotheses remain in the beam. Hypotheses ending with the End-of-Sequence (EOS) token are filtered out and put into the result set. Once $k$ hypotheses are in the result set, the beam search stops, and the final hypotheses are ranked according to a scoring function that depends on the attention weights and the sentence length.
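The decoding loop can be sketched as follows; `step_logprobs` is a hypothetical stand-in for the NMT model's next-token distribution, and the attention- and length-based final scoring used in the paper is omitted:

```python
import math

def beam_search(step_logprobs, k=3, eos="</s>"):
    """Minimal beam search sketch: `step_logprobs(prefix)` returns a
    {token: log-probability} dict for the next token given a prefix.
    Keeps the k best hypotheses per step; finished hypotheses (ending
    in EOS) move to the result set until k of them are collected."""
    beams = [([], 0.0)]  # (token list, cumulative log-probability)
    finished = []
    while beams and len(finished) < k:
        candidates = []
        for prefix, score in beams:
            for tok, lp in step_logprobs(prefix).items():
                candidates.append((prefix + [tok], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for hyp, score in candidates[:k]:
            (finished if hyp[-1] == eos else beams).append((hyp, score))
    return sorted(finished, key=lambda c: c[1], reverse=True)
```

With a toy two-token vocabulary this returns the hypotheses ranked by cumulative log-probability, which is the ordering the Beam Search View presents before the paper's scoring function reranks them.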
For visualization, we use a similar approach to Strobelt et al. [32] and Lee et al. [22]: a tree structure reflects the inherently hierarchical nature of the beam search decoding. This way, translation hypotheses starting with the same prefixes are merged into one branch of this hierarchical structure. The root node of each translation is associated with a Start-of-Sequence (SOS) token and all leaf nodes with an End-of-Sequence (EOS) token. Compared to visualizing a list of different suggested translations, showing a tree is more compact, and it is easier to recognize where the commonalities of different translation variants lie.
Each term of the translation is visualized by a circle that represents the actual node and a corresponding label. The color of a circle is mapped to the word's output probability, which can be seen as the uncertainty of the word prediction. This supports users in identifying areas with a lower probability that might require further exploration. In our visualization, we differentiate between nodes that represent subwords and whole words. Continuous lines connect subwords, and their nodes are placed closer together to form a unit. In contrast, connections to whole words are represented by dashed lines.
The beam search visualization can be used to navigate within a translation and edit it (Figures 7 and 8). The interaction can be performed either with the mouse or the keyboard; the latter is more efficient for fast post-editing. The view supports standard panning-and-zooming techniques, which are especially needed to explore long sentences, as they do not fit common displays. For navigation within the tree, arrow keys can be used to move through a sentence, or nodes can be selected with the mouse cursor. If the translation of the current node's child node is not satisfactory, the node can be expanded to show suggestions for correction. If the user selects a suggested word, the beam search runs with a lexical prefix constraint, and the tree structure gets updated. If the suggested words are not suitable, a custom correction can be performed by typing an arbitrary word that fits better. The number of suggested translations is initially set to three and can be increased by adapting the beam size. Increasing this value may create better translations and provides more alternative translations. However, the higher the value, the more information has to be shown in the visualization. By hovering over and selecting elements in this view, corresponding elements of the Attention View and Sentence View are shown for reference.
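The lexical prefix constraint mentioned above can be sketched as a wrapper around the model's next-token distribution: decoding is forced to follow the user-accepted tokens, and free search only resumes beyond them (names are illustrative):

```python
def constrain(step_logprobs, forced):
    """Wrap a next-token distribution so decoding must follow the
    `forced` tokens (the user-accepted prefix) before normal search
    resumes -- a sketch of the lexical prefix constraint, not the
    paper's implementation."""
    def step(prefix):
        if len(prefix) < len(forced):
            # only the forced token is admissible at this position
            return {forced[len(prefix)]: 0.0}
        return step_logprobs(prefix)
    return step
```

Running beam search with such a wrapped step function regenerates the tree so that every hypothesis starts with the corrected prefix, which is the update the view performs after a suggestion or custom correction is selected.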

Figure 4: Main view of the system: the Document View shows some flagged sentences for correction. Additionally, the keyphrase filter (top right) is active: all sentences containing the keyphrase 'MÜ' are shown in the Metrics and Document Views. It is visible that 'MÜ' is never correctly translated to 'MT'.
### 3.5 Model Fine-tuning and Retranslation
After correcting the translation of multiple sentences, the user corrections can be used to fine-tune the NMT model and automatically improve the translation of the not yet verified sentences (R5). This approach can be applied repeatedly to improve the document's translation quality, especially for domain-specific texts.
Documents often belong to a specific domain, e.g., legal, medical, or scientific. Each domain has specific terminology, and one word may even refer to different concepts in different domains. As such, the ability of NMT models to handle different types of domains is an important research topic. Domain adaptation refers to techniques allowing NMT models trained on general training data, also called out-of-domain data, to adapt to domain-specific documents, called in-domain data. This is useful since there may be an abundant amount of general training data, but domain-specific data may be rare. Since NMT models need a large amount of training data to achieve good translation quality, the out-of-domain data can be used to train a baseline model. The model can then be adapted using the in-domain data (R5), which typically contains a smaller number of sentences: in our system, we use the user-corrected sentences. This mitigates the problem of training an NMT model in a low-resource scenario where little data exists for a given domain. In our approach, we continue training on the in-domain data in a reduced way by freezing certain model weights (for the LSTM-based model, both the decoder and the LSTM layers of the encoder are trained; for the Transformer, only the decoder is trained).
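The choice of which parameter groups stay trainable can be sketched as a simple name filter over the model's parameters; the group names below are illustrative, not taken from the paper's code:

```python
def trainable_parameters(named_parameters, architecture):
    """Select parameter groups that remain trainable for in-domain
    fine-tuning (sketch). For the LSTM model, the decoder and the
    encoder's LSTM layers are trained; for the Transformer, only
    the decoder is trained. All other weights are frozen."""
    prefixes = {
        "lstm": ("decoder.", "encoder.lstm."),
        "transformer": ("decoder.",),
    }[architecture]
    return {name: p for name, p in named_parameters.items()
            if name.startswith(prefixes)}
```

In a framework such as PyTorch, the returned subset would be the only parameters handed to the optimizer, so the frozen weights keep their out-of-domain values while the corrected sentences adapt the rest.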
## 4 CASE STUDY
As a typical use case, we take the German Wikipedia article for machine translation (Maschinelle Übersetzung) [41] as a document for translation into English. In the following, we show how to use our system to improve the translation quality of the document. Please see our accompanying video for a demonstration with the Transformer model. The examples in the following were created with both the LSTM and Transformer models. We trained our models on a general data set: the German-to-English data set from the 2016 ACL Conference on Machine Translation (WMT'16) [3] shared news translation task. This is a popular data set for NMT, used, for instance, by Denkowski and Neubig [11] and Sennrich et al. [30].

Figure 5: Example of over-translation: 'Examples' is placed twice as the translation for the German word 'Beispiele'. The Beam Search View (right) shows possible alternative translations. However, only after increasing the beam size to four does the expected translation appear.
### 4.1 Exploration of Documents
After uploading a document (R1), we look at the parallel coordinates plot (R2) for our initial translations and at the list of keyphrases in order to detect possible mistranslations (R3). In the Keyphrase View, we notice the domain-specific term 'MÜ' occurring very often. This term is the German abbreviation for 'machine translation' and should therefore be translated as 'MT'. However, none of the translations use the correct term (Figure 4). Additionally, one could select and verify sentences with low confidence or with a high coverage penalty. Here, we especially notice the under-translation of some sentences. After verifying a translation in the Document View, users can decide whether it is correct (R2). If they do not agree with the translation, they can set a flag (Figure 4) to modify it later or switch to the sentence-based views to correct it (R4).
### 4.2 Exploration and Correction of Sentences
After flagging multiple sentences (Figure 4), or upon deciding to explore or modify a sentence, more detailed views can be shown for each sentence to explore and improve its translation interactively (Figure 2) (R4).
Over-translation is a common issue of NMT [20]. In the Attention View, it is possible to see what went wrong by identifying where the attention weights connect the source and destination words.
For both models, we notice such cases for very short sentences. Figure 5 shows, for the German heading 'Beispiele' (en: 'Examples'), a translation that uses the translated word multiple times. The suggested alternatives also use this term more than once. Only after increasing the beam size to four does the correct translation become visible, which can then be selected as the correction.
More often, only parts of a sentence are translated, and important words of our document are not considered. Such under-translation is shown in Figure 6. In the first example, only the beginning of the sentence is translated, and it is visible that the remaining nodes receive almost zero attention. In the second example, the German term 'zweisprachigen' (en: 'bilingual') is skipped in the translation. While this part of the translation is missing, the translated sentence is still correct and fluent; it might be difficult to detect such an error without attention visualizations.
An example of a wrong translation containing a keyphrase is visualized in Figure 7. It also shows that, using the beam search visualization, an alternative translation can be selected interactively starting from the position where the first error occurs. The beam search provides possible alternative translations, but the user can also manually type what they believe should be the next term. Here, we enter the correct translation manually. The beam search visualization automatically updates in real time according to the correction.

Figure 6: Example of under-translation shown in the Attention View: (top) for the LSTM model, the end of the sentence is not translated; attention weights are very low for this part of the sentence. (bottom) For the Transformer architecture, the term 'zweisprachigen' (en: 'bilingual') is not translated; attention weights are very low for this term.

Figure 7: Example of a mistranslated sentence containing the keyphrase 'MÜ' shown as beam search visualization: (top) suggested translation, suggested alternatives and custom correction; (bottom) updated translation tree for corrected keyword with new suggestions for continuing the sentence after the custom change.
Finally, it is also possible to change sentences without mistakes. Sometimes, sentences are correctly translated, but they use different words or sentence structures than the user would prefer for the context of a sentence or their own style (Figure 8). Again, it is possible to explore and select alternative words or sentences with the Beam Search View. If we wanted to start the sentence with a different word, an alternative could be selected, and the remaining sentence would get updated accordingly.
After correcting and accepting multiple translation corrections, the Document View shows how a translation was changed (Figure 9).
### 4.3 Model Fine-tuning and Retranslation
After users have corrected multiple sentences, they can choose to retrain the current model for the not yet accepted sentences (R5). The model is then fine-tuned using the sentences corrected by the user, usually a small number. Afterward, the system retranslates the uncorrected sentences to improve translation quality, as the model has adapted to the corrected sentences. Since our document contains the keyphrase 'MÜ' 29 times, always wrongly translated, we retrained our model after correcting only a few (fewer than five) of these terms to 'MT'. After retranslation, the Document View shows how the translations differ from before. For both the LSTM and the Transformer model, all or almost all occurrences of 'MÜ' are now correctly translated. The user can look at the changes and accept translations or continue with iteratively improving sentences and fine-tuning the model.

Figure 8: A correctly translated sentence is changed to another correct translation. 'SOS' is selected to show alternative beginnings for the sentence. After choosing an alternative, the remaining sentence is updated to another correct translation.
### 4.4 Architecture-specific Observations
We initially designed our approach for use with an LSTM-based model with an attention mechanism. Since other architectures exist to translate documents, we also adapted it and tested its usefulness for the current state-of-the-art Transformer architecture [36] (R6). This architecture is also attention-based, and we analyzed how well it fits our interactive visualization approach. The general workflow of our system can be used in the same way as with the model we initially developed it for: the Document and Metrics Views can be used to identify sentences for further investigation, and sentences can be updated using the Sentence and Beam Search Views. The main difference of the Transformer model with respect to our approach is its attention mechanism, which affects the Attention View and some calculated metric values.
The Transformer architecture uses multiple layers with multiple self-attention heads instead of just attention between encoder and decoder. There are approaches for the visualization of this more complex attention mechanism [37, 38]. The attention values for Transformers could, for example, show different linguistic characteristics for different attention heads [8]. However, including this in our system would make our approach more complex and less useful for end users (R7) with little knowledge about this architecture. As a simple workaround to apply our visualization, we discard the self-attention and only use the decoder attention. We explored the influence of decoder attention values from different layers, averaged across all attention heads. Similar to Rikters et al. [26], we noticed that averaging attention from all layers is not meaningful since almost all source words are connected to all target words. Using one of the first layers showed similar results. For the final layer, a better alignment could be seen; however, the last token of the source word received too much attention compared to other words. Instead, using the second-to-last layer showed an alignment between source and target words similar to that available for the LSTM model. Therefore, we use this layer as a compromise in our Attention View and for the calculation of metric values.
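The reduction from multi-layer, multi-head Transformer cross-attention to a single source-target alignment matrix can be sketched as follows, assuming the attention tensor is stacked as [layers, heads, target length, source length]:

```python
import numpy as np

def alignment_matrix(cross_attention, layer=-2):
    """Collapse Transformer decoder cross-attention to one alignment
    matrix: pick a single layer (second-to-last by default, as chosen
    above) and average across its attention heads. `cross_attention`
    is assumed to have shape [layers, heads, target_len, source_len]."""
    return np.asarray(cross_attention)[layer].mean(axis=0)
```

The resulting [target_len, source_len] matrix plays the same role as the LSTM model's single attention distribution, so the Attention View and the attention-based metrics can consume it unchanged.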
<table><tr><td>Source</td><td>Translation</td></tr><tr><td>#23 Der Stand der MÜ im Jahr 2010 wurde von vielen Menschen als unbefriedigend bewertet .</td><td>Many people have been considered unsatisfactory in assessed the context state of the MU MT in 2010 as unsatisfactory .</td></tr><tr><td>#30 Der Bedarf an MÜ-Anwendungen steigt weiter :</td><td>The need for MUs MT applications continues to rise :✓</td></tr><tr><td>#46 Dies ist die älteste und einfachste MÜ-Methode , die beispielsweise auch obigem Russisch-Englisch-System zugrunde lag</td><td>This is the oldest and simplest MT method that ✓ underlies, which was based, for example, on the obigem Russian-English system mentioned above</td></tr><tr><td>#48 Die Transfer-Methode Ist die klassische MÜ-Methode mit drei Schritten :</td><td>The transfer method is the classic MU method MT ✓ method with three steps :</td></tr><tr><td>61 Beispielbasierte MÜ</td><td>Examples based MT✓</td></tr></table>
Figure 9: Document View showing corrected translations and changes to the initial machine-generated translations.
Since there are different approaches and architectures developed for NMT, we could incorporate them as well (R6). Some might provide better support in gaining insights into the model and offer different visualization and interaction capabilities. For others, new ways for visualization will have to be investigated.
## 5 User Study
We conducted an early user study during the development of our approach to evaluate our system's concept. We used a prototype with an LSTM translation model. The system had the same views as described before but limited features. A group of anonymous visualization and machine learning experts was invited to test our system online for general aspects related to visualization, interaction, and usefulness. Our goal was to make sure that we considered aspects relevant from both the visualization and the machine translation perspective and to improve our approach. The user study was questionnaire-based to evaluate the effectiveness of the system, the understandability of visualizations, and the usability of interaction techniques. A 7-point Likert scale was used. In this study, the German Wikipedia article for autonomous driving (Autonomes Fahren) [40] was available to all participants. This allowed the participants to explore the phenomena we showed previously. The participants claimed to have good English (mean $= 5.1$, std. dev. $= 0.8$) and very good German (mean $= 6.2$, std. dev. $= 1.7$) knowledge. While the visualization experts claimed to have rather low knowledge about machine learning (mean: 2.5), the machine learning experts similarly indicated lower knowledge of visualization (mean: 3).
First, participants were introduced to the system with a short overview of the features. Then, they could explore the system freely with no time restriction. Afterward, they were asked to participate in a survey regarding the usefulness of our system and its design choices. Additionally, there were free-text sections for further feedback. We recruited 11 voluntary participants from our university (six experts on visualization and five on language processing).
Table 2: Ratings from our user study for each evaluated view on a 7-point Likert scale; mean and standard deviation values are provided.
<table><tr><td>View</td><td>Effectiveness</td><td>Visualization</td><td>Interaction</td></tr><tr><td>Metrics View</td><td>5.9 (1.1)</td><td>6.8 (0.4)</td><td>6.1 (0.7)</td></tr><tr><td>Keyphrase View</td><td>4.4 (1.6)</td><td>6.5 (1.2)</td><td>6.3 (1.1)</td></tr><tr><td>Beam Search View</td><td>5.6 (1.5)</td><td>6.0 (1.3)</td><td>4.5 (1.8)</td></tr><tr><td>Attention View</td><td>5.6 (0.8)</td><td>6.2 (1.2)</td><td>5.9 (0.9)</td></tr></table>
The general effectiveness of translating a large document containing more than 100 sentences with our approach was rated high (mean $= 5.6$, std. dev. $= 1.0$) compared to a small document containing up to 20 sentences (mean $= 4.5$, std. dev. $= 1.6$). The results for effectiveness, ease of understanding and intuitiveness of visualizations, and ease of interaction are given in Table 2. The ratings for the visualizations were high for all views. The Metrics View was rated best and additionally had the lowest standard deviation. As not all our user study participants were visualization experts, we noticed that non-experts could also manage to understand and work with parallel coordinates plots. We conclude that our design choice for the visualization of metrics was appropriate. The ratings for interaction were also very high, but there was more variation. Especially the interaction for beam search was rated comparatively low and had the highest standard deviation; two language processing participants rated it very low (1 and 2) and two (one from each participant group) very high (7). This variation might result from the learning curve differing between participant groups. Since we conducted the user study, we have also improved the interaction in this view. For effectiveness, the Keyphrase View had the lowest rating. We believe the reason is that participants were not able to detect enough mistranslated sentences with this view. However, this might be due to the document we provided and may differ for other documents containing more domain-specific vocabulary, as we showed in our case study.
In addition, we asked users for general feedback on our approach. Especially the Metrics View received positive feedback. Participants mentioned that it is useful for quickly detecting mistranslations through brushing and linking. For the Beam Search View, one participant noted that the alternatives provided would speed up the correction of translations. For one participant, the Attention View was useful in showing the differences in the sentence structure of different languages. Negative feedback was mostly related to interaction and specific features; some participants suggested new features. Multiple participants noted that the exploration and correction of long sentences are challenging in the Beam Search View as the size of the viewport is limited. Furthermore, a feature to delete individual words and functionality for freezing areas were suggested. Based on the remaining feedback, we already included, for example, an undo function for the sentence views. Also, to find sentences that might contain similar errors, one participant recommended showing sentences similar to a selected sentence, and we added a respective metric. Additionally, it was mentioned that confidence scores could be shown in the document list next to each sentence and not only in the Metrics View. This would be helpful to quickly examine the confidence value even if the document is sorted by a different metric (e.g., document order); small histograms were added next to each sentence as a quick quality overview.
## 6 DISCUSSION AND FUTURE WORK
To conclude, we present a visual analytics approach for exploring, understanding, and correcting translations created by NMT. Our approach supports users in translating large domain-specific documents with interactive visualizations in different views, and it allows real-time sentence correction and model adaptation.
Our qualitative user study showed that our visual analytics system was rated positively regarding effectiveness, interpretability of visualizations, and ease of interaction. The participants mastered the translation process well with our selected visualizations. In particular, our choice of parallel coordinates plots for the visualization of multiple metrics and the related brushing-and-linking interaction techniques were rated positively. Participants clearly preferred our approach for translating large documents over a traditional text-based approach. Right now, users have to rely on metrics to decide with which sentence to start correcting the translations. More research is needed on better automatic detection of mistranslated sentences. For example, an additional machine learning model could be trained with sentences that were already identified as wrong translations.
We believe our system is useful for people who have to deal with large documents and could benefit from its interactive sentence correction and domain adaptation. Comparing the use of our approach for the LSTM and the Transformer architecture showed almost no difference; for both, we could successfully and interactively improve the translation quality of documents and see model-specific information. We argue that our general translation and visualization process can also be used with further models, although some visualization views might then need limited adaptation.
## REFERENCES
[1] J. Albrecht, R. Hwa, and G. E. Marai. The Chinese Room: Visualization and interaction to understand and correct ambiguous machine translation. Computer Graphics Forum, 28(3):1047-1054, 2009.
|
| 228 |
+
|
| 229 |
+
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
|
| 230 |
+
|
| 231 |
+
[3] O. Bojar, R. Chatterjee, C. Federmann, Y. Graham, B. Haddow, M. Huck, A. Jimeno Yepes, P. Koehn, V. Logacheva, C. Monz, M. Ne-gri, A. Neveol, M. Neves, M. Popel, M. Post, R. Rubino, C. Scarton, L. Specia, M. Turchi, K. Verspoor, and M. Zampieri. Findings of the 2016 Conference on Machine Translation (WMT16). In Proceedings of the First Conference on Machine Translation, Volume 2: Shared Task Papers, pp. 131-198. Association for Computational Linguistics, 8 2016.
|
| 232 |
+
|
| 233 |
+
[4] D. Cashman, G. Patterson, A. Mosca, and R. Chang. RNNbow: Visualizing learning via backpropagation gradients in recurrent neural networks. In Workshop on Visual Analytics for Deep Learning (VADL), 2017.
|
| 234 |
+
|
| 235 |
+
[5] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724-1734. Association for Computational Linguistics, 2014.
|
| 236 |
+
|
| 237 |
+
[6] J. Choo and S. Liu. Visual analytics for explainable deep learning. IEEE Computer Graphics and Applications, 38(4):84-92, Jul 2018.
|
| 238 |
+
|
| 239 |
+
[7] C. Chu and R. Wang. A survey of domain adaptation for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 1304-1319. Association for Computational Linguistics, 2018.
|
| 240 |
+
|
| 241 |
+
[8] K. Clark, U. Khandelwal, O. Levy, and C. D. Manning. What does BERT look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 276-286. Association for Computational Linguistics, Florence, Italy, Aug. 2019. doi: 10.18653/v1/W19-4828
|
| 242 |
+
|
| 243 |
+
[9] C. Collins, S. Carpendale, and G. Penn. Visualization of uncertainty in lattices to support decision-making. In Proceedings of Eurograph-ics/IEEE VGTC Symposium on Visualization (EuroVis 2007), pp. 51-58, 2007.
|
| 244 |
+
|
| 245 |
+
[10] DeepL. DeepL Translator. https://www.deepl.com/translator, 2021.
|
| 246 |
+
|
| 247 |
+
[11] M. Denkowski and G. Neubig. Stronger baselines for trustable results in neural machine translation. In Proceedings of the First Workshop on
|
| 248 |
+
|
| 249 |
+
Neural Machine Translation, pp. 18-27. Association for Computational Linguistics, 2017.
|
| 250 |
+
|
| 251 |
+
[12] R. Garcia, A. C. Telea, B. C. da Silva, J. Tørresen, and J. L. D. Comba.
|
| 252 |
+
|
| 253 |
+
A task-and-technique centered survey on visual analytics for deep learning model engineering. Computers & Graphics, 77:30-49, 2018.
|
| 254 |
+
|
| 255 |
+
[13] Google. Google Translate. https://translate.google.com, 2021.
|
| 256 |
+
|
| 257 |
+
[14] A. Graves. Sequence transduction with recurrent neural networks. CoRR, abs/1211.3711, 2012.
|
| 258 |
+
|
| 259 |
+
[15] K. M. Hermann, T. Kocisky, E. Grefenstette, L. Espeholt, W. Kay, M. Suleyman, and P. Blunsom. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, eds., Advances in Neural Information Processing Systems 28, pp. 1693-1701. Curran Associates, Inc., 2015.
|
| 260 |
+
|
| 261 |
+
[16] F. M. Hohman, M. Kahng, R. Pienta, and D. H. Chau. Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics, pp. 1-1, 2018.
|
| 262 |
+
|
| 263 |
+
[17] A. Inselberg. The plane with parallel coordinates. The Visual Computer, 1(2):69-91, Aug 1985. doi: 10.1007/BF01898350
|
| 264 |
+
|
| 265 |
+
[18] A. Karpathy, J. Johnson, and F. Li. Visualizing and understanding recurrent networks. CoRR, abs/1506.02078, 2015.
|
| 266 |
+
|
| 267 |
+
[19] D. A. Keim, F. Mansmann, J. Schneidewind, J. Thomas, and H. Ziegler. Visual analytics: Scope and challenges. In Visual data mining, pp. 76-90. Springer, 2008.
|
| 268 |
+
|
| 269 |
+
[20] P. Koehn and R. Knowles. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pp. 28-39. Association for Computational Linguistics, 2017.
|
| 270 |
+
|
| 271 |
+
[21] K. Kucher and A. Kerren. Text visualization techniques: Taxonomy, visual survey, and community insights. In 2015 IEEE Pacific Visualization Symposium (PacificVis), pp. 117-121, 2015.
|
| 272 |
+
|
| 273 |
+
[22] J. Lee, J.-H. Shin, and J.-S. Kim. Interactive visualization and manipulation of attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 121-126. Association for Computational Linguistics, 2017.
|
| 274 |
+
|
| 275 |
+
[23] S. Liu, X. Wang, M. Liu, and J. Zhu. Towards better analysis of machine learning models: A visual analytics perspective. Visual Informatics, 1(1):48-56, 2017.
|
| 276 |
+
|
| 277 |
+
[24] Y. Ming, S. Cao, R. Zhang, Z. Li, Y. Chen, Y. Song, and H. Qu. Understanding hidden memories of recurrent neural networks. In 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 13-24, 2017.
|
| 278 |
+
|
| 279 |
+
[25] T. Munzner. A nested model for visualization design and validation. IEEE Transactions on Visualization and Computer Graphics, 15(6):921-928, 2009.
|
| 280 |
+
|
| 281 |
+
[26] M. Rikters. Debugging neural machine translations. arXiv preprint arXiv:1808.02733, 2018.
|
| 282 |
+
|
| 283 |
+
[27] M. Rikters, M. Fishel, and O. Bojar. Visualizing neural machine translation attention and confidence. The Prague Bulletin of Mathematical Linguistics, 109(1):39-50, 2017.
|
| 284 |
+
|
| 285 |
+
[28] J. C. Roberts. State of the art: Coordinated multiple views in exploratory visualization. In Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV 2007), pp. 61-71, 2007.
|
| 286 |
+
|
| 287 |
+
[29] M. Sedlmair, M. Meyer, and T. Munzner. Design study methodology: Reflections from the trenches and the stacks. IEEE Transactions on Visualization and Computer Graphics, 18(12):2431-2440, 2012. doi: 10.1109/TVCG.2012.213
|
| 288 |
+
|
| 289 |
+
[30] R. Sennrich, B. Haddow, and A. Birch. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 11-12, Berlin, Germany, pp. 371-376. The Association for Computer Linguistics, 2016. doi: 10.18653/v1/w16-2323
|
| 290 |
+
|
| 291 |
+
[31] R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725. Association for Computational Linguistics, 2016.
|
| 292 |
+
|
| 293 |
+
[32] H. Strobelt, S. Gehrmann, M. Behrisch, A. Perer, H. Pfister, and A. M. Rush. Seq2seq-vis : A visual debugging tool for sequence-to-sequence
|
| 294 |
+
|
| 295 |
+
models. IEEE Transactions on Visualization and Computer Graphics,
|
| 296 |
+
|
| 297 |
+
pp. 1-1, 2018.
|
| 298 |
+
|
| 299 |
+
[33] H. Strobelt, S. Gehrmann, H. Pfister, and A. M. Rush. LSTMVis: A tool for visual analysis of hidden state dynamics in recurrent neural networks. IEEE Transactions on Visualization and Computer Graphics, 24(1):667-676, 2018.
|
| 300 |
+
|
| 301 |
+
[34] Z. Tan, S. Wang, Z. Yang, G. Chen, X. Huang, M. Sun, and Y. Liu. Neural machine translation: A review of methods, resources, and tools, 2020.
|
| 302 |
+
|
| 303 |
+
[35] Z. Tu, Y. Liu, L. Shang, X. Liu, and H. Li. Neural machine translation with reconstruction. In Thirty-First AAAI Conference on Artificial Intelligence (AAAI), pp. 3097-3103, 2017.
|
| 304 |
+
|
| 305 |
+
[36] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. ArXiv e-prints, June 2017.
|
| 306 |
+
|
| 307 |
+
[37] J. Vig. A multiscale visualization of attention in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 37-42. Association for Computational Linguistics, Florence, Italy, July 2019. doi: 10. 18653/v1/P19-3007
|
| 308 |
+
|
| 309 |
+
[38] J. Vig. Visualizing attention in transformerbased language models. arXiv preprint arXiv:1904.02679, 2019.
|
| 310 |
+
|
| 311 |
+
[39] M. O. Ward. Linking and Brushing, pp. 1623-1626. Springer US, Boston, MA, 2009. doi: 10.1007/978-0-387-39940-9_1129
|
| 312 |
+
|
| 313 |
+
[40] Wikipedia. Autonomes fahren - wikipedia, die freie enzyklopädie, 2018.
|
| 314 |
+
|
| 315 |
+
[41] Wikipedia. Maschinelle übersetzung — wikipedia, die freie enzyk-lopädie, 2021.
|
| 316 |
+
|
| 317 |
+
[42] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, Ł. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. ArXiv e-prints, Sept. 2016.
|
| 318 |
+
|
| 319 |
+
[43] S. Yang, Y. Wang, and X. Chu. A survey of deep learning techniques for neural machine translation. arXiv preprint arXiv:2002.07526, 2020.
|
| 320 |
+
|
| 321 |
+
[44] S. Yang, Y. Wang, and X. Chu. A survey of deep learning techniques for neural machine translation. 2020.
|
| 322 |
+
|
| 323 |
+
[45] J. Yuan, C. Chen, W. Yang, M. Liu, J. Xia, and S. Liu. A survey of visual analytics techniques for machine learning. Computational Visual Media, pp. 1-34, 2020.
|
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/DQHaCvN9xd/Initial_manuscript_tex/Initial_manuscript.tex
§ VISUAL-INTERACTIVE NEURAL MACHINE TRANSLATION
Category: Research

Figure 1: The main view of our neural machine translation system: (A) Document View with sentences of the document for the current filtering settings, (B) Metrics View with sentences of the filtering result highlighted, and (C) Keyphrase View with a set of rare words that may be mistranslated. The Document View initially contains all sentences automatically translated with the NMT model. After filtering with the Metrics View and Keyphrase View, a smaller selection of sentences is shown. Each entry in the Document View provides information about metrics, the correction state, and functionality for modification (on the right side next to each sentence). The Metrics View represents each sentence as one path and shows values for different metrics (e.g., correlation, coverage penalty, sentence length). Green paths correspond to sentences of the current filtering. One sentence is highlighted (yellow) in both the Metrics View and the Document View.

§ ABSTRACT

We introduce a novel visual analytics approach for analyzing, understanding, and correcting neural machine translation. Our system supports users in automatically translating documents using neural machine translation and in identifying and correcting possibly erroneous translations. User corrections can then be used to fine-tune the neural machine translation model and automatically improve the whole document. While translation results of neural machine translation can be impressive, there are still many challenges such as over- and under-translation, domain-specific terminology, and handling long sentences, making it necessary for users to verify translation results; our system aims at supporting users in this task. Our visual analytics approach combines several visualization techniques in an interactive system. A parallel coordinates plot with multiple metrics related to translation quality can be used to find, filter, and select translations that might contain errors. An interactive beam search visualization and a graph visualization for attention weights can be used for post-editing and understanding machine-generated translations. The machine translation model is updated with user corrections to improve the translation quality of the whole document. We designed our approach for an LSTM-based translation model and extended it to also include the Transformer architecture. For representative examples, we show possible mistranslations and how our system can be used to deal with them. A user study revealed that many participants favor such a system over manual text-based translation, especially for translating large documents.

Index Terms: Human-centered computing-Visualization-Visualization application domains-Visual analytics; Human-centered computing-Visualization-Visualization systems and tools; Computing methodologies-Artificial intelligence-Natural language processing-Machine translation
§ 1 INTRODUCTION
Machine learning and especially deep learning are popular and rapidly growing fields in many research areas. The results created with machine learning models are often impressive but sometimes still problematic. Currently, much research is being conducted to better understand, explain, and interact with these models. In this context, visualization and visual analytics methods are suitable and increasingly used to explore different aspects of these models. Available techniques for visual analytics in deep learning were examined by Hohman et al. [16]. While a large amount of work is available for explainability in computer vision, less work exists for machine translation.

As it becomes increasingly important to communicate in different languages, and since information should be available to a huge range of people from different countries, many texts have to be translated. Doing this manually takes much effort. Nowadays, online translation systems like Google Translate [13] or DeepL [10] support humans in translating texts. However, the translations generated that way are often not as expected or not how someone familiar with both languages would translate them. They may also fail to express someone's translation style or to use the correct terminology of a specific domain or occasion. Often, more background knowledge about the text is required to translate documents appropriately.

With the introduction of deep learning methods, the translation quality of machine translation models has improved considerably in recent years. However, there are still difficulties that need to be addressed. Common problems of neural machine translation (NMT) models are, for instance, over- and under-translation [35], where words are translated repeatedly or not at all. Handling rare words [20], which may appear in specific documents, and long sentences are also issues. Domain adaptation [20] is another challenge; especially documents from specific domains such as medicine, law, or science require high-quality translations [7]. As many NMT models are trained on general data sets, their translation performance is worse for domain-specific texts.

Figure 2: The detailed view for a selected sentence consists of the Sentence View (A), the Attention View (B), and the Beam Search View (C). The Sentence View allows text-based modifications of the translation. The Attention View shows the attention weights (represented by the lines connecting source words with their translation) for the translation. The Beam Search View provides an interactive visualization that shows different translation possibilities and allows exploration and correction of the translation. All three areas are linked.
If high-quality translations of large texts are required, it is insufficient to use machine translation models alone. These models are computationally efficient and able to translate large documents with little time effort, but they may create erroneous or inappropriate translations. Humans are very slow compared to these models, but they can detect and correct mistranslations when familiar with the languages and the domain terminology. A visual analytics system can combine both of these capabilities. Such a system should provide the translations from an NMT model together with possibilities for users to visually explore translation results to find mistranslated sentences, correct them, and steer the machine learning model.

We have developed a visual analytics approach to reach the goals outlined above. First, our system performs automatic translation of a whole, possibly large, document and shows the result in the Document View (Figure 1). Users can then explore and modify the document in different views [28] (Figure 2) to improve translations and use these corrections to fine-tune the NMT model. We support different NMT architectures and use both an LSTM-based and a Transformer architecture.
So far, visual analytics systems for deep learning have mostly been available for computer vision and some text-related areas, focusing on smaller parts of machine translation [22, 27] or intended for domain experts to gain insight into the models or to debug them [32, 33]. This work contributes to visualization research by introducing the application domain of NMT using a user-oriented visual analytics approach. In our system, we employ different visualization techniques adapted for usage with NMT. Our parallel coordinates plot (Figure 1 (B)) supports the visualization of different metrics related to text quality. The interaction techniques in our graph-based visualization for attention (Figure 2 (B)) and tree-based visualization for beam search (Figure 2 (C)) are specifically designed for text exploration and modification. They have a strong coupling to the underlying model. Furthermore, our system has a fast feedback loop and allows interaction in real time. We demonstrate our system's features in a video and will provide the source code of our system with the published paper.
§ 2 RELATED WORK
This section first discusses visualization and visual analytics approaches for language translation in general and then visual analytics of deep learning for text. Afterward, we provide an overview of work that combines both areas in the context of NMT.

Many visualization techniques and visual analytics systems exist for text; see Kucher and Kerren [21] for an overview. However, there is little work on exploring and modifying translation results. An interactive system to explore and correct translations was introduced by Albrecht et al. [1]. While the translation was created by machine translation, their system did not use deep learning. Collins et al. [9] used lattice structures with uncertainty to visualize machine translation; they created a lattice structure from beam search where the path for the best translation result is highlighted and can be corrected. We also use a visualization for beam search, but ours is based on a tree structure.

Recently, much research has been done to visualize deep learning models in order to understand them better. Multiple surveys [6, 12, 16, 23, 45] are available that summarize existing visual analytics systems. It is noticeable that not much work exists related to text-based domains. One of the few examples is RNN-Vis [24], a visual analytics system designed to understand and compare models for natural language processing by considering hidden state units. Karpathy et al. [18] explore the predictions of Long Short-Term Memory (LSTM) models by visualizing activations on text. Heatmaps are used by Hermann et al. [15] to visualize attention for machine-reading tasks. To explore the training process and better understand how the network is learning, RNNbow [4] can be used to visualize the gradient flow during backpropagation training in Recurrent Neural Networks (RNNs).

While the previous systems support the analysis of deep learning models for text domains in general, approaches exist to specifically explore and understand NMT. Bahdanau et al. [2] were the first to introduce visualizations for attention; they showed the contribution of source words to translated words within a sentence using an attention weight matrix. Later, Rikters et al. [27] introduced multiple ways to visualize attention and implemented exploration of a whole document. They visualize attention weights with a matrix and a graph-based visualization connecting source words and translated words by lines whose thickness represents the attention weight. Bar charts give an overview of a whole document for multiple attention-based metrics that are supposed to correlate with the translation quality. Interactive ordering of these metrics and sentence selection is possible. However, for large documents it is difficult to compare the different metrics, as each bar chart is horizontally too large to be shown entirely on a display. The only connection between different bar charts is that the bars for the currently selected sentence are highlighted. Our system also uses such a metrics approach, but instead of bar charts, we chose a parallel coordinates plot for better scalability, interaction, and filtering.
An interactive visualization approach for beam search is provided by Lee et al. [22]. The interaction techniques supported by their tree structure are quite limited. It is possible to expand the structure and to change attention weights. However, it is not possible to add unknown words, and no sub-word units are considered. Furthermore, the exploration is limited to single sentences instead of a whole document.

With LSTMVis, Strobelt et al. [33] introduced a system to explore LSTM networks by showing hidden state dynamics. Among other application areas, their approach is also suitable for NMT. While our approach is intended for end users, LSTMVis has the goal of supporting model debugging by researchers and machine learning developers. With Seq2Seq-Vis, Strobelt et al. [32] present a system that uses an attention view similar to ours, and they also provide an interactive beam search visualization. However, their system is designed to translate single sentences, and no model adaptation is possible for improved translation quality. Their system was designed for debugging and for gaining insight into the models.

Since different architectures are available for generating translations [43], specific visualization approaches may be required. Often, LSTM-based architectures are used. Recently, the Transformer architecture [36] gained popularity; Vig [37, 38] visually explores its self-attention layers, and Rikters et al. [26] extended their previous document-debugging approach to Transformer-based systems.

All these systems provide different, possibly interactive, visualizations. However, their goal is rather to debug NMT models than to support users in translating entire documents, or they are limited to small aspects of the model. Additionally, they are usually designed for one specific translation model. None of these approaches provide extended interaction techniques for beam search or interactive approaches to iteratively improve the translation quality of a whole document.
§ 3 VISUAL ANALYTICS APPROACH
Our visual analytics approach allows the automatic translation, exploration, and correction of documents. Its components can be split into multiple parts. First, a document is automatically translated from one language into another, then mistranslated sentences in the document are identified by users, and individual sentences can be explored and corrected. Finally, the model can be fine-tuned and the document retranslated.

Our approach has a strong link to machine data processing and follows the visual analytics process presented by Keim et al. [19]. We use visualizations for different aspects of NMT models, and users can interact with the provided information.
§ 3.1 REQUIREMENTS
For the development of our system, we followed the nested model by Munzner [25]. The main focus was on the outer parts of the model, including the identification of domain issues, the design of features, and the implementation of visualizations and interactions. Additionally, we used a process similar to that of Sedlmair et al. [29], especially focusing on the core phases. Design decisions were made in close cooperation with deep learning and NMT experts, who are also co-authors of this paper. The visual analytics system was implemented in a formative process that included these experts. Our system went through an iterative development that included multiple meetings with our domain experts. Together, we identified the requirements listed in Table 1. After implementing the basic prototype of the system, we demonstrated it to further domain experts. At a later stage, we performed a small user study with experts in visualization and machine translation. For our current prototype, we added functionality recommended by these experts.

Table 1: Requirements for our visual analytics system and their implementations in our approach.
**R1** Automatic translation - A document is translated automatically by an NMT model.

**R2** Overview - The user can see the whole document as a list of all source sentences and their translations (Figure 1 (A)). Additionally, an overview of the translation quality is provided in the Metrics View, which reveals statistics about different metrics encoded as a parallel coordinates plot (Figure 1 (B)) showing an overall quality distribution.

**R3** Find, filter, and select relevant sentences - Interaction in the parallel coordinates plot allows filtering according to different metrics and selecting specific sentences. It is also possible to select one sentence and order the other sentences of the document by similarity to check whether similar sentences contain similar errors. Additionally, our Keyphrase View (Figure 1 (C)) supports selecting sentences containing specific keywords that might be domain-specific and rarely used in general documents.

**R4** Visualize and modify sentences - For each sentence, a beam search and attention visualization (Figure 2) can be used to interactively explore and adapt the translation result in order to correct erroneous sentences and explore how a translation failed. It is also possible to explore alternative translations.

**R5** Update model and translation - The model can be fine-tuned using the user inputs from translation corrections; this is especially useful for domain adaptation. Afterward, the document is retranslated with the updated model in order to improve the translation result (the result is visualized similarly to Figure 9).

**R6** Generalizability and extensibility - While we initially designed our visualization system for one translation model, we soon noticed that our approach should handle data from other translation models as well. Therefore, our approach should be easily adaptable to new models to cope with the dynamic development of new deep learning architectures. Our general translation and correction process is kept largely model-agnostic so that it can be applied to a variety of models. Only model-specific visualizations have limitations and need to be adapted or exchanged when a different translation architecture is used.

**R7** Target group - The target group for our system is quite broad and includes professional translators and students who need to translate documents. However, the system should also be usable by other people interested in correcting and possibly better understanding the results of automated translation.
§ 3.2 NEURAL MACHINE TRANSLATION
The goal of machine translation is to translate a sequence of words from a source language into a sequence of words in a target language. Different approaches exist to achieve this goal [34, 44].
Usually, neural networks for machine translation are based on an encoder-decoder architecture. The encoder is responsible for transforming the source sequence into a fixed-length representation known as a context vector. Based on the context vector, the decoder generates an output sequence where each element is then used to generate a probability distribution over the target vocabulary. These probabilities are then used to determine the target sequence; a common method to achieve this uses beam search decoding [14].
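The decoding step described above can be made concrete with a small, framework-free sketch of beam search: at every step, each surviving hypothesis is extended with candidate tokens, and only the highest-scoring partial translations are kept. The toy probability model and its German/English vocabulary are invented for illustration and stand in for a real decoder's softmax output.

```python
import math

def beam_search(next_token_probs, beam_width=2, max_len=4, eos="</s>"):
    """Keep the `beam_width` most probable partial hypotheses per step,
    scoring each hypothesis by its summed token log-probabilities."""
    beams = [([], 0.0)]  # (token sequence, log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos:   # finished hypothesis: keep unchanged
                candidates.append((seq, score))
                continue
            for token, p in next_token_probs(seq).items():
                candidates.append((seq + [token], score + math.log(p)))
        # prune to the best `beam_width` hypotheses
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(seq[-1] == eos for seq, _ in beams):
            break
    return beams

def toy_model(prefix):
    """Invented stand-in for the decoder's probability distribution."""
    if not prefix:
        return {"das": 0.6, "ein": 0.4}
    if prefix[-1] in ("das", "ein"):
        return {"Auto": 0.7, "Fahrzeug": 0.3}
    return {"</s>": 1.0}
```

A beam width of 1 degenerates to greedy decoding; wider beams trade computation for a better chance of finding the globally most probable sequence.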
Although different NMT models vary in their architecture, the previously described encoder-decoder design should apply to a wide range of architectures and new approaches that may be developed in the future (R6). In this work, we explored an LSTM architecture with attention and extended our approach to include the Transformer architecture, thus verifying its ability to generalize.
One of the first neural network architectures for machine translation consists of two RNNs with LSTM units [5]. To handle long sentences, the attention mechanism for NMT [2] was introduced. It allows sequence-to-sequence models to attend to different parts of the source sequence while predicting the next element of the target sequence by giving the decoder access to the encoder's weighted hidden states. During decoding, the hidden states of the encoder together with the hidden state of the decoder for the current step are used to compute the attention scores. Finally, the context vector for the current step is computed as a sum of the encoder hidden states, weighted by the attention scores. The attention weights can be easily visualized and used to explain why a neural network model predicted a certain output. Furthermore, attention weights can be understood as a soft alignment between source and target sequences: for each translated word, the weight distribution over the source sequence signifies which source words were most important for predicting this target word.

The Transformer architecture was recently introduced by Vaswani et al. [36] and gained much popularity. It uses a more complex attention mechanism with multi-head attention layers; in particular, self-attention plays an important role in the translation process. We verify its applicability to our approach and visualize only the part of the attention information that showed an alignment between source and target sentences comparable to the LSTM model.
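The weighted sum described above can be sketched in a few lines of NumPy. For brevity this uses simple dot-product scoring, whereas the model of Bahdanau et al. [2] learns an additive scoring function; the toy vectors are illustrative only.

```python
import numpy as np

def attend(decoder_state, encoder_states):
    """Score each encoder hidden state against the current decoder state,
    normalize the scores with a softmax, and return the attention weights
    together with the context vector (the weighted sum of encoder states)."""
    scores = encoder_states @ decoder_state   # one score per source position
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    context = weights @ encoder_states        # context vector for this step
    return weights, context
```

Because the weights are non-negative and sum to one, they can be read directly as the soft source-target alignment that an attention visualization draws as weighted lines.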
§ 3.3 EXPLORATION OF DOCUMENTS
After uploading a document to our system, it is translated by an NMT model (R1). The main view of our approach then shows information about the whole document (R2). This includes a list of all sentences in the Document View (Figure 1 (A)) and an overview of the translation quality in the Metrics View (Figure 1 (B)). Using the Metrics View and Keyphrase View (Figure 1 (C)), sentences can be filtered to detect possible mistranslated sentences that can be flagged by the user (R3). Once a mistranslated sentence is found, it is also possible to filter for sentences containing similar errors (R3).
§ METRICS VIEW
In the Metrics View, a parallel coordinates plot (Figure 1 (B)) is used to detect possible mistranslated sentences by filtering sentences according to different metrics (R3). For instance, it is possible to find sentences that have low translation confidence.
Multiple metrics exist that are relevant to identify translations with low quality; we use the following metrics in our approach:
* Confidence: A metric that considers attention distribution for input and output tokens; it was suggested by Rikters et al. [27]. Here, a higher value is usually better.
* Coverage Penalty: This metric by Wu et al. [42] can be used to detect sentences where words did not get enough attention. Here, a lower value is usually better.
* Sentence length: The sentence length (the number of words in a source sentence) can be used to filter very short or long sentences. For example, long sentences might be more likely to contain errors.
* Keyphrases: This metric can be used to filter for sentences containing domain-specific words. As these words are rare in the training data, the initial translation of sentences containing them is likely erroneous. The values used for this metric are the number of occurrences of keyphrases in a sentence weighted by the frequency of the keyphrases in the whole document.
* Sentence similarity: Optionally, for a given sentence, the similarity to all other sentences can be determined using cosine similarity. This helps to find sentences with similar errors to a detected mistranslated sentence.
* Document index: The document index allows the user to sort sentences according to their original order in the document, which can be especially important for correcting translations where the context of sentences is relevant. Furthermore, this metric might also show trends like consecutive sentences with low confidence.
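Two of these metrics can be computed directly from the attention matrix and from sentence embeddings; the following is a simplified sketch. The sign convention for the coverage penalty (0 is best, larger is worse) is our assumption chosen to match "lower is better" above, and the attention-matrix orientation is illustrative:

```python
import numpy as np

def coverage_penalty(attn, beta=0.2, eps=1e-12):
    """Coverage penalty in the spirit of Wu et al. [42]: attn[i, j] is
    the attention that target step j pays to source token i.  Source
    tokens whose total received attention stays below 1 are penalized,
    which flags possible under-translation."""
    received = attn.sum(axis=1)            # total attention per source token
    return -beta * float(np.sum(np.log(np.minimum(received, 1.0) + eps)))

def sentence_similarity(a, b):
    """Cosine similarity between two sentence embedding vectors, used
    to find sentences similar to a detected mistranslation."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

full = np.eye(3)                  # every source token fully attended
sparse = np.array([[1.0, 1.0],    # source token 0 gets all attention,
                   [0.0, 0.0]])   # source token 1 gets none
```

With full coverage the penalty is (numerically) zero, while the `sparse` matrix, where one source token is ignored, yields a large penalty.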
In contrast to Rikters et al. [27], who use bar charts to visualize different metrics, we chose a parallel coordinates plot [17]. Each sentence can be mapped to one line in such a plot, and different metrics can be easily compared. These plots are useful for an overview of different metrics and to detect outliers and trends. Interactions with the metrics, such as highlighting lines or choosing filtering ranges, are supported. It can be expected that sentences filtered for both low confidence and high coverage penalty are more likely to be poorly translated than sentences falling into only one of these categories.
§ KEYPHRASE VIEW
It is possible to search for sentences according to keyphrases by selecting them in the Keyphrase View (Figure 1 (C)) (R3). This can be visualized as shown in Figure 4. Keyphrases are domain-specific words that were rarely included in the training data used for our model. As the model does not have enough knowledge of how to deal with these words, it is important to verify whether the respective sentences were translated correctly. In addition to automatically determined keyphrases, users can manually specify further keyphrases for sentence filtering.
§ DOCUMENT VIEW
A list of all the source sentences in a document and a list of their translations are shown in the Document View (Figure 1 (A)) (R2). Each entry in this list can be marked as correct or flagged (Figure 4) for later correction. A small histogram shows an overview of the previously mentioned metrics. If a sentence is modified, either through user-correction or retranslation by the fine-tuned model, changes in the sentences are highlighted (Figure 9). Both the Metrics View and the Keyphrase View are connected via brushing and linking [39] to allow filtering for sentences that are likely to be mistranslated and should be examined and possibly corrected. Additionally, sentences can be sorted into a list by similarity to a user-selected reference sentence. In this list, sentences can be selected for further exploration and correction in more detailed sentence-based views.
§ 3.4 EXPLORATION AND CORRECTION OF SENTENCES
After filtering and selection, a sentence can be further analysed with the Sentence, Attention, and Beam Search Views (Figure 2) and subsequently corrected (R4). These views are shown simultaneously to allow interactive exploration and modification of translations.
Note that, on the sentence level, we use subword units to handle the problem of rare words, which often occur in domain-specific documents, and to avoid unknown words. We use the Byte Pair Encoding (BPE) method proposed by Sennrich et al. [31], which compresses text by recursively joining frequent pairs of characters into new subwords. This means that, instead of choosing whole words to build the source and target vocabulary, words are split into subword units consisting of possibly multiple characters. This method reduces model size, complexity, and training time. Additionally, the model can handle unknown words by splitting them into their subword units. As these subword units are known beforehand, they do not require the introduction of an "unknown" token for translation. Thus, we can adapt the NMT model to any new domain, including those with vocabulary not seen at training time.
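Learning such BPE merge operations can be sketched in a few lines, in the style of the reference implementation by Sennrich et al. [31]; the toy corpus and the number of merges here are illustrative:

```python
import re
from collections import Counter

def get_pair_stats(vocab):
    """Count adjacent symbol pairs, weighted by word frequency.  vocab
    maps a space-separated symbol sequence (one word) to its frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

def merge_pair(pair, vocab):
    """Join every occurrence of the pair into a single new subword."""
    pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(pair)) + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in vocab.items()}

# Toy corpus: words split into characters plus an end-of-word marker.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2, 'n e w e s t </w>': 6}
merges = []
for _ in range(3):                       # learn three merge operations
    stats = get_pair_stats(vocab)
    best = max(stats, key=stats.get)
    vocab = merge_pair(best, vocab)
    merges.append(best)
```

At translation time, the learned `merges` are applied in order to any word, so even words never seen during training decompose into known subword units.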
Figure 3: Attention visualization: (top) when hovering a source word (here: 'verarbeiten') translated words influenced by the source are highlighted and (bottom) when hovering a translated word (here: 'process') source words that influence the translation are highlighted according to attention weights.
§ SENTENCE VIEW
Similar to common translation systems, the Sentence View (Figure 2 (A)) shows the source sentence and the current translation. It is possible to manually modify the translation, which in turn updates the content in the other sentence-based views. After adding a new word in the text area, the translation with the highest score is used for the remainder of the sentence. This supports a quick text-based modification of a translation without explicit use of visualizations.
§ ATTENTION VIEW
The Attention View depends on the underlying NMT model. It visualizes the relationship between words of the source sentence and the current translation as a weighted graph (Figure 2 (B)); such a technique was also used by Strobelt et al. [32]. Both source and translated words are represented by nodes; links between such words show the attention weights, encoded by the thickness of the connecting lines (we use a predefined threshold to hide lines for very low attention). These weights correlate with the importance of source words for the translated words. Hovering over a source word highlights the connecting lines to translated words starting at this word. In addition, the translated words are highlighted by transparency according to the attention weights (Figure 3 top). While this shows how a source word contributes to the translation, it is also possible to show, for translated words, how source words contribute to the translation (Figure 3 bottom). This interactive visualization supports users in understanding how translations are generated from the source sentence words. On the one hand, such a visualization helps users gain insight into the NMT model; on the other hand, it helps them detect issues in generated translations. The links between source sentence and translation can be explored to identify anomalies such as under- or over-translation. Missing attention weights can be an indication of under-translation, and links to multiple translated words can indicate over-translation. In our case study in Section 4, examples of these cases are presented. While this technique specifically employs information of the attention-based LSTM model, we use it in an adapted form for the Transformer architecture (see Section 4.4). A visualization more tailored to Transformers, also including self-attention and attention scores from multiple decoder layers, could provide additional information.
Further models may need different visualizations for a generalized use of our approach, employing model-specific information.
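The thresholded edge list behind such a graph can be sketched as follows; the threshold value, the attention-matrix orientation, and the example tokens are our illustrative assumptions:

```python
import numpy as np

def attention_edges(attn, src_tokens, tgt_tokens, threshold=0.1):
    """Turn an attention matrix (rows: source tokens, columns: target
    tokens) into the weighted edge list drawn in an attention graph;
    edges below the threshold are hidden to reduce visual clutter."""
    edges = []
    for i, src in enumerate(src_tokens):
        for j, tgt in enumerate(tgt_tokens):
            weight = float(attn[i, j])
            if weight >= threshold:
                edges.append((src, tgt, weight))
    return edges

attn = np.array([[0.90, 0.05],
                 [0.10, 0.95]])
edges = attention_edges(attn, ['wir', 'verarbeiten'], ['we', 'process'])
```

Hovering a node then amounts to selecting the subset of `edges` incident to that token and scaling the opacity of the connected words by the edge weights.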
§ BEAM SEARCH VIEW
While the Attention View can be used to identify positions with mistranslations, the Beam Search View supports users in interactively modifying and correcting translations. The Beam Search View visualizes multiple translations created by the beam search decoding as a hierarchical structure (see Figure 2 (C)). This interactive visualization can be used for post-editing the translations.
The simplest way of predicting a target sequence is greedy decoding, where at every time step, the token with the highest output probability is chosen as the next predicted token and fed to the decoder in the next step. This is an efficient, straightforward way of generating an output sequence. However, another translation may be better overall, despite having lower probabilities for the first words. Beam search decoding [14] is a compromise between exhaustive search and greedy decoding, often used for generating the final translation. A fixed number $k$ of hypotheses is considered at each time step. For each hypothesis considered, the NMT model outputs a probability distribution over the target vocabulary for the next token. These hypotheses are sorted by the probability of the latest token, and up to $k$ hypotheses remain in the beam. Hypotheses ending with the End-of-Sequence (EOS) token are filtered out and put into the result set. Once $k$ hypotheses are in the result set, the beam search stops, and the final hypotheses are ranked according to a scoring function that depends on the attention weights and the sentence length.
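The decoding loop can be sketched as follows. The toy next-token distribution stands in for the real NMT model, and plain log-probability ranking stands in for the attention- and length-based scoring function described above:

```python
import heapq
import math

def beam_search(step_fn, sos, eos, k=3, max_len=20):
    """Minimal beam search: step_fn(prefix) returns {token: prob} for
    the next token.  The k best hypotheses are kept per step; finished
    hypotheses (ending in eos) move to the result set, and the search
    stops once k of them have been collected."""
    beam = [(0.0, [sos])]                    # (log-probability, token sequence)
    results = []
    for _ in range(max_len):
        candidates = []
        for logp, seq in beam:
            for token, prob in step_fn(tuple(seq)).items():
                candidates.append((logp + math.log(prob), seq + [token]))
        beam = []
        for logp, seq in heapq.nlargest(k, candidates):
            if seq[-1] == eos:
                results.append((logp, seq))
            else:
                beam.append((logp, seq))
        if len(results) >= k or not beam:
            break
    return sorted(results, reverse=True)

# Toy "model": a lookup table of next-token distributions per prefix.
table = {
    ('<s>',): {'the': 0.6, 'a': 0.4},
    ('<s>', 'the'): {'cat': 0.9, '</s>': 0.1},
    ('<s>', 'a'): {'</s>': 1.0},
    ('<s>', 'the', 'cat'): {'</s>': 1.0},
}
hyps = beam_search(lambda prefix: table.get(prefix, {'</s>': 1.0}),
                   '<s>', '</s>', k=2)
```

Note how the greedy first step alone would not reveal the second-best hypothesis; keeping `k` hypotheses per step is what makes the alternatives in the Beam Search View available.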
For visualization, we use a similar approach as Strobelt et al. [32], and Lee et al. [22]: a tree structure reflects the inherently hierarchical nature of the beam search decoding. This way, translation hypotheses starting with the same prefixes are merged into one branch of this hierarchical structure. The root node of each translation is associated with a Start-of-Sequence (SOS) token and all leaf nodes with an End-of-Sequence (EOS) token. Compared to the visualization of a list of different suggested translations, showing a tree is more compact, and it is easier to recognize where commonalities of different translation variants lie.
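Merging hypotheses by shared prefix amounts to building a trie; a minimal sketch (the example hypotheses are illustrative):

```python
def build_beam_tree(hypotheses):
    """Merge translation hypotheses that share a prefix into one tree:
    each node is a {token: subtree} dict, so common beginnings collapse
    into a single branch of the hierarchy."""
    root = {}
    for tokens in hypotheses:
        node = root
        for token in tokens:
            node = node.setdefault(token, {})
    return root

tree = build_beam_tree([
    ['<s>', 'machine', 'translation', '</s>'],
    ['<s>', 'machine', 'translations', '</s>'],
    ['<s>', 'automatic', 'translation', '</s>'],
])
```

The two hypotheses starting with 'machine' share one branch and only fork at their last content token, which is exactly what makes the tree more compact than a flat list of variants.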
Each term of the translation is visualized by a circle that represents the actual node and a corresponding label. The color of a circle is mapped to the word's output probability. This supports users in identifying areas with a lower probability that might require further exploration. It can be seen as the uncertainty of the word prediction. In our visualization, we differentiate between nodes that represent subwords and whole words. Continuous lines connect subwords, and their nodes are placed closer together to form a unit. In contrast, connections to whole words are represented by dashed lines.
The beam search visualization can be used to navigate within a translation and edit it (Figures 7 and 8). The interaction can be performed either with the mouse or with the keyboard; the latter is more efficient for fast post-editing. The view supports standard panning-and-zooming techniques, which are especially needed to explore long sentences, as they do not fit on common displays. For navigation within the tree, arrow keys can be used to move through a sentence, or nodes can be selected by mouse cursor. If the translation of the current node's child is not satisfactory, the node can be expanded to show suggestions for correction. If the user selects a suggested word, the beam search runs with a lexical prefix constraint, and the tree structure gets updated. If the suggested words are not suitable, a custom correction can be performed by typing an arbitrary word that fits better. The number of suggested translations is initially set to three and can be increased by adapting the beam size. Increasing this value may create better translations and provides more alternative translations. However, the higher the value is, the more information has to be shown in the visualization. By hovering over and selecting elements in this view, corresponding elements of the Attention View and Sentence View are shown for reference.
Figure 4: Main view of the system: the Document View shows some flagged sentences for correction. Additionally, the keyphrase filter (top right) is active: all sentences containing the keyphrase 'MÜ' are shown in the Metrics and Document Views. It is visible that 'MÜ' is never correctly translated to 'MT'.
§ 3.5 MODEL FINE-TUNING AND RETRANSLATION
After correcting the translation of multiple sentences, the user corrections can be used to fine-tune the NMT model and automatically improve the translation of the not yet verified sentences (R5). This approach can be applied repeatedly to improve the document's translation quality, especially for domain-specific texts.
Documents often belong to a specific domain, e.g., legal, medical, or scientific. Each domain has specific terminology, and one word may even refer to different concepts in different domains. As such, the ability of NMT models to handle different types of domains is an important research topic. Domain adaptation refers to techniques that allow NMT models trained on general training data, also called out-of-domain data, to adapt to domain-specific documents, called in-domain data. This is useful since there may be an abundant amount of general training data, whereas domain-specific data may be rare. Since NMT models need a large amount of training data to achieve good translation quality, the out-of-domain data can be used to train a baseline model. The model can then be adapted using the in-domain data (R5), which typically contains a smaller number of sentences: in our system, we use the user-corrected sentences. This mitigates the problem of training an NMT model in a low-resource scenario where little data exists for a given domain. In our approach, we continue training on the in-domain data in a reduced way by freezing certain model weights (for the LSTM-based model, both the decoder and the LSTM layers of the encoder are trained; for the Transformer, only the decoder is trained).
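The idea of freezing weights during fine-tuning can be sketched, framework-free, as a gradient update that only touches selected parameter groups. Parameter names and shapes below are illustrative, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
params = {
    'encoder.embed': rng.normal(size=(10, 4)),
    'encoder.lstm':  rng.normal(size=(4, 4)),
    'decoder.lstm':  rng.normal(size=(4, 4)),
    'decoder.out':   rng.normal(size=(4, 10)),
}

def sgd_step(params, grads, lr, trainable_prefixes):
    """One fine-tuning update that changes only parameters whose name
    starts with a trainable prefix; frozen parameters keep the values
    learned on the out-of-domain data."""
    return {
        name: (value - lr * grads[name]
               if name.startswith(trainable_prefixes) else value)
        for name, value in params.items()
    }

# Gradients from a batch of user-corrected in-domain sentences (dummy here).
grads = {name: np.ones_like(value) for name, value in params.items()}
updated = sgd_step(params, grads, lr=0.1, trainable_prefixes=('decoder.',))
```

With `trainable_prefixes=('decoder.',)` only the decoder adapts to the in-domain corrections, mirroring the Transformer setting above; for the LSTM setting, the encoder's LSTM prefix would be added to the tuple.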
§ 4 CASE STUDY
As a typical use case, we take the German Wikipedia article for machine translation (Maschinelle Übersetzung) [41] as a document for translation into English. In the following, we show how to use our system to improve the translation quality of the document. Please see our accompanying video for a demonstration with the Transformer model. The examples in the following were created with both the LSTM and Transformer models. We trained our models on a general data set: the German-to-English data set from the 2016 ACL Conference on Machine Translation (WMT'16) [3] shared news translation task. This is a popular data set for NMT, used, for instance, by Denkowski and Neubig [11] and Sennrich et al. [30].
Figure 5: Example of over-translation: 'Examples' is placed twice as translation for the German word 'Beispiele'. The Beam Search View (right) shows possible alternative translations. However, only increasing the beam size to four shows the translation we would have expected.
§ 4.1 EXPLORATION OF DOCUMENTS
After uploading a document (R1), we have a look at the parallel coordinates plot (R2) for our initial translations and at the list of keyphrases in order to detect possible mistranslations (R3). In the Keyphrase View, we notice the domain-specific term 'MÜ' occurring very often. This term is the German abbreviation for 'machine translation' and should therefore be translated as 'MT'. However, none of the translations use the correct term (Figure 4). Additionally, one could select and verify sentences with low confidence or a high coverage penalty. Here, we especially notice the under-translation of some sentences. After verifying a translation in the Document View, users can decide whether it is correct (R2). If they do not agree with the translation, they can set a flag (Figure 4) to modify the translation later or switch to the sentence-based views to correct it (R4).
§ 4.2 EXPLORATION AND CORRECTION OF SENTENCES
After flagging multiple sentences (Figure 4), or after deciding to explore or modify a sentence, more detailed views can be shown for each sentence to explore and improve its translation interactively (Figure 2) (R4).
Over-translation is a common issue of NMT [20]. In the Attention View, it is possible to see what went wrong by identifying where the attention weights connect the source and destination words.
For both models, we notice such cases for very short sentences. For the German heading 'Beispiele' (en: 'Examples'), Figure 5 shows a translation that uses the translated word multiple times. The suggested alternatives also use this term more than once. Only after increasing the beam size to four does the correct translation become visible, which can then be selected as the correction.
More often in our document, only parts of a sentence are translated, and important words are not considered. Such under-translation is shown in Figure 6. In the first example, only the beginning of the sentence is translated, and it is visible that the remaining nodes have almost zero attention. In the second example, the German term 'zweisprachigen' (en: 'bilingual') is skipped in the translation. While this part of the translation is missing, the translated sentence is still correct and fluent; it might be difficult to detect such an error without such attention visualizations.
An example of a wrong translation containing a keyphrase is visualized in Figure 7. Here, it is also shown that, using the beam search visualization, it is possible to interactively select an alternative translation starting from the position where the first error occurs. The beam search provides possible alternative translations, but users can also manually type what they believe should be the next term. Here, we enter the correct translation manually. The beam search visualization automatically updates in real-time according to the correction.
Figure 6: Example of under-translation shown in the Attention View: (top) for the LSTM model the end of the sentence is not translated; attention weights are very low for this part of the sentence. (Bottom) for the Transformer architecture the term 'zweisprachigen' (en: 'bilingual') is not translated; attention weights are very low for this term.
Figure 7: Example of a mistranslated sentence containing the keyphrase 'MÜ' shown as beam search visualization: (top) suggested translation, suggested alternatives and custom correction; (bottom) updated translation tree for corrected keyword with new suggestions for continuing the sentence after the custom change.
Finally, it is also possible to change sentences without mistakes. Sometimes, sentences are correctly translated but use different words or sentence structures than the user would prefer, given the context of a sentence or their own style (Figure 8). Again, it is possible to explore and select alternative words or sentences with the Beam Search View. If we wanted to start the sentence with a different word, an alternative could be selected, and the remaining sentence would get updated accordingly.
After correcting and accepting multiple translation corrections, the Document View shows how a translation was changed (Figure 9).
§ 4.3 MODEL FINE-TUNING AND RETRANSLATION
After users have corrected multiple sentences, they can choose to retrain the current model for the not yet accepted sentences (R5). The model is then fine-tuned using the sentences corrected by the user, usually a small number. Afterward, the system retranslates the uncorrected sentences to improve translation quality, since the model has adapted based on the corrected sentences. Since our document contains 29 occurrences of the keyphrase 'MÜ' that are wrongly translated, we retrained our model after correcting only a few (fewer than five) of these terms to 'MT'. After retranslation, the Document View shows the difference of the translations compared to before. For both the LSTM and the Transformer model, all or almost all occurrences of 'MÜ' are now correctly translated. The user can look at the changes and accept translations or continue iteratively improving sentences and fine-tuning the model.
Figure 8: Correctly translated sentence is changed to another correct translation. 'SOS' is selected to show alternative beginnings for the sentence. After choosing an alternative the remaining sentence gets updated by another correct translation.
§ 4.4 ARCHITECTURE-SPECIFIC OBSERVATIONS
We initially designed our approach for use with an LSTM-based model with an attention mechanism. Since other architectures exist to translate documents, we also adapted it and tested its usefulness for the current state-of-the-art Transformer architecture [36] (R6). This architecture is also attention-based, and we analyzed how well it fits our interactive visualization approach. The general workflow of our system can be used in the same way as with the model we initially developed it for: the Document and Metrics Views can be used to identify sentences for further investigation, and sentences can be updated using the Sentence and Beam Search Views. The main difference of the Transformer model with respect to our approach is the attention mechanism, which influences the Attention View and some calculated metric values.
The Transformer architecture uses multiple layers with multiple self-attention heads instead of just attention between encoder and decoder. There are approaches for the visualization of this more complex attention mechanism [37, 38]. The attention values for Transformers could, for example, show different linguistic characteristics for different attention heads [8]. However, including this in our system would make our approach more complex and not useful for end-users (R7) with little knowledge about this architecture. As a simple workaround to apply our visualization, we discard the self-attention and only use the decoder attention. We explored the influence of decoder attention values from different layers, averaged across all attention heads. Similar to Rikters et al. [26], we noticed that averaging attention from all layers is not meaningful, since almost all source words are connected to all target words. Using one of the first layers showed similar results. For the final layer, a better alignment could be seen; however, the last source token received too much attention compared to other tokens. Instead, using the second-to-last layer showed a soft alignment between source and target words similar to the one available for the LSTM model. Therefore, we use this layer as a compromise in our Attention View and for the calculation of metric values.
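Extracting such a soft alignment from stored Transformer cross-attention can be sketched as follows; the tensor layout `(n_layers, n_heads, tgt_len, src_len)` and the toy dimensions are our assumptions:

```python
import numpy as np

def alignment_from_cross_attention(cross_attn, layer=-2):
    """cross_attn holds the decoder's attention over encoder states,
    shaped (n_layers, n_heads, tgt_len, src_len).  Self-attention is
    ignored; the chosen layer is averaged over its heads.  The default
    layer=-2 reflects the second-to-last layer discussed above."""
    return cross_attn[layer].mean(axis=0)    # (tgt_len, src_len)

# Toy tensor: 3 layers, 2 heads, 2 target x 2 source positions,
# normalized over the source axis like real attention weights.
rng = np.random.default_rng(1)
attn = rng.random((3, 2, 2, 2))
attn /= attn.sum(axis=-1, keepdims=True)
alignment = alignment_from_cross_attention(attn)
```

Because each head's rows sum to one, the head-averaged `alignment` is again a valid weight distribution over source tokens per target token, so it plugs directly into the same Attention View and metric computations used for the LSTM model.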
| # | Source | Translation |
|---|--------|-------------|
| 23 | Der Stand der MÜ im Jahr 2010 wurde von vielen Menschen als unbefriedigend bewertet. | Many people ~~have been considered unsatisfactory in~~ assessed the ~~context~~ state of the ~~MU~~ MT in 2010 as unsatisfactory. |
| 30 | Der Bedarf an MÜ-Anwendungen steigt weiter: | The need for ~~MUs~~ MT applications continues to rise: ✓ |
| 46 | Dies ist die älteste und einfachste MÜ-Methode, die beispielsweise auch obigem Russisch-Englisch-System zugrunde lag | This is the oldest and simplest MT method ~~that underlies~~, which was based, for example, on the ~~obigem~~ Russian-English system mentioned above ✓ |
| 48 | Die Transfer-Methode ist die klassische MÜ-Methode mit drei Schritten: | The transfer method is the classic ~~MU method~~ MT method with three steps: ✓ |
| 61 | Beispielbasierte MÜ | Examples based MT ✓ |

Figure 9: Document View showing corrected translations and changes to the initial machine-generated translations.
Since there are different approaches and architectures developed for NMT, we could incorporate them as well (R6). Some might provide better support in gaining insights into the model and offer different visualization and interaction capabilities. For others, new ways for visualization will have to be investigated.
§ 5 USER STUDY
We conducted an early user study during the development of our approach to evaluate our system's concept. We used a prototype with an LSTM translation model. The system had the same views as described before but limited features. A group of anonymous visualization and machine learning experts was invited to test our system online for general aspects related to visualization, interaction, and usefulness. Our goal was to make sure that we considered aspects relevant from both the visualization and the machine translation perspective in our system and to improve our approach. The user study was questionnaire-based, evaluating the effectiveness of the system, the understandability of the visualizations, and the usability of the interaction techniques. A 7-point Likert scale was used. In this study, the German Wikipedia article on autonomous driving (Autonomes Fahren) [40] was available to all participants. This allowed the participants to explore the phenomena we showed previously. The participants claimed to have good English (mean $= {5.1}$ , std. dev. $= {0.8}$ ) and very good German (mean $= {6.2}$ , std. dev. $= {1.7}$ ) knowledge. While the visualization experts claimed to have rather low knowledge about machine learning (mean: 2.5), the machine learning experts similarly indicated lower knowledge of visualization (mean: 3).
We recruited 11 voluntary participants from our university (six experts on visualization and five on language processing). First, participants were introduced to the system with a short overview of the features. Then, they could explore the system freely with no time restriction. Afterward, they were asked to participate in a survey regarding the usefulness of our system and its design choices. Additionally, there were free-text sections for further feedback.
Table 2: Ratings from our user study for each evaluated view on a 7-point Likert scale; mean and standard deviation values are provided.

| View | Effectiveness | Visualization | Interaction |
|------|---------------|---------------|-------------|
| Metrics View | 5.9 (1.1) | 6.8 (0.4) | 6.1 (0.7) |
| Keyphrase View | 4.4 (1.6) | 6.5 (1.2) | 6.3 (1.1) |
| Beam Search View | 5.6 (1.5) | 6.0 (1.3) | 4.5 (1.8) |
| Attention View | 5.6 (0.8) | 6.2 (1.2) | 5.9 (0.9) |
The general effectiveness of translating a large document containing more than 100 sentences with our approach was rated high (mean $= {5.6}$ , std. dev. $= {1.0}$ ) compared to a small document containing up to 20 sentences (mean $= {4.5}$ , std. dev. $= {1.6}$ ). The results for effectiveness, ease of understanding and intuitiveness of visualizations, and ease of interaction are given in Table 2. The ratings for the visualizations were high for all views. Best rated was the Metrics View, which additionally had the lowest standard deviation. Although not all our user study participants were visualization experts, we noticed that non-experts could also manage to understand and work with parallel coordinates plots. We conclude that our design choice for the visualization of metrics was appropriate. The ratings for interaction were also very high, but there was more variation. Especially the interaction for beam search was rated comparatively low and had the highest standard deviation; two language processing participants ranked it very low (1 and 2) and two (one from each participant group) very high (7). This variation might result from different learning curves in the two participant groups. Since we conducted the user study, we have also improved the interaction in this view. For effectiveness, the Keyphrase View had the lowest rating. We believe the reason is that participants were not able to detect enough mistranslated sentences with this view. However, this might be due to the document we provided and may differ for other documents containing more domain-specific vocabulary, as we showed in our case study.
In addition, we asked users for general feedback on our approach. Especially the Metrics View received positive feedback. Participants mentioned that it is useful for quickly detecting mistranslations through brushing and linking. For the Beam Search View, one participant noted that the alternatives provided would speed up the correction of translations. For one participant, the Attention View was useful in showing the differences in the sentence structure of different languages. Negative feedback was mostly related to interaction and specific features; some participants suggested new features. Multiple participants noted that the exploration and correction of long sentences are challenging in the Beam Search View, as the size of the viewport is limited. Furthermore, a feature to delete individual words and functionality for freezing areas were suggested. Based on the remaining feedback, we already included, for example, an undo function for the sentence views. Also, to find sentences that might contain similar errors, one participant recommended showing sentences similar to a selected sentence, and we added a respective metric. Additionally, it was mentioned that confidence scores could be shown in the document list next to each sentence and not only in the Metrics View. This would be helpful to quickly examine the confidence value even if the document is sorted by a different metric (e.g., document order); small histograms were added next to each sentence as a quick quality overview.
§ 6 DISCUSSION AND FUTURE WORK
To conclude, we present a visual analytics approach for exploring, understanding, and correcting translations created by NMT. Our approach supports users in translating large domain-specific documents with interactive visualizations in different views, and allows real-time sentence correction and model adaptation.
Our qualitative user study results showed that our visual analytics system was rated positively regarding effectiveness, interpretability of visualizations, and ease of interaction. The participants mastered the translation process well with our selected visualizations. In particular, our choice of parallel coordinate plots for visualizing multiple metrics and the related brushing-and-linking interaction techniques were rated positively. Participants clearly preferred our approach over a traditional text-based approach for translating large documents. Currently, users have to rely on metrics to decide with which sentence to start correcting the translations. More research is needed on better automatic detection of mistranslated sentences; for example, an additional machine learning model could be trained on sentences that were already identified as wrong translations.
We believe our system is useful for people who have to deal with large documents and could benefit from the features of interactive sentence correction and domain adaptation. Comparing the use of our approach for the LSTM and the Transformer architecture showed almost no difference; for both, we could successfully and interactively improve the translation quality of documents and see model-specific information. We argue that our general translation and visualization process can also be used with further models, although in such cases some visualization views might need limited adaptation.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ERb8dfghQKX/Initial_manuscript_md/Initial_manuscript.md
ADDED
## Library of Apollo: A Virtual Library Experience in your Web Browser
Figure 1: A wide-angle shot of the shelves. The user can scroll the shelves left and right, look around using their mouse and click & engage with the books, signage and panels.
## Abstract
Research libraries that house a large number of books organize, classify and lay out their collections using increasingly rigorous classification systems. These classification systems make relations between books physically explicit within the dense library shelves, and allow for fast, easy and high-quality serendipitous book discovery. We suggest that online libraries can offer the same browsing experience and serendipitous discovery that physical libraries do. To explore this, we introduce the Virtual Library, a virtual-reality experience which aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. Library of Apollo is an infinitely scrollable array of shelves holding 9.4 million books distributed over 200,000 hierarchical Library of Congress Classification categories, the most common library classification system in the U.S. Users can jump between books and categories through search, by clicking on the subject tags on books, and by navigating the classification tree with simple collapse and expand options. An online deployment of our system, together with user surveys and collected session data, showed a strong preference for our system for book discovery, with 41/45 users saying they are positively inclined to recommend it to a bookish friend.
Index Terms: Human-centered computing
## 1 INTRODUCTION
Book exploration is a fundamental part of every reader's life. Libraries often play a unique role in this exploratory process. Large collections are classified, organized and laid out through the use of continuously-updated, rigorous and systematic classification systems, such as the Library of Congress Classification (LCC) system in the U.S. Under the LCC system, books are uniquely classified into a tree of classifications, and are additionally tagged with one or more standardized subject headings.
These categorization systems make relationships between books physically explicit, putting each book in the context of every other book within the same and nearby shelves that have similar books, adjacent in subjects, authorship, time and geography [1,2]. This allows for natural, fast and high quality serendipitous book discoveries.
Online library interfaces are generally search-result based, and this takes away from the valuable serendipitous discoveries that are often made in physical libraries through browsing [3-5]. In this paper, we introduce a Virtual Library implementation, Library of Apollo, which aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. Library of Apollo is an infinitely scrollable array of bookshelves holding 9.4 million books distributed over 200,000 hierarchical Library of Congress Classification (LCC) categories. Users can jump between books and categories through targeted search, by clicking on the subject tags on books, and by navigating the classification tree with simple collapse and expand options.
The main contributions of this paper are two-fold. First, we contribute the design and implementation of the Virtual Library system, which replicates a physical library's collection outline and categorization features while adapting them to a web browser setting controlled via mouse and keyboard. We discuss the improvements we made to increase internal cross-connectivity, ease of use and navigability, along with the technical details necessary to serve a large and organized collection of books virtually. Second, we deployed this system publicly on the Internet at loapollo.com, soliciting survey feedback from visitors. Our analysis of the survey data and server logs reveals changing behaviours around physical libraries and bookstores due to the ongoing COVID-19 pandemic, and visitors' preferences for using the Virtual Library system.
## 2 RELATED WORK
### 2.1 Library of Congress Classification System
Different classification systems are used for different purposes in different parts of the world. As we have focused on large collections such as those hosted by research and academic libraries, we decided to use the subject-based, hierarchical Library of Congress Classification (LCC) system [1], currently one of the most widely used library classification systems in the world [6].
Figure 2: Search results for ’a tale of two cities’, showing both shelf and book results. (A) Shelf results list LCC classes that are search-hits, while book results are hits for titles, authors and subject headings of books. The terms that led to the search-hits are highlighted. (B) Hovering over the 'Under Shelf' portion of book results expands the shelf hierarchy.
LCC divides all knowledge into twenty-one basic classes, each identified by a single letter. These are then further divided into more specific subclasses, identified by two or three letter combinations (class N, Art, has subclasses NA, Architecture; NB, Sculpture etc.). Each subclass includes a loosely hierarchical arrangement of the topics pertinent to the subclass, going from the general to the more specific, often also specifying form, place and time [1,6]. LCC grows with human knowledge and is updated regularly by the Library of Congress [6].
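To make this hierarchical arrangement concrete, the outline can be modeled as a small tree keyed by class codes. The following is a minimal sketch; the class entries are abbreviated examples from the LCC outline, not the full system:

```python
# A minimal sketch of an LCC-style hierarchy: single-letter classes,
# two-letter subclasses, then increasingly specific topics. The entries
# below are abbreviated examples, not the full LCC outline.

lcc_tree = {
    "N": {"name": "Art", "children": {
        "NA": {"name": "Architecture", "children": {}},
        "NB": {"name": "Sculpture", "children": {}},
    }},
    "P": {"name": "Language and Literature", "children": {
        "PR": {"name": "English literature", "children": {}},
    }},
}

def path_for(code: str) -> list[str]:
    """Walk from the top-level class down to `code`, collecting names."""
    node, names = lcc_tree, []
    for prefix_len in range(1, len(code) + 1):
        prefix = code[:prefix_len]
        if prefix in node:
            names.append(node[prefix]["name"])
            node = node[prefix]["children"]
    return names

print(path_for("NB"))  # ['Art', 'Sculpture']
```

Walking such prefix paths is also what makes a call number like NA5471 resolvable to a readable shelf hierarchy.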
### 2.2 Book Browsing and Serendipity in Libraries
Information seeking in physical libraries takes two forms: search of known items and browsing. Patrons may begin with a search of a known item, but through undirected browsing of nearby items, often discover new and unknown books serendipitously [7]. Even though patrons could conceivably find interesting items in completely disarranged stacks, libraries aim to ease this fundamental browsing process by cataloguing, classifying and shelving books according to some classification system [2], putting like near like, deemed so through their topics, subtopics, authorship, form, place and time.
Library shelves are meticulously organized to encourage serendipity [7,8] and call numbers themselves are markers that indicate the semantic content of shelved items [9]. There is some quantified evidence about the neighbour effects created by the Dewey Decimal and Library of Congress Classification systems: a nearby book being loaned increases the probability of a subsequent loan by at least 9-fold [10], indicating that browsing is happening regularly and effectively within these classification systems. There is also strong spoken and anecdotal preference for physical library shelf browsing [3,11], and users bemoan the lack of opportunity to do so online [3-5].
More recently, McKay et al. detailed the actions users take during library browsing (reading signs, scanning, and the number of books, shelves and bays touched and examined), indicating an idiosyncratic browsing behaviour by each user as well as a high success rate, with 87% of users leaving their browsing session with one or more books [12]. These results suggest a unique and creative character to browsing within the general scope of information retrieval.
### 2.3 Book Browsing and Serendipity in the Digital Age
Even though libraries have been amongst the earliest adopters of computers, digitization and online access [13], and there has been a clearly expressed sentiment favoring browsing [3-5,11], there has been little commensurate effort to carry the physical shelf browsing experience to online libraries. Online library interfaces are almost universally search-result based, and we know the vast majority of users only see up to ten search results [14], and only interact with one or two in any depth for further information [15]. These small numbers are hardly conducive to browsing and serendipitous discovery. The recommender systems used by online vendors are generally good at discovering popular books, popular tastes and genres, while they often fail to provide the novel ranges of selections necessary for browsing and serendipity [16].
There is some novel recent work that has been developed to facilitate serendipitous browsing in libraries [17, 18]. The Bohemian Bookshelf [17] aimed to encourage serendipity by creating five interlinked visualizations over its collection, based on books' covers, sizes, keywords, authorships and timelines. This playful approach was deployed on a touch kiosk in a library and was received very positively, but the collection size used for implementation and test was limited to only 250 books, a number too small to be indicative of performance over larger collections where some of the employed visualizations could become too cluttered and complex to navigate.
The Blended Shelf [18] carried the physical shelves over to very large touch displays that offered 3D visualizations of a library collection of 2.1 million books, conforming to the standard classification used by that library. The 3D shelves could be dragged by swipe, searched and reordered, and the classifications could be navigated via a breadcrumb-cued input mechanism. However, no deployments, tests or user studies regarding their implementation have been reported.
We have taken a reality-based presentation approach, creating a 3D online library of 9.4 million books on infinitely scrollable shelves that conform to the LCC ordering, and that is also navigable through search, cross-connected subjects, and a navigable classification tree.
## 3 DESIGN AND IMPLEMENTATION
We have regarded the strongly expressed preference for physical shelf browsing [3-5,11], together with the long-cultivated art of library classification systems [1], as the guiding principles of our design process, and aimed to bring the physical library experience online while making improvements to enhance the connectedness and navigability of the served online collection.
### 3.1 Designing to Faithfully Recreate Physical Libraries
We had two conflicting design interests. First, we wanted to design and deploy a virtual library application at scale, available to everyone in a web browser. Second, we also wanted it to be as close to a physical library experience as possible. Naïvely optimizing for the latter would have meant building an online replica of a huge library in 3D, navigated by mouse and keyboard through some combination of user motion and teleportation. However, that would have presented an alien set of interactions to most people, and would likely have been hard to navigate and possibly even clunky.
Figure 3: (A) Clicks on shelved books bring up synoptic book panels. The elements in the subject heading table are clickable. (B) The search-results after the “church architecture” subject heading was clicked. The hits on subject headings are highlighted with shades of green.
Instead, we extracted the most crucial part of the book browsing experience from the physical library: the shelves. The user, essentially the camera, is placed in front of a set of floating shelves, and is allowed 20 degrees of camera rotation left and right, and 10 degrees up and down, achievable with a mouse or a trackpad. A wide-angle view of the shelves is seen on Figure 1. The user can "scroll" the shelves left and right as slowly or as quickly as they want via the regular scroll gesture on their trackpads, scroll-wheels on mice, or arrow keys on keyboards (Video Figure).
Figure 4: (A) The scroll-view of class hierarchies. Hierarchies can be collapsed by clicking any of the intermediate elements. (B) The collapsed hierarchies after the click at A. The hierarchies can be scrolled up and down, or expanded by clicking their tag panels if they contain child categories.
Books on the virtual shelves are arranged in order according to their LCC classifications, just as they would be in a real library. There are also panels above the shelves showing LCC hierarchies, indicating where the user is within the library. The panels in the center indicate the user's current location, while the left and right panels respectively indicate the previous and next classifications, and thus the content of the shelves that await the user in those directions. A click on the left or right panels scrolls the shelves to bring those classes in front of the user. A click on the center panels opens the LCC classification hierarchy as seen on Figure 4. This list can be navigated by scrolling up and down as well as by clicks to expand and collapse it. Clicking on the end portion of these hierarchies brings that class and its associated books in front of the user.
The user can start typing anytime they want to bring up the search bar, which can be used to search over titles, authors, subjects and LCC classifications. If any book or classification is selected through the search results, the user is transported to the shelf containing this selected book. The books are also sized according to their real dimensions, page numbers and volumes, as seen on Figure 1.
### 3.2 Building a Familiar and Expressive Search Bar
Single input search bars are ubiquitous and users are intimately familiar with them. We utilized the search bar as an entry point into our library by designing it as a catch-all input system that searches over book titles, authors, subject headings of books, and LCC categories. Our search provides a way for users to enter into the hierarchically organized stacks by helping them first with identifying a relevant pool of items they might be interested in.
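The catch-all behaviour can be sketched as a search over the concatenation of a book's indexed fields. This is an illustrative in-memory approximation with made-up example records; the deployed system delegates indexing and querying to AWS Cloudsearch rather than anything like this:

```python
# A minimal sketch of a catch-all search over several indexed fields.
# The records and field names are illustrative examples only.

books = [
    {"title": "A Tale of Two Cities", "author": "Charles Dickens",
     "subjects": ["France", "Historical fiction"], "lcc": "PR4571"},
    {"title": "Oliver Twist", "author": "Charles Dickens",
     "subjects": ["Orphans", "London (England)"], "lcc": "PR4567"},
]

def search(query: str) -> list[dict]:
    """Return books whose title, author, subjects, or LCC class match all terms."""
    terms = query.lower().split()
    hits = []
    for book in books:
        haystack = " ".join(
            [book["title"], book["author"], book["lcc"], *book["subjects"]]
        ).lower()
        if all(term in haystack for term in terms):
            hits.append(book)
    return hits

print([b["title"] for b in search("dickens orphans")])  # ['Oliver Twist']
```

Because every field lands in one haystack, a single input box can match titles, authors, subject headings and call numbers alike.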
Figure 2 shows shelf (class) and book search hits for the query 'a tale of two cities'. Shelf results show the LCC hierarchy from top to bottom, together with the class numbers and the number of books listed under each class. Book results list titles, authors and publishers, together with a table showing the subject headings listed for each book. To the right of each book result is the shelf (class) under which that book can be found; hovering over that section expands the shelf hierarchy, as can be seen on the right side of Figure 2. The 'under shelf' portions are always colored to clearly designate shelf hierarchies. The search results are not paginated, and can be "infinitely" scrolled downwards until the end of the results.
### 3.3 Engineering a Smooth Browsing Experience
Clicking on a search result, both for books and shelves, zooms the user into a set of floating shelves (Figure 1). The searched-for book appears on the centre shelf and is briefly highlighted as an indicator of its position. The books on shelves conform to LCC sorting and appear as they would in a regular library. One other key feature we have added is browsing over the LCC classification hierarchy, as seen on Figure 4. This scroll-view of classification hierarchies is opened by clicking on the centered LCC panels. A listed hierarchy can be expanded or collapsed by clicking on different parts of the hierarchy. Clicking on any panel within a hierarchy collapses that hierarchy down to that panel, reducing the number of hierarchies in the scroll-view. Clicking on the name-tag panel of a hierarchy expands that hierarchy. Clicking on the last panel of a hierarchy transports the user to the beginning of that hierarchical class, with the first book of that class highlighted on the shelves.
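The collapse interaction above can be sketched by treating a listed hierarchy as a path of panels, where clicking panel i trims the display to that depth. The panel names below are illustrative:

```python
# A minimal sketch of the collapse interaction: a listed hierarchy is a
# path of panels, and clicking a panel keeps only the path up to and
# including it. The panel names are illustrative examples.

hierarchy = ["Language and Literature", "English literature",
             "Individual authors", "Dickens, Charles", "Separate works"]

def collapse_to(panels: list[str], clicked_index: int) -> list[str]:
    """Return the hierarchy collapsed down to the clicked panel."""
    return panels[: clicked_index + 1]

print(collapse_to(hierarchy, 1))
# ['Language and Literature', 'English literature']
```

Expansion is the inverse: re-attaching the stored child path below the clicked name-tag panel.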
Another addition to the browsing experience is through the improved connectivity that comes through search over subject headings of books. Book clicks bring up synoptic windows as seen on part 1 of Figure 3. The colored subject tables are filled with Library of Congress Subject Headings (LCSH), which we have populated for every book from a dataset maintained by domain experts at the Library of Congress [1]. These LCSH headings are clickable and trigger a search over the entire dataset's indexed LCSH fields. Part 2 of Figure 3 shows search results for an LCSH field search; notice that these books can come from entirely different shelves and classes, often very distant from each other, thus providing another way to jump between classes and books. This allows users to transcend the distance imposed by LCC in physical libraries through a subject-based search method, which allows for dynamic links between books. We believe this feature goes a long way toward satisfying the desire expressed by users to occasionally see distant books [5].
### 3.4 Providing Access to Digitized and Physical Books
Another key feature of real-world libraries is in-depth browsing of individual books: a library-goer can pick up any book and read until their curiosity is sated. In order to provide a similar experience, when a user clicks on a shelved book, we provide a single-page overview. Part 1 of Figure 3 shows an example of this page, which displays the book's title, authorship, publishing house, colored subject headings, physical information regarding extent and dimensions, and links to look up the same book on Amazon and Goodreads, as well as a Peek Inside button that is enabled when there is a free, digitized version available on OpenLibrary (which houses over 3 million books, or around 30% of the total collection). A single click on this button opens a new tab with an online reader showing the contents of the book, as seen on Figure 5, providing an in-depth reading experience. The Amazon and Goodreads buttons provide easy access to purchase and social reading options, respectively.
Figure 5: The "Peek Inside" button on book panels opens an e-reader that uses OpenLibrary's digitized book archive.
Figure 6: The cloud architecture of Library of Apollo.
## 4 ONLINE DEPLOYMENT AND SURVEY RESULTS
### 4.1 The Front-end and the Cloud Architecture
We have developed the front-end of our application using Unity WebGL. The compiled WebAssembly, Javascript and HTML files are hosted on the AWS S3 service through the CDN service of AWS, Cloudfront. The books are sorted according to the LCC sorting scheme [1,6] and clustered into JSON files that contain 500 books each, gzipped and stored in S3. These data files are fetched when the user is browsing the shelves, and pre-fetched during scrolling when the user is near the end of a cluster.
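The clustering and pre-fetching scheme can be sketched as follows. The cluster size matches the 500-book clusters described above, while the pre-fetch margin is an illustrative assumption; the paper does not state the exact threshold used:

```python
# A minimal sketch of fixed-size clustering with near-edge prefetching.
# CLUSTER_SIZE matches the deployment's 500-book clusters; the prefetch
# margin below is an illustrative assumption, not the deployed value.

CLUSTER_SIZE = 500
PREFETCH_MARGIN = 50  # start prefetching this many books before an edge

def cluster_id(book_index: int) -> int:
    """Which JSON cluster file holds the book at this shelf position."""
    return book_index // CLUSTER_SIZE

def clusters_to_fetch(book_index: int) -> set[int]:
    """The current cluster, plus a neighbour if the user is near its edge."""
    current = cluster_id(book_index)
    wanted = {current}
    offset = book_index % CLUSTER_SIZE
    if offset < PREFETCH_MARGIN and current > 0:
        wanted.add(current - 1)      # approaching the left edge
    if offset >= CLUSTER_SIZE - PREFETCH_MARGIN:
        wanted.add(current + 1)      # approaching the right edge
    return wanted

print(sorted(clusters_to_fetch(499)))  # [0, 1]
```

Fetching the neighbouring cluster before the user reaches the boundary is what keeps the scroll continuous despite the data living in separate gzipped files.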
The same data that is stored in S3 is indexed on AWS Cloudsearch to power the search functionalities of our app. All search requests are performed through the REST API we have developed on AWS API Gateway. Search results are populated through the data returned from Cloudsearch. For analytics, user click-data is also recorded on DynamoDB through the same gateway. The search requests to OpenLibrary's digitized books dataset are made in a similar fashion. Our architecture is summarized on Figure 6.
### 4.2 The Deployed Dataset
Courtesy of the Library of Congress [19], through their MARC Open-Access program, we were able to put together 9.4 million books, complete with their LCSH and LCC data, distributed over 200,000 LCC classes. We pre-processed this data to serve our specific needs, and stored and indexed it for use within our virtual library application as described above. Despite the size of the dataset, the website is very smooth to use and inexpensive to operate: monthly operational costs are less than \$40 USD.
### 4.3 Deployment and Recruitment
We deployed our library at a publicly accessible address, loapollo.com, and seeded links to the project via Reddit and word of mouth. Over a span of two weeks, more than two hundred unique users visited the site. Each visitor to the library was assigned a randomly generated persistent user ID, to track repeated session visits, as well as a session ID for any given visit. The Library front-end automatically logged click interactions with the system in DynamoDB for later introspection.
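The ID scheme above can be sketched as follows. This is an illustrative approximation: the dictionary stands in for the browser's persistent storage, and the event fields are assumptions rather than the deployed log schema:

```python
# A minimal sketch of the ID scheme: a random persistent user ID reused
# across visits, plus a fresh session ID per visit. The dict stands in
# for browser-side persistent storage; event fields are illustrative.

import time
import uuid

def get_or_create_user_id(storage: dict) -> str:
    """Reuse the persistent ID if present, otherwise mint a new one."""
    if "user_id" not in storage:
        storage["user_id"] = str(uuid.uuid4())
    return storage["user_id"]

def log_click(storage: dict, session_id: str, target: str) -> dict:
    """Build one click-event record for the interaction log."""
    return {
        "user_id": get_or_create_user_id(storage),
        "session_id": session_id,
        "target": target,
        "timestamp": time.time(),
    }

local_storage = {}           # persists across visits in a real browser
session = str(uuid.uuid4())  # regenerated for each visit
event = log_click(local_storage, session, "book:PR4571")
```

Because the user ID persists while the session ID does not, repeat visits by the same visitor can be counted without any login.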
Of the 224 unique visitors, 21 (9.4%) visited more than once, with one user visiting the site a total of five times over the course of two weeks. Users interacted for an average of 162 seconds, and produced an average of 8 click interactions; notably, however, both distributions are particularly long-tailed. While some users only visited for very brief moments (often just a single search, followed presumably by some scrolling), 12 users used it for over ten minutes each, and likewise 20 users generated over 20 click events each.
Figure 7: The change in reading habits due to COVID-19 pandemic.
### 4.4 Survey
When users clicked on any book, a survey link was displayed in the upper-right corner, ensuring that users had interacted with the system to at least a minimal extent prior to taking the survey. Users who accessed the link were invited to fill out a short survey with two major parts: a first section which asked about their existing (pre- and post-pandemic) book browsing habits, and a second section which asked about their experiences with the library. Finally, users could provide open-ended feedback about their experience with the Library. The full survey design can be found in the Supplemental Materials.
We received a total of 46 survey responses. Two responses were discarded: one filled out '1's for every single question, and another was a clear duplicate submission. Participants reported reading an average of 13.4 books per year (SD 15.2; reported totals ranged from 1 to 80 books per year).
In the first part of the survey (Figure 7), the participants did not report significantly changing their reading habits (mean 2.9), but evidently had a significant shift in book-browsing habits due to the pandemic. Library usage decreased significantly (pre-pandemic mean 2.36, post-pandemic mean 1.54), as did bookshop usage (pre-pandemic mean 2.98, post-pandemic mean 1.45), whereas online browsing significantly increased (pre-pandemic mean 3.14, post-pandemic mean 3.86).
In the second part of the survey (Figure 8), the users generally expressed a strong preference for the Library, with over 80% (36/44) of users being "somewhat likely" or more to come back, and over 90% (41/44) being "somewhat likely" or more to recommend it to a friend. Users generally felt that it did replicate the feel of real libraries and bookshops to some extent, with over half (25/44) noting that it felt similar or very similar. Similarly, around half of participants found it "easier" or "much easier" to find books compared to bookshops and libraries (23/44) and online destinations (21/44), and felt that it contained "more" or "significantly more" new and interesting books compared to bookshops and libraries (23/44) and online destinations (24/44). Overall, survey respondents were very positive about the library and its major features.
## 5 DISCUSSION
Through our online deployment and survey, we were able to gather valuable feedback from readers and book browsers. In the open-ended feedback field, users provided several useful insights.
Figure 8: The user perception of the library's navigability, ease of use, browsing quality and overall quality.
User Interface: Three users wished for a mobile version in the feedback, with one noting "Mobile version must be created and [published] on App stores immediately", indicating the modern importance of smartphone-friendly interfaces. Indeed, although our interface does work on some mobile browsers, it does not on many older browsers due to a lack of WebGL support. We expect this limitation will improve in the future as newer devices generally do support WebGL. Two users commented that they would have liked to change the colour scheme: one user remarked that it was too dark, while another user wanted a "night mode" to make it even darker.
Search Performance: Our project primarily focused on book browsing rather than search, and consequently our search feature is much simpler than, e.g., Google's or Amazon's search functionality. Four users noted that search could be improved: providing better search capabilities (such as advanced search by author/title/subject fields), improving the presentation of results (e.g., moving favorite or recommended books to the top), improving the discoverability of category or tag search, and ordering works by popularity or relevance. Recommendations, in particular, are interesting as they must function differently in a public library setting (with limited or no access to prior user data) compared with companies like Amazon, which have vast access to prior user preferences via tracking and search history.
Browsing Capabilities: Users generally praised the browsing and search features of the Library. One user noted that they were able to find "5 books on music theory in 5 minutes", while another noted that "It has everything I am looking for". Users also had suggestions for improving the experience further: one commented that they would have liked even more physicality in the form of different rooms/areas to browse around in, while another suggested a downloadable version for organizing their own books.
## 6 CONCLUSION
We have presented the design of the Library of Apollo, a virtual-reality library implemented in a web-browser application and designed to support library-like browsing and discovery. Our system was deployed publicly and attracted over two hundred visitors and 44 survey responses; our survey results suggested high user affinity for the book-browsing experience and library capabilities. Through the combination of features that we designed, we have built a virtual library which lends itself to serendipitous browsing and discoveries across a large, connected book dataset.
## ACKNOWLEDGMENTS
Anonymous for review.
## REFERENCES
[1] Lois Mai Chan, Sheila S. Intner, and Jean Weihs. Guide to the Library of Congress Classification. ABC-CLIO, 2016.

[2] Jim LeBlanc. Classification and shelflisting as value added: Some remarks on the relative worth and price of predictability, serendipity, and depth of access. Library Resources & Technical Services, 39(3):294-302, 1995.

[3] Dana McKay. Gotta keep 'em separated: Why the single search box may not be right for libraries. In Proceedings of the 12th Annual Conference of the New Zealand Chapter of the ACM Special Interest Group on Computer-Human Interaction, pages 109-112, 2011.

[4] Stephann Makri, Ann Blandford, Jeremy Gow, Jon Rimmer, Claire Warwick, and George Buchanan. A library or just another information resource? A case study of users' mental models of traditional and digital libraries. Journal of the American Society for Information Science and Technology, 58(3):433-445, 2007.

[5] Annika Hinze, Dana McKay, Nicholas Vanderschantz, Claire Timpany, and Sally Jo Cunningham. Book selection behavior in the physical library: Implications for ebook collections. In Proceedings of the 12th ACM/IEEE-CS Joint Conference on Digital Libraries, pages 305-314, 2012.

[6] Library of Congress Classification. https://www.loc.gov/catdir/cpso/lcc.html. Accessed: 2021-04-06.

[7] Elizabeth B. Cooksey. Too important to be left to chance: Serendipity and the digital library. Science & Technology Libraries, 25(1-2):23-32, 2004.

[8] Geoffrey C. Bowker and Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. MIT Press, 2000.

[9] Elaine Svenonius. The Intellectual Foundation of Information Organization. MIT Press, 2000.

[10] Dana McKay, Wally Smith, and Shanton Chang. Lend me some sugar: Borrowing rates of neighbouring books as evidence for browsing. In IEEE/ACM Joint Conference on Digital Libraries, pages 145-154. IEEE, 2014.

[11] Hanna Stelmaszewska and Ann Blandford. From physical to digital: A case study of computer scientists' behaviour in physical libraries. International Journal on Digital Libraries, 4(2):82-92, 2004.

[12] Dana McKay, Shanton Chang, and Wally Smith. Manoeuvres in the dark: Design implications of the physical mechanics of library shelf browsing. In Proceedings of the 2017 Conference on Human Information Interaction and Retrieval, pages 47-56, 2017.

[13] Shiao-Feng Su. Dialogue with an OPAC: How visionary was Swanson in 1964? The Library Quarterly, 64(2):130-161, 1994.

[14] Amanda Spink, Dietmar Wolfram, Major B. J. Jansen, and Tefko Saracevic. Searching the web: The public and their queries. Journal of the American Society for Information Science and Technology, 52(3):226-234, 2001.

[15] Micheline Hancock-Beaulieu. Evaluating the impact of an online library catalogue on subject searching behaviour at the catalogue and at the shelves. Journal of Documentation, 1990.

[16] Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, and John T. Riedl. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS), 22(1):5-53, 2004.

[17] Alice Thudt, Uta Hinrichs, and Sheelagh Carpendale. The Bohemian Bookshelf: Supporting serendipitous book discoveries through information visualization. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1461-1470, 2012.

[18] Eike Kleiner, Roman Rädle, and Harald Reiterer. Blended Shelf: Reality-based presentation and exploration of library collections. In CHI '13 Extended Abstracts on Human Factors in Computing Systems, pages 577-582, 2013.

[19] Library of Congress. MARC Open-Access, MARC Distribution Services, MDSConnect, 2021. Data retrieved from Library of Congress MARC Distribution Services, https://www.loc.gov/cds/products/marcDist.php.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ERb8dfghQKX/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,151 @@
§ LIBRARY OF APOLLO: A VIRTUAL LIBRARY EXPERIENCE IN YOUR WEB BROWSER

<graphics>

Figure 1: A wide-angle shot of the shelves. The user can scroll the shelves left and right, look around using their mouse and click and engage with the books, signage and panels.

§ ABSTRACT
Research libraries that house large numbers of books organize, classify and lay out their collections using increasingly rigorous classification systems. These classification systems make relations between books physically explicit within the dense library shelves, and allow for fast, easy and high-quality serendipitous book discovery. We suggest that online libraries can offer the same browsing experience and serendipitous discovery that physical libraries do. To explore this, we introduce the Virtual Library, a virtual-reality experience which aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. Library of Apollo is an infinitely scrollable array of shelves that holds 9.4 million books distributed over 200,000 hierarchical Library of Congress Classification categories, the most common library classification system in the U.S. Users can jump between books and categories by search, by clicking the subject tags on books, and by navigating the classification tree with simple collapse and expand options. An online deployment of our system, together with user surveys and collected session data, showed a strong user preference for our system for book discovery, with 41/44 of users positively inclined to recommend it to a bookish friend.
Index Terms: Human-centered computing
§ 1 INTRODUCTION

Book exploration is a fundamental part of every reader's life. Libraries often play a unique role in this exploratory process. Large collections are classified, organized and laid out through the use of continuously updated, rigorous and systematic classification systems, such as the Library of Congress Classification (LCC) system in the U.S. Under the LCC system, books are uniquely classified into a tree of classifications, and are additionally tagged with one or more standardized subject headings.
These categorization systems make relationships between books physically explicit, placing each book in the context of similar books on the same and nearby shelves, adjacent in subject, authorship, time and geography [1, 2]. This allows for natural, fast and high-quality serendipitous book discoveries.

Online library interfaces are generally search-result based, and this takes away from the valuable serendipitous discoveries that are often made in physical libraries through browsing [3-5]. In this paper, we introduce a Virtual Library implementation, Library of Apollo, which aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. Library of Apollo is an infinitely scrollable array of bookshelves that holds 9.4 million books distributed over 200,000 hierarchical Library of Congress Classification (LCC) categories. Users can jump between books and categories by targeted search, by clicking the subject tags on books, and by navigating the classification tree with simple collapse and expand options.

The main contributions of this paper are two-fold. First, we contribute the design and implementation of the Virtual Library system, which replicates a physical library's collection outline and categorization features while adapting them to a web browser controlled via mouse and keyboard. We discuss the improvements we made to increase internal cross-connectivity, ease of use and navigability, along with the technical details necessary to serve a large, organized collection of books virtually. Second, we deployed this system publicly on the Internet at loapollo.com, soliciting survey feedback from visitors. Our analysis of the survey data and server logs reveals changing behaviours around physical libraries and bookstores due to the ongoing COVID-19 pandemic, and visitors' preferences for using the Virtual Library system.
§ 2 RELATED WORK

§ 2.1 LIBRARY OF CONGRESS CLASSIFICATION SYSTEM

Different classification systems are used for different purposes in different parts of the world; as we have focused on large collections such as those hosted by research and academic libraries, we decided to use the subject-based, hierarchical Library of Congress Classification (LCC) system [1], currently one of the most widely used library classification systems in the world [6].
<graphics>

Figure 2: Search results for 'a tale of two cities', showing both shelf and book results. (A) Shelf results list LCC classes that are search hits, while book results are hits for titles, authors and subject headings of books. The terms that led to the search hits are highlighted. (B) Hovering over the 'Under Shelf' portion of book results expands the shelf hierarchy.

LCC divides all knowledge into twenty-one basic classes, each identified by a single letter. These are then further divided into more specific subclasses, identified by two- or three-letter combinations (class N, Art, has subclasses NA, Architecture; NB, Sculpture; etc.). Each subclass includes a loosely hierarchical arrangement of the topics pertinent to the subclass, going from the general to the more specific, often also specifying form, place and time [1, 6]. LCC grows with human knowledge and is updated regularly by the Library of Congress [6].
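To make the shelf ordering implied by this scheme concrete, the sketch below parses a simplified LCC call number (class letters, class number, first cutter) into a tuple that sorts in shelf order. This is an illustration only, not the paper's implementation: real LCC sorting has many more components (dates, multiple cutters, volume numbers) and decimal cutter comparison, which this toy regex ignores.

```python
import re

# Hypothetical sketch: turn a call number such as "NA2500 .B87" into a
# tuple that sorts in shelf order (class letters, class number, cutter).
CALL_RE = re.compile(
    r"^(?P<cls>[A-Z]{1,3})"          # class/subclass letters, e.g. "NA"
    r"\s*(?P<num>\d+(?:\.\d+)?)?"    # class number, e.g. "2500" or "76.73"
    r"\s*\.?(?P<cut>[A-Z]\d+)?"      # first cutter, e.g. "B87"
)

def lcc_sort_key(call_number: str):
    m = CALL_RE.match(call_number.strip().upper())
    if not m:
        return ("~", 0.0, "")        # unparsable entries sort last
    num = float(m.group("num")) if m.group("num") else 0.0
    return (m.group("cls"), num, m.group("cut") or "")

shelf = ["NB60 .C45", "NA2500 .B87", "N7432", "NA737 .W7"]
# classes sort before their subclasses: N, then NA..., then NB
print(sorted(shelf, key=lcc_sort_key))
```

This key makes the "general to specific" progression of the text explicit: class N (Art) shelves before its subclasses NA and NB, and within a subclass the numeric range orders topics.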
§ 2.2 BOOK BROWSING AND SERENDIPITY IN LIBRARIES

Information seeking in physical libraries takes two forms: search for known items and browsing. Patrons may begin with a search for a known item, but through undirected browsing of nearby items often discover new and unknown books serendipitously [7]. Even though patrons could conceivably find interesting items in completely disarranged stacks, libraries aim to ease this fundamental browsing process by cataloguing, classifying and shelving books according to some classification system [2], putting like near like, deemed so through their topics, subtopics, authorship, form, place and time.

Library shelves are meticulously organized to encourage serendipity [7, 8], and call numbers themselves are markers that indicate the semantic content of shelved items [9]. There is some quantified evidence about the neighbour effects created by the Dewey Decimal and Library of Congress Classification systems: a nearby book being loaned increases the probability of a subsequent loan by at least 9-fold [10], indicating that browsing happens regularly and effectively within these classification systems. There is also strong spoken and anecdotal preference for physical library shelf browsing [3, 11], and users bemoan the lack of opportunity to do so online [3-5].
More recently, McKay et al. detailed the actions users take during library browsing, such as reading signs, scanning, and the number of books, shelves and bays touched and examined, indicating idiosyncratic browsing behaviour by each user as well as a high success rate, with 87% of users leaving their browsing session with one or more books [12]. These results suggest a unique and creative character to browsing within the general scope of information retrieval.
§ 2.3 BOOK BROWSING AND SERENDIPITY IN THE DIGITAL AGE

Even though libraries have been amongst the earliest adopters of computers, digitization and online access [13], and there has been a clearly expressed sentiment favouring browsing [3-5, 11], there has been little commensurate effort to carry the physical shelf-browsing experience to online libraries. Online library interfaces are almost universally search-result based, and the vast majority of users only see up to ten search results [14] and only interact with one or two in any depth for further information [15]. These small numbers are hardly conducive to browsing and serendipitous discovery. The recommender systems used by online vendors are generally good at surfacing popular books, tastes and genres, but often fail to provide the novel range of selections necessary for browsing and serendipity [16].
Some recent work has aimed to facilitate serendipitous browsing in libraries [17, 18]. The Bohemian Bookshelf [17] encouraged serendipity by creating five interlinked visualizations over its collection, based on books' covers, sizes, keywords, authorships and timelines. This playful approach was deployed on a touch kiosk in a library and was received very positively, but the collection used for implementation and testing was limited to only 250 books, a number too small to be indicative of performance over larger collections, where some of the employed visualizations could become too cluttered and complex to navigate.

The Blended Shelf [18] carried the physical shelves over to very large touch displays offering 3D visualizations of a library collection of 2.1 million books, conforming to the standard classification used by that library. The 3D shelves can be dragged by swipe, searched and reordered, and the classifications can be navigated via a breadcrumb-cued input mechanism. However, no deployments, tests or user studies of the implementation have been reported.

We have taken a reality-based presentation approach, creating a 3D online library of 9.4 million books on infinitely scrollable shelves that conform to the LCC ordering, which is also navigable through search, cross-connected subjects and a browsable classification tree.
§ 3 DESIGN AND IMPLEMENTATION

We have regarded the strongly expressed preference for physical shelf browsing [3-5, 11], together with the long-cultivated art of library classification systems [1], as the guiding principles of our design process, and aimed to bring the physical library experience online while making improvements to enhance the connectedness and navigability of the served online collection.
§ 3.1 DESIGNING TO FAITHFULLY RECREATE PHYSICAL LIBRARIES

We had two conflicting design interests. First, we wanted to design and deploy a virtual library application at scale, available to everyone in web browsers. Second, we wanted it to be as close to a physical library experience as possible. Naïvely optimizing for the latter would have meant building an online replica of a huge library in 3D, which would then be navigated by mouse and keyboard through some combination of user motion and teleportation. However, that would have been an alien set of interactions to most people, hard to navigate and possibly even clunky.
<graphics>

Figure 3: (A) Clicks on shelved books bring up synoptic book panels. The elements in the subject heading table are clickable. (B) The search results after the "church architecture" subject heading was clicked. The hits on subject headings are highlighted with shades of green.

Instead, we extracted the most crucial part of the book-browsing experience from the physical library: the shelves. The user, essentially the camera, is placed in front of a set of floating shelves, and is allowed 20 degrees of camera rotation left and right and 10 degrees up and down, achievable with a mouse or a trackpad. A wide-angle view of the shelves is shown in Figure 1. The user can "scroll" the shelves left and right as slowly or as quickly as they want via the regular scroll gesture on their trackpads, scroll wheels on mice or arrow keys on keyboards (Video Figure).
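The constrained look-around described above amounts to accumulating mouse deltas into yaw/pitch angles and clamping them. The sketch below illustrates this under our reading of the limits (±20° yaw, ±10° pitch); the actual Unity code is not published, and `SENSITIVITY` is an assumed tuning constant.

```python
# Minimal sketch of the camera constraint: mouse movement rotates the
# camera, but yaw is clamped to +/-20 degrees and pitch to +/-10 degrees
# (our assumption about how the stated limits are applied).
SENSITIVITY = 0.1  # assumed: degrees of rotation per pixel of mouse movement

def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def update_camera(yaw: float, pitch: float, dx: float, dy: float):
    """Apply a mouse delta (in pixels) and return the clamped angles."""
    yaw = clamp(yaw + dx * SENSITIVITY, -20.0, 20.0)
    pitch = clamp(pitch - dy * SENSITIVITY, -10.0, 10.0)
    return yaw, pitch

# A large drag saturates both axes at their limits.
print(update_camera(0.0, 0.0, 500.0, -300.0))
```

Keeping the camera nearly fixed like this is what lets shelf scrolling, rather than free 3D locomotion, carry the navigation.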
<graphics>

Figure 4: (A) The scroll-view of class hierarchies. Hierarchies can be collapsed by clicking any of the intermediate elements. (B) The collapsed hierarchies after the click at A. The hierarchies can be scrolled up and down, or expanded by clicking their tag panels if they contain child categories.

Books on the virtual shelves are arranged in order according to their LCC classifications, just as they would be in a real library. Panels showing LCC hierarchies above the shelves indicate where the user is within the library. The panels in the center indicate the user's current location, while the left and right panels respectively indicate the previous and next classifications, and thus the content of the shelves that await the user in those directions. A click on the left or right panels scrolls the shelves to bring those classes in front of the user. A click on the center panels opens the LCC classification hierarchy, as seen in Figure 4. This list can be navigated by scrolling up and down as well as by clicks to expand and collapse it. Clicking on the end portion of these hierarchies brings that class and its associated books in front of the user.
The user can start typing at any time to bring up the search bar, which can be used to search over titles, authors, subjects and LCC classifications. If any book or classification is selected through the search results, the user is transported to the shelf containing the selected book. The books are also sized according to their real dimensions, page counts and volumes, as seen in Figure 1.

§ 3.2 BUILDING A FAMILIAR AND EXPRESSIVE SEARCH BAR

Single-input search bars are ubiquitous, and users are intimately familiar with them. We utilized the search bar as an entry point into our library by designing it as a catch-all input system that searches over book titles, authors, subject headings of books, and LCC categories. Our search provides a way for users to enter the hierarchically organized stacks by first helping them identify a relevant pool of items they might be interested in.
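The catch-all behaviour can be pictured as matching every query term against a concatenation of a book's metadata fields. The in-memory sketch below is only meant to show the idea; the deployed system delegates this matching to an indexed search service, and the sample records are invented.

```python
# Illustrative sketch of a catch-all search over several metadata fields.
# The book records here are made up for the example.
BOOKS = [
    {"title": "A Tale of Two Cities", "author": "Charles Dickens",
     "subjects": ["France", "Historical fiction"], "lcc": "PR4571"},
    {"title": "Two Cities in Late Antiquity", "author": "J. Doe",
     "subjects": ["Rome", "Constantinople"], "lcc": "DG311"},
]

def search(query: str, books=BOOKS):
    """Return titles of books whose title, author or subjects contain
    every whitespace-separated query term (case-insensitive)."""
    terms = query.lower().split()
    hits = []
    for book in books:
        haystack = " ".join(
            [book["title"], book["author"], *book["subjects"]]
        ).lower()
        if all(term in haystack for term in terms):
            hits.append(book["title"])
    return hits

print(search("two cities"))   # matches both titles
print(search("dickens"))      # matches via the author field
```

Because one query string is matched against all fields at once, users never have to decide up front whether they are searching by title, author, subject or class.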
Figure 2 shows shelf (class) and book search hits for the query 'A tale of two cities'. Shelf results show the LCC hierarchy from top to bottom, together with the class numbers and the number of books listed under each class. Book results list titles, authors and publishers, together with a table showing the subject headings listed for each book. To the right of each book result, users can also see under which shelf (class) that book can be found; hovering over that section expands the shelf hierarchy, as seen on the right side of Figure 2. The 'under shelf' portions are always colored to clearly designate shelf hierarchies. The search results are not paginated, and can be "infinitely" scrolled downwards until the end of results.
§ 3.3 ENGINEERING A SMOOTH BROWSING EXPERIENCE

Clicking on a search result, for both books and shelves, zooms the user into a set of floating shelves (Figure 1). The searched-for book appears on the centre shelf and is briefly highlighted as an indicator of its position. The books on shelves conform to LCC sorting and appear as they would in a regular library. One other key feature we have added is browsing over the LCC classification hierarchy, as seen in Figure 4. This scroll-view of classification hierarchies is opened by clicking on the centered LCC panels. A listed hierarchy can be expanded or collapsed by clicking on different parts of it. Clicking on any panel within a hierarchy collapses that hierarchy down to that panel, reducing the number of hierarchies in the scroll-view. Clicking on the name-tag panel of a hierarchy expands it. Clicking on the last panel of a hierarchy transports the user to the beginning of that hierarchical class, with the first book of that class highlighted on the shelves.
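The collapse/expand interaction implies a tree whose rendering only descends into expanded nodes. A minimal sketch of that data structure (our own; the paper does not publish its code, though the class labels are real LCC names):

```python
# A small sketch of a collapsible classification scroll-view: each node
# knows its children, and rendering walks only through expanded nodes.
class ClassNode:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.expanded = False

    def toggle(self):
        self.expanded = not self.expanded

    def visible(self, depth=0):
        """Yield (depth, label) pairs for the currently visible hierarchy."""
        yield depth, self.label
        if self.expanded:
            for child in self.children:
                yield from child.visible(depth + 1)

art = ClassNode("N - Art", [ClassNode("NA - Architecture"),
                            ClassNode("NB - Sculpture")])
print(list(art.visible()))   # collapsed: only the root is listed
art.toggle()
print(list(art.visible()))   # expanded: children appear one level deeper
```

A click on an intermediate panel would simply call `toggle` on that node, shrinking or growing the visible list exactly as Figure 4 describes.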
Another addition to the browsing experience comes through the improved connectivity of search over books' subject headings. Book clicks bring up synoptic windows, as seen in Figure 3A. The colored subject tables are filled with Library of Congress Subject Headings (LCSH), which we have populated for every book from a dataset maintained by domain experts at the Library of Congress [1]. These LCSH headings are clickable and trigger a search over the entire dataset's indexed LCSH fields. Figure 3B shows search results for an LCSH field search; notice that these books can come from entirely different shelves and classes, often very distant from each other, thus providing another way to jump between classes and books. This allows users to transcend the distance imposed by LCC in physical libraries through a subject-based search method, which allows for dynamic links between books. We believe this feature goes a long way toward satisfying the desire expressed by users to occasionally see distant books [5].
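Conceptually, clicking a subject tag is a lookup in an inverted index built over every book's LCSH fields. The sketch below shows that idea with invented records; the real system performs this lookup against an indexed search field rather than an in-memory dictionary.

```python
# Sketch of the subject-heading jump: an inverted index from LCSH
# heading to book IDs. The records below are made up for the example.
from collections import defaultdict

books = {
    "B1": {"title": "Gothic Cathedrals", "lcsh": ["Church architecture"]},
    "B2": {"title": "Parish Churches", "lcsh": ["Church architecture", "England"]},
    "B3": {"title": "Modern Housing", "lcsh": ["Architecture, Domestic"]},
}

index = defaultdict(list)
for book_id, meta in books.items():
    for heading in meta["lcsh"]:
        index[heading.lower()].append(book_id)

def click_subject(heading: str):
    """Simulate clicking an LCSH tag: return every book carrying it."""
    return index.get(heading.lower(), [])

# Books sharing a heading may sit on entirely different shelves.
print(click_subject("Church architecture"))
```

Because the index keys on headings rather than call numbers, the hits can span LCC classes that are physically far apart, which is exactly the cross-shelf jump described above.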
§ 3.4 PROVIDING ACCESS TO DIGITIZED AND PHYSICAL BOOKS

Another killer feature of real-world libraries is in-depth browsing of individual books: a library-goer can pick up any book and read until their curiosity is sated. To provide a similar experience, when a user clicks on a shelved book, we provide a single-page overview. Figure 3A shows an example of this page, which displays the book's title, authorship, publishing house, colored subject headings, physical information regarding extent and dimensions, and links to look up the same book on Amazon and Goodreads, as well as a Peek Inside button that is enabled when a free, digitized version is available on OpenLibrary (which houses over 3 million books, or around 30% of the total collection). A single click on this button opens a new tab with an online reader showing the contents of the book, as seen in Figure 5, providing an in-depth reading experience. The Amazon and Goodreads buttons provide easy access to purchase and social reading options, respectively.
<graphics>

Figure 5: The "Peek Inside" button on book panels opens an e-reader that uses OpenLibrary's digitized book archive.

<graphics>

Figure 6: The cloud architecture of Library of Apollo.

§ 4 ONLINE DEPLOYMENT AND SURVEY RESULTS
§ 4.1 THE FRONT-END AND THE CLOUD ARCHITECTURE

We developed the front-end of our application using Unity WebGL. The compiled WebAssembly, JavaScript and HTML files are hosted on the AWS S3 service through AWS's CDN service, CloudFront. The books are sorted according to the LCC sorting scheme [1, 6] and clustered into JSON files that contain 500 books each, gzipped and stored in S3. These data files are fetched while the user is browsing the shelves, and pre-fetched during scrolling when the user nears the end of a cluster.
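The cluster layout and prefetch rule can be sketched in a few lines. The file-naming scheme and the 50-book prefetch margin below are our assumptions (the paper specifies only the 500-book cluster size); the code is an illustration of the data layout, not the deployed pipeline.

```python
# Sketch of the data layout: the LCC-sorted book list is split into
# fixed-size clusters, each serialized as gzipped JSON. File names and
# the prefetch margin are assumptions for illustration.
import gzip
import json

CLUSTER_SIZE = 500

def write_clusters(sorted_books, out_dir="."):
    """Serialize the sorted book list into gzipped 500-book JSON files."""
    names = []
    for i in range(0, len(sorted_books), CLUSTER_SIZE):
        cluster = sorted_books[i:i + CLUSTER_SIZE]
        name = f"{out_dir}/cluster_{i // CLUSTER_SIZE:05d}.json.gz"
        with gzip.open(name, "wt", encoding="utf-8") as fh:
            json.dump(cluster, fh)
        names.append(name)
    return names

def cluster_for_index(book_index: int) -> int:
    """Which cluster file holds the book at this global shelf position."""
    return book_index // CLUSTER_SIZE

def should_prefetch(book_index: int, margin: int = 50) -> bool:
    """Prefetch the next cluster once the user scrolls near a cluster's end."""
    return book_index % CLUSTER_SIZE >= CLUSTER_SIZE - margin

print(cluster_for_index(1234), should_prefetch(1480))
```

Fixed-size, immutable cluster files sit well behind a CDN: each shelf position maps deterministically to one cacheable object, and the prefetch test keeps scrolling smooth without fetching the whole collection.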
The same data stored in S3 is indexed on AWS CloudSearch to power the search functionality of our app. All search requests are performed through the REST API we developed on AWS API Gateway, and search results are populated from the data returned by CloudSearch. For analytics, user click-data is also recorded in DynamoDB through the same gateway. Search requests to OpenLibrary's digitized-books dataset are made in a similar fashion. Our architecture is summarized in Figure 6.
§ 4.2 THE DEPLOYED DATASET

Courtesy of the Library of Congress [19], through their MARC Open-Access program, we were able to put together 9.4 million books, complete with their LCSH and LCC data, distributed over 200,000 LCC classes. We pre-processed this data to serve our specific needs, and stored and indexed it for use within our virtual library application as described above. Despite the size of the dataset, the website is very smooth to use and inexpensive to operate: monthly operational costs are less than 40 USD.
§ 4.3 DEPLOYMENT AND RECRUITMENT

We deployed our library at a publicly accessible address, loapollo.com, and seeded links to the project via Reddit and word of mouth. Over a span of two weeks, more than two hundred unique users visited the site. Each visitor to the library was assigned a randomly generated persistent user ID, to track repeated session visits, as well as a session ID for any given visit. The Library front-end automatically logged click interactions with the system in DynamoDB for later introspection.
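The two-level identifier scheme (persistent user ID plus per-visit session ID) can be sketched as follows. The field names are illustrative, not the actual logging schema; in the real deployment the user ID would be persisted in the browser and events shipped to DynamoDB rather than kept in a list.

```python
# Sketch of the analytics identifiers: a random user ID reused across
# visits plus a fresh session ID per visit; each click event carries
# both. Field names are our own, not the deployed schema.
import time
import uuid

class AnalyticsClient:
    def __init__(self, stored_user_id=None):
        # reuse the persisted ID if this visitor has been here before
        self.user_id = stored_user_id or str(uuid.uuid4())
        self.session_id = str(uuid.uuid4())  # new for every visit
        self.events = []

    def log_click(self, target: str):
        self.events.append({
            "user": self.user_id,
            "session": self.session_id,
            "target": target,
            "ts": time.time(),
        })

first_visit = AnalyticsClient()
# a return visit restores the stored user ID but gets a new session ID
second_visit = AnalyticsClient(stored_user_id=first_visit.user_id)
second_visit.log_click("book:PR4571")
assert first_visit.user_id == second_visit.user_id
assert first_visit.session_id != second_visit.session_id
```

Separating the two IDs is what makes the repeat-visit counts reported below computable from the click log alone.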
Of the 224 unique visitors, 21 (9.4%) visited more than once, with one user visiting the site a total of five times over the course of two weeks. Users interacted for an average of 162 seconds and produced an average of 8 click interactions; notably, however, both distributions are particularly long-tailed. While some users only visited for very brief moments (often just a single search, presumably followed by some scrolling), 12 users used it for over ten minutes each, and likewise 20 users generated over 20 click events each.

<graphics>

Figure 7: The change in reading habits due to the COVID-19 pandemic.
§ 4.4 SURVEY

When users clicked on any book, a survey link was displayed in the upper-right corner, ensuring that users had interacted with the system at least minimally prior to taking the survey. Users who accessed the link were invited to fill out a short survey with two major parts: a first section which asked about their existing (pre- and post-pandemic) book-browsing habits, and a second section which asked about their experiences with the library. Finally, users could provide open-ended feedback about their experience with the Library. The full survey design can be found in the Supplemental Materials.
We received a total of 46 survey responses. Two responses were discarded: one filled out '1's for every single question, and another was a clear duplicate submission. Participants reported reading an average of 13.4 books per year (SD 15.2; participants reported anywhere from 1 to 80 books per year).
In the first part of the survey (Figure 7), the participants did not report significantly changing their reading habits (mean 2.9), but evidently had a significant shift in book-browsing habits due to the pandemic. Library usage decreased significantly (pre-pandemic mean 2.36, post-pandemic mean 1.54), as did bookshop usage (pre-pandemic mean 2.98, post-pandemic mean 1.45), whereas online browsing significantly increased (pre-pandemic mean 3.14, post-pandemic mean 3.86).

In the second part of the survey (Figure 8), the users generally expressed a strong preference for the Library, with over 80% (36/44) of users being "somewhat likely" or more to come back, and over 90% (41/44) being "somewhat likely" or more to recommend it to a friend. Users generally felt that it did replicate the feel of real libraries and bookshops to some extent, with over half (25/44) noting that it felt similar or very similar. Similarly, around half of participants found it "easier" or "much easier" to find books compared to bookshops and libraries (23/44) and online destinations (21/44), and felt that it contained "more" or "significantly more" new and interesting books compared to bookshops and libraries (23/44) and online destinations (24/44). Overall, survey respondents were very positive about the library and its major features.
§ 5 DISCUSSION

Through our online deployment and survey, we were able to gather valuable feedback from readers and book browsers. The open-ended feedback field yielded several useful insights, discussed below.
<graphics>

Figure 8: The user perception of the library's navigability, ease of use, browsing quality and overall quality.
User Interface: Three users asked for a mobile version in the feedback, with one noting "Mobile version must be created and [published] on App stores immediately", indicating the importance of smartphone-friendly interfaces. Indeed, although our interface does work on some mobile browsers, it does not work on many older browsers due to a lack of WebGL support. We expect this limitation to recede as newer devices generally do support WebGL. Two users commented that they would have liked to change the colour scheme: one remarked that it was too dark, while another wanted a "night mode" to make it even darker.
Search Performance: Our project primarily focused on book-browsing rather than search, and consequently our search feature is much simpler than, e.g., Google's or Amazon's search functionality. Four users noted that search could be improved: providing better search capabilities (such as advanced search by author/title/subject fields), improving the presentation of results (e.g. moving favorite or recommended books to the top), improving the discoverability of category or tag search, and ordering works by popularity or relevance. Recommendations, in particular, are interesting as they must function differently in a public library setting (with limited or no access to prior user data) compared with companies like Amazon, which have vast access to prior user preferences via tracking and search history.
Browsing Capabilities: Users generally praised the browsing and search features of the Library. One user noted that they were able to find "5 books on music theory in 5 minutes", while another noted that "It has everything I am looking for". Users also had suggestions for improving the experience further: one commented that they would have liked even more physicality in the form of different rooms/areas to browse around in, while another suggested a downloadable version for organizing their own books.
§ 6 CONCLUSION
We have presented the design of the Library of Apollo, a virtual-reality library implemented in a web-browser application and designed to support library-like browsing and discovery. Our system was deployed publicly and attracted over two hundred visitors and 44 survey responses; our survey results suggested high user affinity for the book-browsing experience and library capabilities. Through the combination of features that we designed, we have built a virtual library which lends itself to serendipitous browsing and discoveries across a large, connected book dataset.
§ ACKNOWLEDGMENTS
Anonymous for review.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/FX0nrz8XD3I/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,369 @@
# Exploring Smartphone-enabled Text Selection in AR-HMD
Category: Research

Figure 1: (a) The overall experimental setup consisted of a HoloLens, a smartphone, and an OptiTrack system. (b) In the HoloLens view, a user sees two text windows. The right one is the 'instruction panel', where the subject sees the text to select. The left one is the 'action panel', where the subject performs the actual selection. The cursor is shown inside a green dotted box (for illustration purposes only) on the action panel. For each text selection task, the cursor position always starts from the center of the window.
## Abstract
Text editing is important and at the core of most complex tasks, like writing an email or browsing the web. Efficient and sophisticated techniques exist on desktops and touch devices, but are still under-explored for Augmented Reality Head-Mounted Displays (AR-HMD). Text selection, a necessary step before text editing, is commonly performed in AR displays using techniques such as hand-tracking, voice commands, and eye/head-gaze, which are cumbersome and lack precision. In this paper, we explore the use of a smartphone as an input device to support text selection in AR-HMD because of its availability, familiarity, and social acceptability. We propose four eyes-free text selection techniques, all using a smartphone: continuous touch, discrete touch, spatial movement, and raycasting. We compare them in a user study where users have to select text at various granularity levels. Our results suggest that continuous touch, in which the smartphone is used as a trackpad, outperforms the other three techniques in terms of task completion time, accuracy, and user preference.
Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
## 1 INTRODUCTION
Text input and text editing represent a significant portion of our everyday digital tasks. We need them when we browse the web, write emails, or just when we type a password. Because of this ubiquity, they have been the focus of research on most of the platforms we use daily, like desktops, tablets, and mobile phones. The recent focus of the industry on Augmented Reality Head-Mounted Displays (AR-HMD), with the development of devices like the Microsoft HoloLens${}^{1}$ and Magic Leap${}^{2}$, has made them more and more accessible, and their usage is envisioned in our future everyday life. The lack of a physical keyboard and mouse (i.e., the absence of interactive surfaces) with such devices makes text input difficult and an important challenge in AR research. While text input for AR-HMD has already been well studied [17, 37, 45, 56], very little research has focused on editing text that has already been typed by a user. Text editing is a complex task whose first step is selecting the text to edit; this paper focuses only on this text selection part. Such tasks have already been studied on desktops [10] with various modalities (like gaze+gesture [14], or gaze with keyboard [50]) as well as on touch interfaces [21]. On the other hand, no formal experiments have been conducted in AR-HMD contexts.
In general, text selection in AR-HMD can be performed using various input modalities, including notably hand-tracking, eye/head-gaze, voice commands [20], and handheld controllers [33]. However, these techniques have their limitations. For instance, hand-tracking struggles to achieve character-level precision [39], lacks haptic feedback [13], and provokes arm fatigue [30] during prolonged interaction. Eye-gaze and head-gaze suffer from the 'Midas Touch' problem, which causes unintended activation of commands in the absence of a proper selection mechanism [28, 31, 54, 58]. Moreover, frequent head movements in head-gaze interaction increase motion sickness [57]. Voice interaction might not be socially acceptable in public places [25], and it may disturb the communication flow when several users are collaborating. In the case of a dedicated handheld controller, users need to always carry extra specific hardware with them.
Recently, researchers have been exploring the use of a smartphone as an input device for AR-HMD because of its availability (it can even be the processing unit of the HMD [44]), familiarity, social acceptability, and tangibility [8, 22, 60]. Undoubtedly, there is a huge potential for designing novel cross-device applications with a combination of an AR display and a smartphone. In the past, smartphones have been used for interacting with different applications running on AR-HMDs, such as manipulating 3D objects [40], managing windows [46], selecting graphical menus [35], and so on. However, we are unaware of any research that has investigated text selection in an AR display using a commercially available smartphone. In this work, we explored different approaches to selecting text when using a smartphone as an input controller. We propose four eyes-free text selection techniques for AR displays. These techniques, described in Section 3.1, differ with regard to the mapping of smartphone-based inputs, touch or spatial. We then conducted a user study to compare these four techniques in terms of text selection task performance.
The main contributions of this paper are: (1) the design and development of a set of smartphone-enabled text selection techniques for AR-HMD; and (2) insights from a 20-person comparative study of these techniques in text selection tasks.
---
${}^{1}$ https://www.microsoft.com/en-us/hololens
${}^{2}$ https://www.magicleap.com/en-us
---
## 2 RELATED WORK
In this section, we review previous work on text selection and editing in AR and on smartphones. We also review research that combines handheld devices with HMDs and large wall displays.
### 2.1 Text Selection and Editing in AR
Little research has focused on text editing in AR. Ghosh et al. presented EYEditor to facilitate on-the-go text editing on a smart-glass with a combination of voice and a handheld controller [20]. They used voice to modify the text content, while manual input is used for text navigation and selection. The use of a handheld device is inspiring for our work; however, voice interaction might not be suitable in public places. Lee et al. [36] proposed two force-assisted text acquisition techniques where the user exerts a force on a thumb-sized circular button located on an iPhone 7 and selects text shown on a laptop emulating the Microsoft HoloLens display. They envision that this miniature force-sensitive area (12 mm × 13 mm) could be fitted into a smart-ring. Although their result is promising, a specific force-sensitive device is required.
In this paper, we follow the direction of the two papers presented above and continue to explore the use of a smartphone in combination with an AR-HMD. While its use for text selection is still rare, the smartphone has been investigated more broadly for other tasks.
### 2.2 Combining Handheld Devices and HMDs/Large Wall Displays
By combining handheld devices and HMDs, researchers try to make the most of the benefits of both [60]. On one hand, the handheld device brings a 2D high-resolution display that provides a multi-touch, tangible, and familiar interactive surface. On the other hand, HMDs provide a spatialized, 3D, and almost infinite workspace. With MultiFi [22], Grubert et al. showed that such a combination is more efficient than a single device for pointing and searching tasks. For a similar setup, Zhu and Grossman proposed a set of techniques and demonstrated how it can be used to manipulate 3D objects [60]. Similarly, Ren et al. [46] demonstrated how it can be used to perform window management. Finally, in VESAD [43], Normand et al. used AR to directly extend the smartphone display.
Regarding the type of input provided by the handheld device, it is possible to focus only on touch interactions, as proposed in Input Forager [1] and Dual-MR [34]. Waldow et al. compared the use of touch for 3D object manipulation to gaze and mid-air gestures and showed that touch was more efficient [53]. It is also possible to track the handheld device in space and allow for 3D spatial interactions. This has been done in DualCAD, in which Millette and McGuffin used a smartphone tracked in space to create and manipulate shapes using both spatial interactions and touch gestures [40]. With ARPointer [48], Ro et al. proposed a similar system and showed it led to better performance for object manipulation than a mouse and keyboard or a combination of gaze and mid-air gestures. When comparing the use of touch and spatial interaction with a smartphone, Budhiraja et al. showed that touch was preferred by participants for a pointing task [7], but Büschel et al. showed that spatial interaction was more efficient and preferred for a 3D navigation task [8]. In both cases, Chen et al. showed that the interaction should be viewport-based and not world-based [16].
Overall, previous research showed that a handheld device provides a good alternative input for augmented reality displays in various tasks. In this paper, we focus on a text selection task, which has not been studied yet. It is not yet clear whether only tactile interactions should be used on the handheld device or whether it should also be tracked to provide spatial interactions. Thus, we propose both alternatives in our techniques and compare them.
The use of handheld devices as input has also been investigated in combination with large wall displays. This use case is close to the one presented in this paper, as text is displayed inside a 2D virtual window. Campbell et al. studied the use of a Wiimote as a distant pointing device [9]. With a pointing task, the authors compared its use with an absolute mapping (i.e., raycasting) to a relative mapping, and showed that participants were faster with the absolute mapping. Vogel and Balakrishnan found similar results between the two mappings (with the difference that they directly tracked the hand), but only with large targets and when clutching was necessary [52]. They also found that participants had lower accuracy with an absolute mapping. This lower accuracy for an absolute mapping with spatial interaction also appears when compared with distant touch interaction using the handheld device as a trackpad, with the same task [4]. Jain et al. also compared touch interaction with spatial interaction, but with a relative mapping, and found that the spatial interaction was faster but less accurate [29]. The accuracy result is confirmed by a recent study from Siddhpuria et al., in which the authors also compared the use of absolute and relative mapping with touch interaction, and found that the relative mapping is faster [49]. These studies were all done for a pointing task, and overall showed that using the handheld device as a trackpad (i.e., with a relative mapping) is more efficient (to avoid clutching, one can change the transfer function [42]). In their paper, Siddhpuria et al. highlighted the fact that more studies needed to be done with a more complex task to validate their results. To our knowledge, this has been done only by Baldauf et al. with a drawing task, and they showed that spatial interaction with an absolute mapping was faster than using the handheld device as a trackpad, without any impact on accuracy [4]. In this paper, we take a step in this direction and use a text selection task. Considering the result from Baldauf et al., we cannot assume that touch interaction will perform better.
### 2.3 Text Selection on Handheld Devices
Text selection has not yet been investigated with the combination of a handheld device and an AR-HMD, but it has been studied on handheld devices independently. On a touchscreen, adjustment handles are the primary form of text selection technique. However, due to the fat-finger problem [5], it can be difficult to modify the selection by one character. A first solution is to allow users to only select the start and the end of the selection, as is done in TextPin, which was shown to be more efficient than the default technique [26]. Fuccella et al. [19] and Zhang et al. [59] proposed to use the keyboard area to let the user control the selection using gestures, and showed it was also more efficient than the default technique. Ando et al. adapted the principle of shortcuts and associated different actions with the keys of the virtual keyboard, activated with a modifier action: in a first paper, the modifier was tilting the device [2], and in a second, it was a sliding gesture starting on the key [3]. The latter was more efficient than the former and than the default technique. With BezelCopy [15], a gesture on the bezel of the phone allows for a first rough selection that can be refined afterwards. Finally, other solutions used non-traditional smartphones. Le et al. used a fully touch-sensitive device to allow users to perform gestures on the back of the device [32]. Gaze N'Touch [47] used gaze to define the start and end of the selection. Goguey et al. explored the use of a force-sensitive screen to control the selection [21], and Eady and Girouard used a deformable screen to explore the use of screen bending [18].
In this work, we choose to focus on commercially available smartphones, and we will not explore the use of deformable or fully touch-sensitive devices in this paper. Compared to the use of shortcuts, the use of gestures seems to lead to good performance and can be performed without looking at the screen (i.e., eyes-free), which avoids transitions between the AR virtual display and the handheld device.

Figure 2: Illustrations of our proposed interaction techniques: (a) continuous touch; (b) discrete touch; (c) spatial movement; (d) raycasting.
## 3 DESIGNING SMARTPHONE-BASED TEXT SELECTION IN AR-HMD
### 3.1 Proposed Techniques
Previous work used a smartphone as an input device to interact with virtual content in AR-HMD mainly in two ways: using touch input from the smartphone, or tracking the smartphone spatially like an AR/VR controller. Similar work on wall displays suggested that using the smartphone as a trackpad would be the most efficient technique, but this was tested with a pointing task (see Related Work). With a drawing task (which could be closer to a text selection task than a pointing task), spatial interaction was actually better [4].
Inspired by this, we propose four eyes-free text selection techniques for AR-HMD: two are completely based on mobile touchscreen interaction, whereas the smartphone needs to be tracked in mid-air for the latter two, which use spatial interactions. For spatial interaction, we chose a technique with an absolute mapping (Raycasting) and one with a relative mapping (Spatial Movement). The comparison between both is not straightforward in our case: previous results suggest that a relative mapping would have better accuracy, but an absolute one would be faster. For touch interaction, we chose not to use an absolute mapping, as its use with a large virtual window could lead to poor accuracy [42], and only kept techniques with a relative mapping. In addition to the traditional use of the smartphone as a trackpad (Continuous Touch), we propose a technique that allows for a discrete selection of text (Discrete Touch). Such a discrete selection mechanism has shown good results in a similar context for shape selection [29]. Overall, while we took inspiration from previous work for these techniques, they have never been assessed together for a text selection task.
To select text successfully using any of our proposed techniques, a user needs to follow the same sequence of steps each time. First, she moves the cursor, located on the text window in the AR display, to the beginning of the text to be selected (i.e., the first character). Then, she performs a double tap on the phone to confirm the selection of that first character; she can see on the headset screen that the first character is highlighted in yellow. At the same time, she enters the text selection mode. Next, she continues moving the cursor to the end position of the text using one of the techniques presented below. While the cursor is moving, the text is highlighted simultaneously up to the current position of the cursor. Finally, she ends the text selection with a second double tap.
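The selection sequence above can be sketched as a small state machine. This is an illustrative Python sketch with hypothetical names; the actual prototype is a Unity3D application:

```python
class TextSelector:
    """Double tap anchors the selection; a second double tap confirms it."""

    def __init__(self, text):
        self.text = text
        self.anchor = None      # index of the first selected character
        self.cursor = 0         # current cursor index in the text
        self.selection = None   # (start, end) once confirmed

    def move_cursor(self, index):
        self.cursor = max(0, min(index, len(self.text) - 1))

    def double_tap(self):
        if self.anchor is None:            # first double tap: enter selection mode
            self.anchor = self.cursor
        else:                              # second double tap: confirm selection
            lo, hi = sorted((self.anchor, self.cursor))
            self.selection = (lo, hi + 1)  # end-exclusive slice bounds
            self.anchor = None
        return self.selection

    def highlighted(self):
        """Text currently highlighted, from the anchor up to the cursor."""
        if self.anchor is None:
            return ""
        lo, hi = sorted((self.anchor, self.cursor))
        return self.text[lo:hi + 1]
```

The anchor/cursor pair is sorted before slicing so the selection also works when the user moves the cursor backwards from the anchor.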
#### 3.1.1 Continuous Touch
In continuous touch, the smartphone touchscreen acts as a trackpad (see Figure 2(a)). It is an indirect pointing technique where the user moves her thumb on the touchscreen to change the cursor position on the AR display. For the mapping between display and touchscreen, we used a relative mode with clutching. As clutching may degrade performance [12], a control-display (CD) gain was applied to minimize it (see Section 3.2).
#### 3.1.2 Discrete Touch
This technique is inspired by text selection with keyboard shortcuts, available in both Mac [27] and Windows [24] OS. In this work, we tried to emulate a few keyboard shortcuts, particularly those related to character-, word-, and line-level text selection. For example, in macOS, holding down Shift and pressing → or ← extends the text selection one character to the right or left, whereas holding down Shift + Option and pressing → or ← allows users to select text one word to the right or left. To extend the selection to the nearest character at the same horizontal location on the line above or below, a user holds down Shift and presses ↑ or ↓ respectively. In discrete touch interaction, we replicated all these shortcuts using directional swipe gestures (see Figure 2(b)). Left or right swipes can select text at both levels, word as well as character. By default, it works at the word level; the user performs a long-tap, which acts as a toggle, to switch between word- and character-level selection. Up or down swipes select text one line above or below the current position. The user can only select one character/word/line at a time with its respective swipe gesture.
Note that, to select text using discrete touch, a user first positions the cursor on top of the starting word (not the starting character) of the text to be selected by touch-dragging on the smartphone, as described for the continuous touch technique. From a pilot study, we observed that moving the cursor to the starting word using discrete touch every time makes the overall interaction slow. Then, she selects that first word with a double tap and uses discrete touch to select text up to the end position as described before.
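The gesture-to-command mapping of discrete touch can be summarized as follows. This is a minimal sketch with names of our own choosing, not the authors' implementation:

```python
def gesture_to_command(gesture, word_mode):
    """Map a discrete-touch gesture to a selection command.

    Returns (command, new_word_mode); a long-tap toggles the granularity
    and issues no command, mirroring the description above.
    """
    if gesture == "long_tap":            # toggle word/character granularity
        return None, not word_mode
    unit = "word" if word_mode else "char"
    commands = {
        "swipe_left":  f"extend_{unit}_left",
        "swipe_right": f"extend_{unit}_right",
        "swipe_up":    "extend_line_up",     # line-level, like Shift+Up
        "swipe_down":  "extend_line_down",   # line-level, like Shift+Down
    }
    return commands[gesture], word_mode
```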
#### 3.1.3 Spatial Movement
This technique emulates the smartphone as an air-mouse [38, 51] for AR-HMD. To control the cursor position on the headset screen, the user holds the phone in front of her torso, places her thumb on the touchscreen, and then moves the phone in the air with small forearm motions in a plane perpendicular to the gaze direction (see Figure 2(c)). While the phone moves, its tracked XY positional data are translated into cursor movement in XY coordinates inside the 2D window. When the user wants to stop the cursor movement, she simply lifts her thumb from the touchscreen: thumb touch-down and touch-release events define the start and stop of the cursor movement on the AR display. The user determines the speed of the cursor by simply moving the phone faster or slower. We applied a CD gain between the phone movement and the cursor displacement on the text window (see Section 3.2).
<table><tr><td>Techniques</td><td>$CD_{Max}$</td><td>$CD_{Min}$</td><td>$\lambda$</td><td>$V_{inf}$</td></tr><tr><td>Continuous Touch</td><td>28.34</td><td>0.0143</td><td>36.71</td><td>0.039</td></tr><tr><td>Spatial Movement</td><td>23.71</td><td>0.0221</td><td>32.83</td><td>0.051</td></tr></table>
Table 1: Logistic function parameter values for continuous touch and spatial movement interaction
#### 3.1.4 Raycasting
Raycasting is a popular interaction technique in AR/VR environments for selecting 3D virtual objects [6, 41]. In this work, we developed a smartphone-based raycasting technique for selecting text displayed on a 2D window in AR-HMD (see Figure 2(d)). A 6-DoF tracked smartphone was used to define the origin and orientation of the ray. In the headset display, the user sees the ray as a straight line appearing from the top of the phone; by default, the ray is always visible in the AR-HMD as long as the phone is tracked properly. In raycasting, the user makes small angular wrist movements to point at the text content with the ray, and the cursor appears where the ray hits the text window. Compared to the other proposed methods, raycasting does not require clutching, as it allows direct pointing at the target. The user confirms the target selection on the AR display by providing touch input (i.e., a double tap) from the phone.
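Placing the cursor where the ray hits the window amounts to a standard ray-plane intersection. A minimal sketch, assuming the text window is modeled as a plane given by a point and a normal (plain tuples are used for clarity; a real implementation would use the engine's vector types):

```python
def ray_plane_hit(origin, direction, plane_point, plane_normal, max_len=8.0):
    """Return the hit point of the ray on the plane, or None.

    max_len mirrors the 8 m default ray length mentioned below in Section 3.2.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:                 # ray parallel to the window plane
        return None
    t = dot([p - o for p, o in zip(plane_point, origin)], plane_normal) / denom
    if t < 0 or t > max_len:              # behind the phone or out of reach
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The hit point would then be converted into the window's 2D coordinates to position the cursor.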
### 3.2 Implementation
To prototype our proposed interaction techniques, we used a Microsoft HoloLens 2 (42° × 29° screen) as the AR-HMD device and a OnePlus 5 as the smartphone. For spatial movement and raycasting interactions, real-time pose information of the smartphone is needed. An OptiTrack${}^{3}$ system with three Flex-13 cameras was used for accurate tracking with low latency. To bring the HoloLens and the smartphone into a common coordinate system, we attached passive reflective markers to them and performed a calibration between HoloLens space and OptiTrack space.
In our software framework, the AR application running on HoloLens was implemented using Unity3D (2018.4) and the Mixed Reality Toolkit${}^{4}$. To render text in HoloLens, we used TextMeshPro. A Windows 10 workstation was used to stream tracking data to HoloLens. All pointing techniques with the phone were also developed using Unity3D. We used the UNet${}^{5}$ library for client-server communication between devices over the WiFi network.
For continuous touch and spatial movement interactions, we used a generalized logistic function [42] to define the control-display (CD) gain between the move events either on the touchscreen or in the air and the cursor displacement in the AR display:
$$
CD(v) = \frac{CD_{Max} - CD_{Min}}{1 + e^{-\lambda (v - V_{inf})}} + CD_{Min} \tag{1}
$$
$CD_{Max}$ and $CD_{Min}$ are the asymptotic maximum and minimum amplitudes of the CD gain, and $\lambda$ is a parameter proportional to the slope of the function at $v = V_{inf}$, with $V_{inf}$ the inflection point of the function. We derived initial values from the parameters given by Nancel et al. [42], and then empirically optimized them for each technique. The parameters were not changed during the study for individual participants. The values are summarized in Table 1.
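For concreteness, Equation 1 with the Table 1 values can be sketched as follows (a minimal Python illustration; the study's actual implementation runs in Unity3D):

```python
import math

# (CD_max, CD_min, lambda, V_inf), taken from Table 1.
PARAMS = {
    "continuous_touch": (28.34, 0.0143, 36.71, 0.039),
    "spatial_movement": (23.71, 0.0221, 32.83, 0.051),
}

def cd_gain(v, technique="continuous_touch"):
    """Equation 1: a sigmoid rising from CD_min toward CD_max with speed v."""
    cd_max, cd_min, lam, v_inf = PARAMS[technique]
    return (cd_max - cd_min) / (1 + math.exp(-lam * (v - v_inf))) + cd_min
```

At $v = V_{inf}$ the gain is exactly the midpoint $(CD_{Max} + CD_{Min})/2$, and it saturates toward $CD_{Max}$ at high input speeds, which keeps slow, precise motions low-gain while letting fast motions cover the large window without clutching.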
In discrete touch interaction, we implemented up, down, left, and right swipes by obtaining touch position data from the phone. We considered a 700 msec time window (found empirically) for detecting a long-tap event. Users get vibration feedback from the phone when they perform a long-tap successfully. They also receive vibration haptics while double tapping to start and end the text selection in all interaction techniques. Note that we did not provide haptic feedback for swipe gestures; with each swipe movement, users can see the text getting highlighted in yellow, which acts as the default visual feedback for touch swipes.
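The long-tap detection reduces to a simple duration threshold on the touch-down/touch-up pair (an illustrative sketch, with timestamps in seconds):

```python
LONG_TAP_WINDOW = 0.700  # seconds: the 700 msec window found empirically above

def classify_tap(touch_down_t, touch_up_t):
    """Classify a touch as a long-tap if it stays down for the full window."""
    return "long_tap" if (touch_up_t - touch_down_t) >= LONG_TAP_WINDOW else "tap"
```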

Figure 3: Text selection tasks used in the experiment: (1) word (2) sub-word (3) word to a character (4) four words (5) one sentence (6) paragraph to three sentences (7) one paragraph (8) two paragraphs (9) three paragraphs (10) whole text.
In the spatial movement technique, we noticed that the phone moves slightly during the double tap event, which results in a slight unintentional cursor movement. To reduce this, we suspended cursor movement for 300 msec (found empirically) whenever there is a touch event on the phone screen.
In raycasting, we applied the 1€ Filter [11] with β = 80 and min-cutoff = 0.6 (empirically tested) at the ray source to minimize the jitter and latency that usually occur due to both hand tremor and double tapping [55]. We set the ray length to 8 meters by default; the user sees the full length of the ray when it is not hitting the text panel.
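For reference, a compact sketch of the 1€ Filter following the published algorithm, with the parameter values used above; the actual prototype applies it inside Unity3D:

```python
import math

class OneEuroFilter:
    """Speed-adaptive low-pass filter: smooth when slow, responsive when fast."""

    def __init__(self, min_cutoff=0.6, beta=80.0, d_cutoff=1.0):
        self.min_cutoff, self.beta, self.d_cutoff = min_cutoff, beta, d_cutoff
        self.x_prev = self.dx_prev = self.t_prev = None

    @staticmethod
    def _alpha(cutoff, dt):
        # Smoothing factor of a first-order low-pass at the given cutoff (Hz).
        tau = 1.0 / (2 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def __call__(self, x, t):
        if self.x_prev is None:                  # first sample passes through
            self.x_prev, self.dx_prev, self.t_prev = x, 0.0, t
            return x
        dt = t - self.t_prev
        dx = (x - self.x_prev) / dt
        a_d = self._alpha(self.d_cutoff, dt)     # smooth the derivative estimate
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff, dt)              # speed-adaptive smoothing of x
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev, self.t_prev = x_hat, dx_hat, t
        return x_hat
```

With a high β like 80, the cutoff grows quickly with hand speed, so deliberate wrist movements pass through with little lag while tremor during a double tap is damped.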
## 4 EXPERIMENT
To assess the impact of the different characteristics of these four interaction techniques, we performed a comparative study with a text selection task while users were standing up. In particular, we are interested in evaluating the performance of these techniques in terms of task completion time, accuracy, and perceived workload.
### 4.1 Participants and Apparatus
In our experiment, we recruited 20 unpaid participants (P1-P20) (13 males, 7 females) from a local university campus. Their ages ranged from 23 to 46 years (mean = 27.84, SD = 6.16). Four were left-handed. All were daily users of smartphones and desktops. With respect to their experience with AR/VR technology, 7 participants ranked themselves as experts because they study and work in the field, 4 participants were beginners who had played some games in VR, while the others had no prior experience. They all had either normal or corrected-to-normal vision. We used the apparatus and prototype described in Section 3.2.
### 4.2 Task
In this study, we asked participants to perform a series of text selections using our proposed techniques. Participants were standing up for the entire duration of the experiment. We reproduced different realistic usages by varying the type of text selection to perform, such as selecting a word, a sentence, or a paragraph. Figure 3 shows all the types of text selection that participants were asked to perform. Concretely, the experiment scene in HoloLens consisted of two vertical windows of 102.4 cm × 57.6 cm positioned at a distance of 180 cm from the headset at the start of the application (i.e., a visual size of 31.75° × 18.18°). The windows were anchored in world coordinates. The two panels contain the same text: participants are asked to select, in the action panel (left panel in Figure 1(b)), the text that is highlighted in the instruction panel (right panel in Figure 1(b)). The user controls a cursor (a small circular red dot, as shown in Figure 1(b)) using one of the techniques on the smartphone; its position is always bounded by the window size. The text content was generated by Random Text Generator${}^{6}$ and was displayed using the Liberation Sans font at a font size of 25 pt (to allow comfortable viewing from a few meters).
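The reported visual size follows directly from the window geometry: a window of width $w$ seen from distance $d$ subtends $2\arctan(w/2d)$. A quick check in Python:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle subtended by an extent seen head-on from a distance."""
    return math.degrees(2 * math.atan((size_cm / 2) / distance_cm))

width_deg = visual_angle_deg(102.4, 180)   # horizontal extent of the window
height_deg = visual_angle_deg(57.6, 180)   # vertical extent of the window
```

This reproduces the stated 31.75° × 18.18° visual size of the 102.4 cm × 57.6 cm window at 180 cm.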
|
| 130 |
+
|
| 131 |
+
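As a sanity check on the reported geometry, the visual size follows from the standard visual-angle formula $2\arctan(s/2d)$ for an extent $s$ viewed head-on at distance $d$. The short sketch below (ours, not from the paper) reproduces the stated values:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (in degrees) subtended by an extent viewed head-on."""
    return math.degrees(2 * math.atan((size_cm / 2) / distance_cm))

# A 102.4 cm x 57.6 cm window seen from 180 cm:
width_deg = visual_angle_deg(102.4, 180)   # ~31.75 degrees
height_deg = visual_angle_deg(57.6, 180)   # ~18.18 degrees
```
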
---

${}^{3}$ https://optitrack.com/

${}^{4}$ https://github.com/microsoft/MixedRealityToolkit-Unity

${}^{5}$ https://docs.unity3d.com/Manual/UNet.html

---



Figure 4: (a) Mean task completion time for the four proposed interaction techniques. (b) Mean error rate of the interaction techniques. Lower scores are better. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: $p < .01$ and *: $p < .05$).

### 4.3 Study Design

We used a within-subject design with two factors: INTERACTION TECHNIQUE (Continuous touch, Discrete touch, Spatial movement, and Raycasting) and TEXT SELECTION TYPE (the 10 types shown in Figure 3), for a total of 4 × 10 × 20 participants = 800 trials. The order of INTERACTION TECHNIQUE was counterbalanced across participants using a Latin square. The order of TEXT SELECTION TYPE was randomized in each INTERACTION TECHNIQUE block (but was the same for each participant).

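The paper does not specify which Latin square construction was used; as an illustration only, a balanced Latin square for four conditions (every technique appears in every position, and follows every other technique equally often) can be generated as follows:

```python
def balanced_latin_square(n: int) -> list:
    """Rows are participant orderings of conditions 0..n-1 (n even)."""
    # First row: 0, 1, n-1, 2, n-2, ...; later rows shift it cyclically.
    first, lo, hi = [0], 1, n - 1
    while len(first) < n:
        first.append(lo)
        lo += 1
        if len(first) < n:
            first.append(hi)
            hi -= 1
    return [[(c + r) % n for c in first] for r in range(n)]

orders = balanced_latin_square(4)  # one row per ordering of the 4 techniques
```

With 20 participants and 4 orderings, each ordering would be assigned to five participants.
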
### 4.4 Procedure

We welcomed participants upon arrival. They were asked to read and sign the consent form and to fill out a pre-study questionnaire collecting demographic information and prior AR/VR experience. Next, we gave them a brief introduction to the experiment background, the hardware, the four interaction techniques, and the task involved in the study. After that, we helped participants put on the HoloLens comfortably and complete the calibration process for their personal interpupillary distance (IPD). For each INTERACTION TECHNIQUE block, participants completed a practice phase followed by a test session. During the practice, the experimenter explained how the current technique worked, and participants were encouraged to ask questions. They then had time to train with the technique until they were fully satisfied, which took around 7 minutes on average. Once they felt confident with the technique, the experimenter launched the application for the test session. They were instructed to do the task as quickly and accurately as possible while standing. To avoid noise due to participants using either one or two hands, we asked them to use their dominant hand only.

At the beginning of each trial in the test session, the text to select was highlighted in the instruction panel. Once they were satisfied with their selection, participants had to press a dedicated button on the phone screen to move to the next task. They were allowed to use their non-dominant hand only to press this button. At the end of each INTERACTION TECHNIQUE block, they answered a NASA-TLX questionnaire [23] on an iPad and moved to the next condition.

At the end of the experiment, participants completed a questionnaire in which they ranked the techniques by speed, accuracy, and overall preference, and we conducted an informal post-test interview.

The entire experiment took approximately 80 minutes. Participants were allowed to take breaks between sessions, during which they could sit, and were encouraged to comment at any time during the experiment. To respect the COVID-19 safety protocol, participants wore FFP2 masks and maintained a 1 m distance from the experimenter at all times.

### 4.5 Measures

We recorded the completion time as the time taken to select the text from its first to its last character, i.e., the time difference between the first and second double tap. If participants selected more or fewer characters than expected, the trial was counted as wrong. We then calculated the error rate as the percentage of wrong trials in each condition. Finally, as stated above, participants filled out a NASA-TLX questionnaire to measure the subjective workload of each INTERACTION TECHNIQUE, and their preference was measured with a ranking questionnaire at the end of the experiment.

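The two objective measures can be computed from trial logs along the following lines. This is a minimal sketch: the field names (`technique`, `start_ts`, `end_ts`, `correct`) are hypothetical, and mean times here are taken over all trials, which may differ from the authors' exact aggregation.

```python
def summarize(trials):
    """Per-technique mean completion time (s) and error rate (%)."""
    by_tech = {}
    for t in trials:
        by_tech.setdefault(t["technique"], []).append(t)
    summary = {}
    for tech, ts in by_tech.items():
        # Completion time: difference between the two double taps.
        mean_time = sum(t["end_ts"] - t["start_ts"] for t in ts) / len(ts)
        # Error rate: percentage of trials with a wrong selection.
        error_rate = 100.0 * sum(1 for t in ts if not t["correct"]) / len(ts)
        summary[tech] = {"mean_time": mean_time, "error_rate": error_rate}
    return summary
```
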
### 4.6 Hypotheses

In our experiment, we hypothesized that:

H1. Continuous touch, Spatial movement, and Raycasting will be faster than Discrete touch, because with Discrete touch a user needs to spend more time on multiple swipes and frequent mode switching to select text at the character/word/sentence level.

H2. Discrete touch will be more mentally demanding compared to all other techniques because the user needs to remember the mapping between swipe gestures and text granularity, as well as the long-tap for mode switching.

H3. Users will perceive Spatial movement as more physically demanding, as it involves more forearm movements.

H4. Users will make more errors with Raycasting, and it will be more frustrating, because double tapping for target confirmation while holding the phone in one hand introduces more jitter [55].

---

${}^{6}$ http://randomtextgenerator.com/

---



Figure 5: Mean scores for the ranking questionnaire, on a 3-point Likert scale. Higher marks are better. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: $p < .01$ and *: $p < .05$).

H5. Overall, Continuous touch will be the most preferred text selection technique, as it works similarly to a trackpad, which is already familiar to users.

## 5 RESULTS

To test our hypotheses, we conducted a series of analyses using IBM SPSS. Shapiro-Wilk tests showed that the task completion time, error, and questionnaire data were not normally distributed. Therefore, we used Friedman tests with the interaction technique as the independent variable to analyze our experimental data. When significant effects were found, we ran post hoc Wilcoxon signed-rank tests with Bonferroni corrections for all pairwise comparisons. We set $\alpha = 0.05$ for all significance tests. Due to a logging issue, we had to discard one participant and ran the analysis with 19 instead of 20 participants.

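For reference, the Friedman statistic that SPSS reports is computed from per-participant rank sums. The sketch below is a plain-Python illustration of that computation (without the tie correction SPSS applies), not the authors' analysis script:

```python
def friedman_chi_square(*samples):
    """Friedman chi-square for k related samples with n observations each.

    samples[j][i] is participant i's score under condition j. Within each
    participant's row, conditions are ranked 1..k; ties get their mean rank.
    Statistic: 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1).
    """
    k, n = len(samples), len(samples[0])
    rank_sums = [0.0] * k
    for i in range(n):
        row = [s[i] for s in samples]
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        j = 0
        while j < k:
            m = j
            while m + 1 < k and row[order[m + 1]] == row[order[j]]:
                m += 1  # extend run of tied values
            for t in range(j, m + 1):
                ranks[order[t]] = (j + m) / 2 + 1  # 1-based mean rank
            j = m + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)
```

With four techniques, the statistic is compared against a $\chi^2$ distribution with $k - 1 = 3$ degrees of freedom; significant effects are then followed by pairwise Wilcoxon signed-rank tests whose p-values are Bonferroni-corrected (multiplied by the number of comparisons).
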
### 5.1 Task Completion Time

There was a statistically significant difference in task completion time depending on which interaction technique was used for text selection [$\chi^2(3) = 33.37$, $p < .001$] (see Figure 4(a)). Post hoc tests showed that Continuous touch [M = 5.16, SD = 0.84], Spatial movement [M = 5.73, SD = 1.38], and Raycasting [M = 5.43, SD = 1.66] were faster than Discrete touch [M = 8.78, SD = 2.09].

### 5.2 Error Rate

We found a significant effect of the interaction technique on error rate [$\chi^2(3) = 39.45$, $p < .001$] (see Figure 4(b)). Post hoc tests showed that Raycasting [M = 24.21, SD = 13.46] was more error-prone than Continuous touch [M = 1.05, SD = 3.15], Discrete touch [M = 4.73, SD = 9.05], and Spatial movement [M = 8.42, SD = 12.58].

### 5.3 Questionnaires

For NASA-TLX, we found significant differences for mental demand [$\chi^2(3) = 9.65$, $p = .022$], physical demand [$\chi^2(3) = 29.75$, $p < .001$], performance [$\chi^2(3) = 40.14$, $p < .001$], frustration [$\chi^2(3) = 39.53$, $p < .001$], and effort [$\chi^2(3) = 32.69$, $p < .001$]. Post hoc tests showed that Raycasting and Discrete touch had significantly higher mental demand than Continuous touch and Spatial movement. Physical demand, on the other hand, was lowest for Continuous touch, whereas users rated it significantly higher for Raycasting and Spatial movement. In terms of performance, Raycasting was rated significantly lower than the other techniques. Raycasting was also rated significantly more frustrating. Moreover, Continuous touch was the least frustrating and was rated better in performance than Spatial movement. Figure 6 shows a bar chart of the NASA-TLX workload sub-scales for our experiment.

For the ranking questionnaires, there were significant differences for speed [$\chi^2(3) = 26.40$, $p < .001$], accuracy [$\chi^2(3) = 45.5$, $p < .001$], and preference [$\chi^2(3) = 38.56$, $p < .001$]. Post hoc tests showed that users ranked Discrete touch as the slowest and Raycasting as the least accurate technique. The most preferred technique was Continuous touch, whereas Raycasting was the least preferred. Users also favored Discrete touch as well as the Spatial movement based text selection approach. Figure 5 summarizes participants' responses to the ranking questionnaires.

## 6 DISCUSSION & DESIGN IMPLICATIONS

Our results suggest that Continuous Touch was the technique preferred by participants (confirming H5). It was the least physically demanding and the least frustrating technique. It was also more satisfying in terms of performance than the two spatial techniques (Raycasting and Spatial Movement). Finally, it was less mentally demanding than Discrete Touch and Raycasting. Participants pointed out that this technique was simple, intuitive, and familiar to them, as they use trackpads and touchscreens every day. During the training session, we noticed that they took the least time to understand its working principle. In the interview, P8 commented, "I can select text fast and accurately. Although I noticed a bit of overshooting in the cursor positioning, it can be adjusted by tuning CD gain". P17 said, "I can keep my hands down while selecting text. This gives me more comfort".


|
| 208 |
+
|
| 209 |
+
Figure 6: Mean scores for the NASA-TLX task load questionnaire which are in range of 1 to 10. Lower marks are better, except for performance. Error bars show 95% confidence interval. Statistical significances are marked with stars (**: $p < {.01}$ and *: $p < {.05}$ ).
|
| 210 |
+
|
| 211 |
+
On the other hand, Raycasting was the least preferred technique and led to the lowest task accuracy (participants were also the least satisfied with their performance using it). This can be explained by the fact that it was the most physically demanding and the most frustrating technique (confirming H4). Finally, it was more mentally demanding than Continuous Touch and Spatial Movement. In their comments, participants reported a lack of stability due to the one-handed phone-holding posture. Some participants complained that they felt uncomfortable holding the OnePlus 5 phone in one hand, as it was a bit big compared to their hand size. This introduced even more jitter in Raycasting while double-tapping for target confirmation. P10 commented, "I am sure I will perform Raycasting with fewer errors if I can use my both hands to hold the phone". Moreover, from the logged data, we noticed that participants made more mistakes when the target character was positioned inside a word rather than at its beginning or end, which was confirmed in the discussions with participants.

As we expected, Discrete Touch was the slowest technique (confirming H1), but it was not the most mentally demanding, as it was only more demanding than Continuous Touch (rejecting H2). It was also more physically demanding than Continuous Touch, but less so than Spatial Movement and Raycasting. Several participants mentioned that it is excellent for short word-to-word or sentence-to-sentence selections, but not for long text, as multiple swipes are required. They also pointed out that performing mode switching with a 700 ms long-tap was a bit tricky and that they lost some time on it during text selection. Although they got better at it over time, they remained uncertain about succeeding in one attempt.

Finally, contrary to our expectation, Spatial Movement was not the most physically demanding technique, as it was less demanding than Raycasting (but more than Continuous Touch and Discrete Touch). It was also less mentally demanding than Raycasting and led to less frustration. However, it led to more frustration than Continuous Touch, and participants were less satisfied with their performance with it than with Continuous Touch. According to participants, moving the forearm with this technique undoubtedly requires physical effort, but they only needed to move it a very short distance, which was fine for them. From the user interviews, we learned that they did not use much clutching (less than with Continuous Touch). P13 mentioned, "In Spatial Movement, I completed most of the tasks without using clutching at all".

Overall, our results suggest that between touch and spatial interactions, it is better to use touch for text selection, which confirms findings from Siddhpuria et al. for pointing tasks [49]. Continuous Touch was overall preferred, faster, and less demanding than Discrete Touch, which goes against the results of Jain et al. for shape selection [29]. This difference can be explained by the fact that text selection involves at least two levels of discretization (characters and words), which makes it mentally demanding. It can also be explained by the high number of words (and even more characters) in a text, contrary to the number of shapes in Jain et al.'s experiment. This led to a high number of discrete actions for the selection, and thus a higher physical demand. Surprisingly, however, most of the participants appreciated the idea of Discrete Touch. If a tactile interface is not available on the handheld device, our results suggest using a spatial interaction technique with a relative mapping, as we did with Spatial Movement. We could not find any differences in time, contrary to the work of Campbell et al. [9], but a relative mapping leads to fewer errors, which confirms what was found by Vogel and Balakrishnan [52]. It is also less physically and mentally demanding and leads to less frustration than an absolute mapping. On the technical side, a spatial interaction technique with a relative mapping can easily be achieved without an external sensor (as done, for example, by Siddhpuria et al. [49]).

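The relative-versus-absolute distinction can be made concrete: with a relative mapping the cursor integrates scaled displacements (clutching simply pauses the integration), whereas an absolute mapping such as Raycasting re-derives the cursor from the phone's pose every frame, so hand jitter maps directly onto the selection point. A minimal sketch of the relative update, with hypothetical window units and CD gain:

```python
def move_cursor_relative(cursor, delta, gain=1.5, width=1024, height=576):
    """Trackpad-style relative mapping: cursor moves by gain * displacement,
    clamped to the window (cf. the cursor being bounded by the window size
    in Section 4.2). The gain value and window units are illustrative."""
    x = min(max(cursor[0] + gain * delta[0], 0), width)
    y = min(max(cursor[1] + gain * delta[1], 0), height)
    return (x, y)
```

Lifting the thumb (or releasing the clutch in Spatial Movement) suppresses `delta`, so the workspace can be re-anchored at no cost; an absolute mapping offers no such re-anchoring.
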
## 7 LIMITATIONS

There were two major limitations. First, we used an external tracking system, which limited us to a lab study. As a result, it is difficult to assess the social acceptability of each technique until we consider real-world on-the-go situations. However, technical progress in inside-out tracking ${}^{7}$ means that it will soon be possible for smartphones to track themselves accurately in 3D space. Second, some of our participants had difficulty holding the phone in one hand because it was a bit big for their hands. They mentioned that although they tried to move their thumb faster in the Continuous touch and Discrete touch interactions, they could not do so comfortably for fear of dropping the phone. The phone size also influenced their Raycasting performance, particularly when they needed to double tap for target confirmation. Hence, using one phone size for all participants was an important constraint in this experiment.

---

${}^{7}$ https://developers.google.com/ar

---

## 8 CONCLUSION AND FUTURE WORK

In this research, we investigated the use of a smartphone as an eyes-free interactive controller to select text in an augmented reality head-mounted display. We proposed four interaction techniques: two that use the tactile surface of the smartphone, Continuous Touch and Discrete Touch, and two that track the device in space, Spatial Movement and Raycasting. We evaluated these four techniques in a text selection task study. The results suggest that techniques using the tactile surface of the device are better suited for text selection than spatial ones, Continuous Touch being the most efficient. If a tactile surface is not available, it is better to use a spatial technique (i.e., with the device tracked in space) that uses a relative mapping between the user gesture and the virtual screen, rather than a classic Raycasting technique that uses an absolute mapping.

In this work, we focused on interaction techniques based on smartphone input. This allowed us to better understand which approach should be favored in this context. In the future, it would be interesting to explore a more global usage scenario, such as a text editing interface in an AR-HMD using smartphone-based input, where users need to simultaneously perform other interaction tasks such as text input and command execution. Another direction for future work is to compare phone-based techniques to other input techniques such as hand tracking, head/eye gaze, and voice commands. Furthermore, we only considered the standing condition; it would be interesting to study text selection performance while the user is walking.

## REFERENCES

[1] M. Al-Sada, F. Ishizawa, J. Tsurukawa, and T. Nakajima. Input forager:
|
| 238 |
+
|
| 239 |
+
A user-driven interaction adaptation approach for head worn displays. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, MUM '16, p. 115-122. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/3012709. 3012719
|
| 240 |
+
|
| 241 |
+
[2] T. Ando, T. Isomoto, B. Shizuki, and S. Takahashi. Press & tilt: One-handed text selection and command execution on smartphone. In Proceedings of the 30th Australian Conference on Computer-Human Interaction, OzCHI '18, p. 401-405. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3292147.3292178
|
| 242 |
+
|
| 243 |
+
[3] T. Ando, T. Isomoto, B. Shizuki, and S. Takahashi. One-handed rapid text selection and command execution method for smartphones. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA '19, p. 1-6. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290607. 3312850
|
| 244 |
+
|
| 245 |
+
[4] M. Baldauf, P. Fröhlich, J. Buchta, and T. Stürmer. From touchpad to smart lens. International Journal of Mobile Human Computer Interaction, 5:1-20, 08 2015. doi: 10.4018/jmhci.2013040101
|
| 246 |
+
|
| 247 |
+
[5] S. Boring, D. Ledo, X. A. Chen, N. Marquardt, A. Tang, and S. Greenberg. The fat thumb: Using the thumb's contact size for single-handed mobile interaction. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '12, p. 39-48. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2371574.2371582
|
| 248 |
+
|
| 249 |
+
[6] D. A. Bowman, E. Kruijff, J. J. LaViola, and I. Poupyrev. 3D User Interfaces: Theory and Practice. Addison Wesley Longman Publishing Co., Inc., USA, 2004.
|
| 250 |
+
|
| 251 |
+
[7] R. Budhiraja, G. A. Lee, and M. Billinghurst. Using a hhd with a hmd for mobile ar interaction. In 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1-6, 2013. doi: 10. 1109/ISMAR.2013.6671837
|
| 252 |
+
|
| 253 |
+
[8] W. Büschel, A. Mitschick, T. Meyer, and R. Dachselt. Investigating smartphone-based pan and zoom in $3\mathrm{\;d}$ data spaces in augmented
|
| 254 |
+
|
| 255 |
+
reality. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI
|
| 256 |
+
|
| 257 |
+
'19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3338286.3340113
|
| 258 |
+
|
| 259 |
+
[9] B. Campbell, K. O'Brien, M. Byrne, and B. Bachman. Fitts' law predictions with an alternative pointing device (wiimote(r)). Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 52,09 2008. doi: 10.1177/154193120805201904
|
| 260 |
+
|
| 261 |
+
[10] S. K. Card, W. K. English, and B. J. Burr. Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys for text selection on a crt. Ergonomics, 21(8):601-613, 1978. doi: 10.1080/ 00140137808931762
|
| 262 |
+
|
| 263 |
+
[11] G. Casiez, N. Roussel, and D. Vogel. 1€ Filter: A Simple Speed-based Low-pass Filter for Noisy Input in Interactive Systems. In CHI'12, the 30th Conference on Human Factors in Computing Systems, pp. 2527- 2530. ACM, Austin, United States, May 2012. doi: 10.1145/2207676. 2208639
|
| 264 |
+
|
| 265 |
+
[12] G. Casiez, D. Vogel, Q. Pan, and C. Chaillou. Rubberedge: Reducing clutching by combining position and rate control with elastic feedback. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, UIST '07, p. 129-138. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1294211. 1294234
|
| 266 |
+
|
| 267 |
+
[13] L.-W. Chan, H.-S. Kao, M. Y. Chen, M.-S. Lee, J. Hsu, and Y.-P. Hung. Touching the Void: Direct-Touch Interaction for Intangible Displays, p. 2625-2634. Association for Computing Machinery, New York, NY, USA, 2010.
|
| 268 |
+
|
| 269 |
+
[14] I. Chatterjee, R. Xiao, and C. Harrison. Gaze+gesture: Expressive, precise and targeted free-space interactions. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ICMI '15, p. 131-138. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2818346.2820752
|
| 270 |
+
|
| 271 |
+
[15] C. Chen, S. T. Perrault, S. Zhao, and W. T. Ooi. Bezelcopy: An efficient cross-application copy-paste technique for touchscreen smartphones. In Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces, AVI '14, p. 185-192. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2598153. 2598162
|
| 272 |
+
|
| 273 |
+
[16] Y. Chen, K. Katsuragawa, and E. Lank. Understanding viewport-and world-based pointing with everyday smart devices in immersive augmented reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/ 3313831.3376592
|
| 274 |
+
|
| 275 |
+
[17] J. J. Dudley, K. Vertanen, and P. O. Kristensson. Fast and precise touch-based text entry for head-mounted augmented reality with variable occlusion. ACM Trans. Comput.-Hum. Interact., 25(6), Dec. 2018. doi: 10.1145/3232163
|
| 276 |
+
|
| 277 |
+
[18] A. K. Eady and A. Girouard. Caret manipulation using deformable input in mobile devices. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '15, p. 587-591. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2677199.2687916
|
| 278 |
+
|
| 279 |
+
[19] V. Fuccella, P. Isokoski, and B. Martin. Gestures and widgets: Performance in text editing on multi-touch capable mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ' 13, p. 2785-2794. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2481385
|
| 280 |
+
|
| 281 |
+
[20] D. Ghosh, P. S. Foong, S. Zhao, C. Liu, N. Janaka, and V. Erusu. Eyeditor: Towards on-the-go heads-up text editing using voice and manual input. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/ 3313831.3376173
|
| 282 |
+
|
| 283 |
+
[21] A. Goguey, S. Malacria, and C. Gutwin. Improving discoverability and expert performance in force-sensitive text selection for touch devices with mode gauges. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, p. 1-12. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/ 3173574.3174051
|
| 284 |
+
|
| 285 |
+
[22] J. Grubert, M. Heinisch, A. Quigley, and D. Schmalstieg. Multifi: Multi fidelity interaction with displays on and around the body. In Proceedings of the 33rd Annual ACM Conference on Human Factors
|
| 286 |
+
|
| 287 |
+
in Computing Systems, CHI '15, p. 3933-3942. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123. 2702331
|
| 288 |
+
|
| 289 |
+
[23] S. G. Hart. Nasa-task load index (nasa-tlx); 20 years later. In Proceedings of the human factors and ergonomics society annual meeting, vol. 50, pp. 904-908. Sage publications Sage CA: Los Angeles, CA, 2006.
|
| 290 |
+
|
| 291 |
+
[24] C. Hoffman. 42+ text-editing keyboard shortcuts that work almost everywhere. https://www.howtogeek.com/115664/42-text-editing-keyboard-shortcuts-that-work-almost-everywhere/, 2020. Accessed: 2020-11-01.
|
| 292 |
+
|
| 293 |
+
[25] Y.-T. Hsieh, A. Jylhä, V. Orso, L. Gamberini, and G. Jacucci. Designing a willing-to-use-in-public hand gestural interaction technique for smart glasses. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 4203-4215. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036. 2858436
|
| 294 |
+
|
| 295 |
+
[26] W. Huan, H. Tu, and Z. Li. Enabling finger pointing based text selection on touchscreen mobile devices. In Proceedings of the Seventh International Symposium of Chinese CHI, Chinese CHI '19, p. 93-96. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3332169.3332172
|
| 296 |
+
|
| 297 |
+
[27] A. Inc. Mac keyboard shortcuts. https://support.apple.com/ en-us/HT201236, 2020. Accessed: 2020-11-01.
|
| 298 |
+
|
| 299 |
+
[28] R. J. K. Jacob. What you look at is what you get: Eye movement-based interaction techniques. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '90, p. 11-18. Association for Computing Machinery, New York, NY, USA, 1990. doi: 10.1145/ 97243.97246
|
| 300 |
+
|
| 301 |
+
[29] M. Jain, A. Cockburn, and S. Madhvanath. Comparison of Phone-Based Distal Pointing Techniques for Point-Select Tasks. In P. Kotzé, G. Marsden, G. Lindgaard, J. Wesson, and M. Winckler, eds., 14th International Conference on Human-Computer Interaction (INTERACT), vol. LNCS-8118 of Human-Computer Interaction - INTERACT 2013, pp. 714-721. Springer, Cape Town, South Africa, Sept. 2013. Part 15: Mobile Interaction Design. doi: 10.1007/978-3-642-40480-1 49
|
| 302 |
+
|
| 303 |
+
[30] S. Jang, W. Stuerzlinger, S. Ambike, and K. Ramani. Modeling cumulative arm fatigue in mid-air interaction based on perceived exertion and kinetics of arm motion. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 3328-3339. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025523
|
| 304 |
+
|
| 305 |
+
[31] M. Kytö, B. Ens, T. Piumsomboon, G. A. Lee, and M. Billinghurst. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2018.
|
| 306 |
+
|
| 307 |
+
[32] H. V. Le, S. Mayer, M. Weiß, J. Vogelsang, H. Weingärtner, and N. Henze. Shortcut gestures for mobile text editing on fully touch sensitive smartphones. ACM Trans. Comput.-Hum. Interact., 27(5), Aug. 2020. doi: 10.1145/3396233
|
| 308 |
+
|
| 309 |
+
[33] M. Leap. Magic leap handheld controller. https: //developer.magicleap.com/en-us/learn/guides/ design-magic-leap-one-control, 2020. Accessed: 2020- 11-01.
|
| 310 |
+
|
| 311 |
+
[34] C.-J. Lee and H.-K. Chu. Dual-mr: Interaction with mixed reality using smartphones. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, VRST '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/ 3281505.3281618
|
| 312 |
+
|
| 313 |
+
[35] H. Lee, D. Kim, and W. Woo. Graphical menus using a mobile phone for wearable ar systems. In 2011 International Symposium on Ubiquitous Virtual Reality, pp. 55-58. IEEE, 2011.
|
| 314 |
+
|
| 315 |
+
[36] L. Lee, Y. Zhu, Y. Yau, T. Braud, X. Su, and P. Hui. One-thumb text acquisition on force-assisted miniature interfaces for mobile headsets. In 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 1-10, 2020. doi: 10.1109/
|
| 316 |
+
|
| 317 |
+
PerCom45495.2020.9127378
|
| 318 |
+
|
| 319 |
+
[37] L. H. Lee, K. Yung Lam, Y. P. Yau, T. Braud, and P. Hui. Hibey: Hide the keyboard in augmented reality. In 2019 IEEE International
|
| 320 |
+
|
| 321 |
+
Conference on Pervasive Computing and Communications (PerCom, pp. 1-10, 2019. doi: 10.1109/PERCOM.2019.8767420
|
| 322 |
+
|
| 323 |
+
[38] N. C. Ltd. Nintendo wii. http://wii.com/, 2006. Accessed: 2020- 11-03.
|
| 324 |
+
|
| 325 |
+
[39] P. Lubos, G. Bruder, and F. Steinicke. Analysis of direct selection in head-mounted display environments. In 2014 IEEE Symposium on 3D User Interfaces (3DUI), pp. 11-18, 2014. doi: 10.1109/3DUI.2014. 6798834
|
| 326 |
+
|
| 327 |
+
[40] A. Millette and M. J. McGuffin. Dualcad: Integrating augmented reality with a desktop gui and smartphone interaction. In 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 21-26, 2016. doi: 10.1109/ISMAR-Adjunct.2016.0030
|
| 328 |
+
|
| 329 |
+
[41] M. R. Mine. Virtual environment interaction techniques. Technical report, USA, 1995.
|
| 330 |
+
|
| 331 |
+
[42] M. Nancel, O. Chapuis, E. Pietriga, X.-D. Yang, P. P. Irani, and M. Beaudouin-Lafon. High-precision pointing on large wall displays using small handheld devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 831-840. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2470773
|
| 332 |
+
|
| 333 |
+
[43] E. Normand and M. J. McGuffin. Enlarging a smartphone with ar to create a handheld vesad (virtually extended screen-aligned display). In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 123-133, 2018. doi: 10.1109/ISMAR.2018.00043
|
| 334 |
+
|
| 335 |
+
[44] Nreal. https://www.nreal.ai/, 2020. Accessed: 2020-11-03.
|
| 336 |
+
|
| 337 |
+
[45] D.-M. Pham and W. Stuerzlinger. Hawkey: Efficient and versatile text entry for virtual reality. In 25th ACM Symposium on Virtual Reality Software and Technology, VRST '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3359996.3364265
|
| 338 |
+
|
| 339 |
+
[46] J. Ren, Y. Weng, C. Zhou, C. Yu, and Y. Shi. Understanding window management interactions in ar headset + smartphone interface. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, p. 1-8. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3334480. 3382812
|
| 340 |
+
|
| 341 |
+
[47] R. Rivu, Y. Abdrabou, K. Pfeuffer, M. Hassib, and F. Alt. Gaze'n'touch: Enhancing text selection on mobile devices using gaze. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, p. 1-8. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3334480.3382802
[48] H. Ro, J. Byun, Y. Park, N. Lee, and T. Han. Ar pointer: Advanced ray-casting interface using laser pointer metaphor for object manipulation in 3d augmented reality environment. Applied Sciences (Switzerland), 9(15), Aug. 2019. doi: 10.3390/app9153078
[49] S. Siddhpuria, S. Malacria, M. Nancel, and E. Lank. Pointing at a Distance with Everyday Smart Devices, p. 1-11. Association for Computing Machinery, New York, NY, USA, 2018.
[50] S. Sindhwani, C. Lutteroth, and G. Weber. Retype: Quick text editing with keyboard and gaze. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300433
[51] R. Technology. iPhone air mouse. http://mobilemouse.com/, 2008. Accessed: 2020-11-03.
[52] D. Vogel and R. Balakrishnan. Distant freehand pointing and clicking on very large, high resolution displays. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, UIST '05, p. 33-42. Association for Computing Machinery, New York, NY, USA, 2005. doi: 10.1145/1095034.1095041
[53] K. Waldow, M. Misiak, U. Derichs, O. Clausen, and A. Fuhrmann. An evaluation of smartphone-based interaction in ar for constrained object manipulation. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, VRST '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3281505.3281608
[54] C. Ware and H. H. Mikaelian. An evaluation of an eye tracker as a device for computer input. In Proceedings of the SIGCHI/GI Conference on Human Factors in Computing Systems and Graphics Interface, CHI '87, p. 183-188. Association for Computing Machinery, New York, NY, USA, 1987. doi: 10.1145/29933.275627
[55] D. Wolf, J. Gugenheimer, M. Combosch, and E. Rukzio. Understanding the heisenberg effect of spatial interaction: A selection induced error for spatially tracked input devices. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-10. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376876
[56] W. Xu, H. Liang, A. He, and Z. Wang. Pointing and selection methods for text entry in augmented reality head mounted displays. In 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 279-288, 2019. doi: 10.1109/ISMAR.2019.00026
[57] W. Xu, H.-N. Liang, Y. Zhao, D. Yu, and D. Monteiro. Dmove: Directional motion-based interaction for augmented reality head-mounted displays. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300674
[58] Y. Yan, Y. Shi, C. Yu, and Y. Shi. Headcross: Exploring head-based crossing selection on head-mounted displays. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 4(1), Mar. 2020. doi: 10.1145/3380983
[59] M. Zhang and J. O. Wobbrock. Gedit: Keyboard gestures for mobile text editing. In Proceedings of Graphics Interface 2020, GI 2020, pp. 470-473. Canadian Human-Computer Communications Society, 2020. doi: 10.20380/GI2020.47
[60] F. Zhu and T. Grossman. Bishare: Exploring bidirectional interactions between smartphones and head-mounted augmented reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376233
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/FX0nrz8XD3I/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,227 @@
§ EXPLORING SMARTPHONE-ENABLED TEXT SELECTION IN AR-HMD
Category: Research
(Figure 1 image: panel titled "Continuous Text Selection User Study"; the sample filler text in the screenshot is not legible.)
Figure 1: (a) The overall experimental setup consisted of a HoloLens, a smartphone, and an OptiTrack system. (b) In the HoloLens view, a user sees two text windows. The right one is the 'instruction panel', where the subject sees the text to select. The left one is the 'action panel', where the subject performs the actual selection. The cursor is shown inside a green dotted box (for illustration purposes only) on the action panel. For each text selection task, the cursor position always starts from the center of the window.
§ ABSTRACT
Text editing is important and at the core of most complex tasks, like writing an email or browsing the web. Efficient and sophisticated techniques exist on desktops and touch devices, but they are still under-explored for Augmented Reality Head-Mounted Displays (AR-HMD). Text selection, a necessary step before text editing, is commonly performed in AR displays using techniques such as hand-tracking, voice commands, and eye/head-gaze, which are cumbersome and lack precision. In this paper, we explore the use of a smartphone as an input device to support text selection in AR-HMD because of its availability, familiarity, and social acceptability. We propose four eyes-free text selection techniques, all using a smartphone - continuous touch, discrete touch, spatial movement, and raycasting. We compare them in a user study where users have to select text at various granularity levels. Our results suggest that continuous touch, in which the smartphone is used as a trackpad, outperforms the other three techniques in terms of task completion time, accuracy, and user preference.
Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
§ 1 INTRODUCTION
Text input and text editing represent a significant portion of our everyday digital tasks. We need them when we browse the web, write emails, or just type a password. Because of this ubiquity, they have been the focus of research on most of the platforms we use daily, like desktops, tablets, and mobile phones. The recent focus of the industry on Augmented Reality Head-Mounted Displays (AR-HMD), with the development of devices like the Microsoft HoloLens ${}^{1}$ and Magic Leap ${}^{2}$, has made them more and more accessible, and their usage is envisioned in our future everyday life. The lack of a physical keyboard and mouse (i.e., the absence of interactive surfaces) with such devices makes text input difficult and an important challenge in AR research. While text input for AR-HMD has already been well studied [17, 37, 45, 56], very little research has focused on editing text that has already been typed by a user. Text editing is a complex task, and its first step is selecting the text to edit; this paper focuses only on this text selection part. Such tasks have already been studied on desktops [10] with various modalities (like gaze+gesture [14] or gaze with keyboard [50]) as well as on touch interfaces [21]. On the other hand, no formal experiments have been conducted in AR-HMD contexts.
In general, text selection in AR-HMD can be performed using various input modalities, notably hand-tracking, eye/head-gaze, voice commands [20], and handheld controllers [33]. However, these techniques have their limitations. For instance, hand-tracking struggles to achieve character-level precision [39], lacks haptic feedback [13], and provokes arm fatigue [30] during prolonged interaction. Eye-gaze and head-gaze suffer from the 'Midas Touch' problem, which causes unintended activation of commands in the absence of a proper selection mechanism [28, 31, 54, 58]. Moreover, frequent head movements in head-gaze interaction increase motion sickness [57]. Voice interaction might not be socially acceptable in public places [25], and it may disturb the communication flow when several users are collaborating. In the case of a dedicated handheld controller, users always need to carry extra specific hardware with them.
Recently, researchers have been exploring the use of a smartphone as an input device for AR-HMD because of its availability (it can even be the processing unit of the HMD [44]), familiarity, social acceptability, and tangibility [8, 22, 60]. Undoubtedly, there is huge potential for designing novel cross-device applications with a combination of an AR display and a smartphone. In the past, smartphones have been used for interacting with different applications running on AR-HMDs, such as manipulating 3D objects [40], managing windows [46], and selecting graphical menus [35]. However, we are unaware of any research that has investigated text selection in an AR display using a commercially available smartphone. In this work, we explored different approaches to selecting text when using a smartphone as an input controller. We propose four eyes-free text selection techniques for AR displays. These techniques, described in Section 3.1, differ with regard to the mapping of smartphone-based inputs - touch or spatial. We then conducted a user study to compare these four techniques in terms of text selection task performance.
The main contributions of this paper are: (1) the design and development of a set of smartphone-enabled text selection techniques for AR-HMD; (2) insights from a 20-person comparative study of these techniques in text selection tasks.
${}^{1}$ https://www.microsoft.com/en-us/hololens
${}^{2}$ https://www.magicleap.com/en-us
§ 2 RELATED WORK
In this section, we review previous work on text selection and editing in AR and on smartphones. We also review research that combines handheld devices with HMDs and large wall displays.
§ 2.1 TEXT SELECTION AND EDITING IN AR
Very little research has focused on text editing in AR. Ghosh et al. presented EYEditor to facilitate on-the-go text editing on a smart-glass with a combination of voice and a handheld controller [20]. They used voice to modify the text content, while manual input is used for text navigation and selection. The use of a handheld device is inspiring for our work; however, voice interaction might not be suitable in public places. Lee et al. [36] proposed two force-assisted text acquisition techniques where the user exerts a force on a thumb-sized circular button located on an iPhone 7 and selects text shown on a laptop emulating the Microsoft HoloLens display. They envision that this miniature force-sensitive area (12 mm × 13 mm) can be fitted into a smart-ring. Although their result is promising, a specific force-sensitive device is required.
In this paper, we follow the direction of the two papers previously presented and continue to explore the use of a smartphone in combination with an AR-HMD. While their use for text selection is still rare, it has been investigated more broadly for other tasks.
§ 2.2 COMBINING HANDHELD DEVICES AND HMDS/LARGE WALL DISPLAYS
By combining handheld devices and HMDs, researchers try to make the most of the benefits of both [60]. On one hand, the handheld device brings a 2D high-resolution display that provides a multi-touch, tangible, and familiar interactive surface. On the other hand, HMDs provide a spatialized, 3D, and almost infinite workspace. With MultiFi [22], Grubert et al. showed that such a combination is more efficient than a single device for pointing and searching tasks. For a similar setup, Zhu and Grossman proposed a set of techniques and demonstrated how it can be used to manipulate 3D objects [60]. Similarly, Ren et al. [46] demonstrated how it can be used to perform windows management. Finally, in VESAD [43], Normand et al. used AR to directly extend the smartphone display.
Regarding the type of input provided by the handheld device, it is possible to only focus on touch interactions, as is proposed in Input Forager [1] and Dual-MR [34]. Waldow et al. compared the use of touch to perform 3D object manipulation with gaze and mid-air gestures and showed that touch was more efficient [53]. It is also possible to track the handheld device in space and allow for 3D spatial interactions. This was done in DualCAD, in which Millette and McGuffin used a smartphone tracked in space to create and manipulate shapes using both spatial interactions and touch gestures [40]. With ARPointer [48], Ro et al. proposed a similar system and showed it led to better performance for object manipulation than a mouse and keyboard and a combination of gaze and mid-air gestures. When comparing the use of touch and spatial interaction with a smartphone, Budhiraja et al. showed that touch was preferred by participants for a pointing task [7], but Büschel et al. showed that spatial interaction was more efficient and preferred for a navigation task in 3D [8]. In both cases, Chen et al. showed that the interaction should be viewport-based and not world-based [16].
Overall, previous research showed that a handheld device provides a good alternative input for augmented reality displays in various tasks. In this paper, we focus on a text selection task, which has not been studied yet. It is not yet clear whether only tactile interactions should be used on the handheld device or whether it should also be tracked to provide spatial interactions. Thus, we propose both alternatives in our techniques and compare them.
The use of handheld devices as input was also investigated in combination with large wall displays. This is a use case close to the one presented in this paper, as text is displayed inside a 2D virtual window. Campbell et al. studied the use of a Wiimote as a distant pointing device [9]. With a pointing task, the authors compared its use with an absolute mapping (i.e., raytracing) to a relative mapping, and showed that participants were faster with the absolute mapping. Vogel and Balakrishnan found similar results between the two mappings (with the difference that they directly tracked the hand), but only with large targets and when clutching was necessary [52]. They also found that participants had a lower accuracy with an absolute mapping. This lower accuracy for an absolute mapping with spatial interaction was also shown in a comparison with distant touch interaction using the handheld device as a trackpad, with the same task [4]. Jain et al. also compared touch interaction with spatial interaction, but with a relative mapping, and found that the spatial interaction was faster but less accurate [29]. The accuracy result is confirmed by a recent study from Siddhpuria et al., in which the authors also compared the use of absolute and relative mappings with touch interaction, and found that the relative mapping is faster [49]. These studies were all done for a pointing task, and overall showed that using the handheld device as a trackpad (so with a relative mapping) is more efficient (to avoid clutching, one can change the transfer function [42]). In their paper, Siddhpuria et al. highlighted the fact that more studies needed to be done with a more complex task to validate their results. To our knowledge, this has been done only by Baldauf et al. with a drawing task, and they showed that spatial interaction with an absolute mapping was faster than using the handheld device as a trackpad without any impact on accuracy [4]. In this paper, we take a step in this direction and use a text selection task. Considering the result from Baldauf et al., we cannot assume that touch interaction will perform better.
§ 2.3 TEXT SELECTION ON HANDHELD DEVICES
Text selection has not yet been investigated with the combination of a handheld device and an AR-HMD, but it has been studied on handheld devices independently. On a touchscreen, adjustment handles are the primary form of text selection technique. However, due to the fat-finger problem [5], it can be difficult to modify the selection by one character. A first solution is to allow users to only select the start and the end of the selection, as is done in TextPin, which was shown to be more efficient than the default technique [26]. Fuccella et al. [19] and Zhang et al. [59] proposed to use the keyboard area to let the user control the selection using gestures, and showed this was also more efficient than the default technique. Ando et al. adapted the principle of shortcuts and associated different actions with the keys of the virtual keyboard, activated with a modifier action. In the first paper, the modifier was the tilting of the device [2]; in a second one, it was a sliding gesture starting on the key [3]. The latter was more efficient than the former and the default technique. With BezelCopy [15], a gesture on the bezel of the phone allows for a first rough selection that can be refined afterwards. Finally, other solutions used non-traditional smartphones. Le et al. used a fully touch-sensitive device to allow users to perform gestures on the back of the device [32]. GazeN'Touch [47] used gaze to define the start and end of the selection. Goguey et al. explored the use of a force-sensitive screen to control the selection [21], and Eady and Girouard used a deformable screen to explore the use of the bending of the screen [18].
In this work, we choose to focus on commercially available smartphones, and we will not explore the use of deformable or fully touch-sensitive ones. Compared to the use of shortcuts, the use of gestures seems to lead to good performance and can be performed without looking at the screen (i.e., eyes-free), which avoids transitions between the AR virtual display and the handheld device.
Figure 2: Illustrations of our proposed interaction techniques: (a) continuous touch; (b) discrete touch; (c) spatial movement; (d) raycasting.
§ 3 DESIGNING SMARTPHONE-BASED TEXT SELECTION IN AR-HMD
§ 3.1 PROPOSED TECHNIQUES
Previous work used a smartphone as an input device to interact with virtual content in AR-HMD mainly in two ways - using touch input from the smartphone, or tracking the smartphone spatially like an AR/VR controller. Similar work on wall displays suggested that using the smartphone as a trackpad would be the most efficient technique, but this was tested with a pointing task (see Related Work). With a drawing task (which could be closer to a text selection task than a pointing task), spatial interaction was actually better [4].
Inspired by this, we propose four eyes-free text selection techniques for AR-HMD - two are completely based on mobile touchscreen interaction, whereas the smartphone needs to be tracked in mid-air for the latter two, which use spatial interactions. For spatial interaction, we chose a technique with an absolute mapping (Raycasting) and one with a relative mapping (Spatial Movement). The comparison between the two is not straightforward in our case: previous results suggest that a relative mapping would have better accuracy, but an absolute one would be faster. For touch interaction, we chose not to use an absolute mapping, as its use with a large virtual window could lead to poor accuracy [42], and kept only techniques that use a relative mapping. In addition to the traditional use of the smartphone as a trackpad (Continuous Touch), we propose a technique that allows for a discrete selection of text (Discrete Touch). Such a discrete selection mechanism has shown good results in a similar context for shape selection [29]. Overall, while we took inspiration from previous work for these techniques, they have never been assessed together for a text selection task.
To select text successfully using any of our proposed techniques, a user needs to follow the same sequence of steps each time. First, she moves the cursor, located on the text window in the AR display, to the beginning of the text to be selected (i.e., the first character). Then, she performs a double tap on the phone to confirm the selection of that first character. She can see on the headset screen that the first character is highlighted in yellow. At the same time, she enters the text selection mode. Next, she continues moving the cursor to the end position of the text using one of the techniques presented below. While the cursor is moving, the text is simultaneously highlighted up to the current position of the cursor. Finally, she ends the text selection with a second double tap.
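The two-phase flow above can be sketched as a small state machine. The following is a minimal illustration with hypothetical names, not the authors' actual implementation; cursor positions are simplified to character indices:

```python
class SelectionController:
    """Two-phase selection: a first double tap anchors the start,
    cursor moves extend the highlight, a second double tap commits."""
    IDLE, SELECTING = range(2)

    def __init__(self):
        self.state = self.IDLE
        self.cursor = 0            # character index under the cursor
        self.anchor = None         # index of the first selected character
        self.highlight = (None, None)

    def move_cursor(self, index):
        self.cursor = index
        if self.state == self.SELECTING:
            # Highlight everything between the anchor and the cursor.
            lo, hi = sorted((self.anchor, self.cursor))
            self.highlight = (lo, hi)

    def double_tap(self):
        if self.state == self.IDLE:
            # First double tap: anchor the selection start.
            self.anchor = self.cursor
            self.highlight = (self.cursor, self.cursor)
            self.state = self.SELECTING
            return None
        # Second double tap: commit and leave selection mode.
        self.state = self.IDLE
        return self.highlight
```

Any of the four techniques below only differs in how `move_cursor` is driven; the anchoring and committing logic stays the same.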
§ 3.1.1 CONTINUOUS TOUCH
In continuous touch, the smartphone touchscreen acts as a trackpad (see Figure 2(a)). It is an indirect pointing technique where the user moves her thumb on the touchscreen to change the cursor position on the AR display. For the mapping between display and touchscreen, we used a relative mode with clutching. As clutching may degrade performance [12], a control-display (CD) gain was applied to minimize it (see Section 3.2).
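A relative mapping with clutching can be sketched as follows (hypothetical names; a constant gain stands in for the speed-dependent CD gain of Section 3.2). The key property is that lifting the thumb and repositioning it (the clutch) moves nothing:

```python
class TrackpadMapper:
    """Relative mapping with clutching: the cursor only moves while the
    thumb is down; a lift-and-reposition leaves the cursor in place."""

    def __init__(self, cd_gain=1.0):
        self.cd_gain = cd_gain
        self.last_touch = None   # None while the thumb is lifted

    def touch_down(self, pos):
        self.last_touch = pos

    def touch_up(self):
        self.last_touch = None

    def touch_move(self, pos, cursor):
        """Translate a finger move into a scaled cursor move."""
        if self.last_touch is None:
            return cursor
        dx = (pos[0] - self.last_touch[0]) * self.cd_gain
        dy = (pos[1] - self.last_touch[1]) * self.cd_gain
        self.last_touch = pos
        return (cursor[0] + dx, cursor[1] + dy)
```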
§ 3.1.2 DISCRETE TOUCH
This technique is inspired by text selection with keyboard shortcuts, available in both Mac [27] and Windows [24] OS. In this work, we tried to emulate a few keyboard shortcuts, particularly those for character-, word-, and line-level text selection. For example, in Mac OS, holding down Shift and pressing → or ← extends the text selection one character to the right or left, whereas holding down Shift+Option and pressing → or ← selects text one word to the right or left. To extend the selection to the nearest character at the same horizontal location on the line above or below, a user holds down Shift and presses ↑ or ↓ respectively. In discrete touch interaction, we replicated all these shortcuts using directional swipe gestures (see Figure 2(b)). A left or right swipe can select text at two levels - word as well as character. By default, it works at the word level. The user performs a long-tap, which acts as a toggle, to switch between word- and character-level selection. An up or down swipe selects text one line above or below the current position. The user can only select one character/word/line at a time with its respective swipe gesture.
Note that, to select text using discrete touch, a user first positions the cursor on top of the starting word (not the starting character) of the text to be selected by touch dragging on the smartphone as described in the continuous touch technique. From a pilot study, we observed that moving the cursor every time to the starting word using discrete touch makes the overall interaction slow. Then, she selects that first word with the double tap and uses discrete touch to select text up to the end position as described before.
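The gesture-to-command mapping for discrete touch can be summarized in a few lines (a sketch with our own names, not the authors' code): swipes extend the selection by one unit at the current granularity, and a long tap toggles between word and character granularity.

```python
def discrete_touch_command(gesture, granularity):
    """Map a discrete-touch gesture to a (selection command, granularity)
    pair. granularity is "word" or "char"; a long tap toggles it and
    issues no selection command."""
    if gesture == "long_tap":
        return None, ("char" if granularity == "word" else "word")
    commands = {
        "swipe_right": f"extend_{granularity}_right",
        "swipe_left": f"extend_{granularity}_left",
        "swipe_up": "extend_line_up",      # line-level moves ignore granularity
        "swipe_down": "extend_line_down",
    }
    return commands[gesture], granularity
```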
§ 3.1.3 SPATIAL MOVEMENT
This technique emulates the smartphone as an air-mouse [38, 51] for AR-HMD. To control the cursor position on the headset screen, the user holds the phone in front of her torso, places her thumb on the touchscreen, and then moves the phone in the air with small forearm motions in a plane perpendicular to the gaze direction (see Figure 2(c)). While the phone moves, its tracked XY position is translated into cursor movement in XY-coordinates inside the 2D window. When the user wants to stop the cursor movement, she simply lifts her thumb from the touchscreen. Thumb touch-down and touch-release events define the start and stop of the cursor movement on the AR display. The user determines the speed of the cursor by simply moving the phone faster or slower. We applied a CD gain between the phone movement and the cursor displacement on the text window (see Section 3.2).
| Technique | $CD_{Max}$ | $CD_{Min}$ | $\lambda$ | $V_{inf}$ |
|---|---|---|---|---|
| Continuous Touch | 28.34 | 0.0143 | 36.71 | 0.039 |
| Spatial Movement | 23.71 | 0.0221 | 32.83 | 0.051 |

Table 1: Logistic function parameter values for continuous touch and spatial movement interactions
§ 3.1.4 RAYCASTING
Raycasting is a popular interaction technique in AR/VR environments to select 3D virtual objects [6, 41]. In this work, we developed a smartphone-based raycasting technique for selecting text displayed on a 2D window in AR-HMD (see Figure 2(d)). A 6-DoF tracked smartphone was used to define the origin and orientation of the ray. In the headset display, the user sees the ray as a straight line emerging from the top of the phone. By default, the ray is always visible to users in AR-HMD as long as the phone is being tracked properly. In raycasting, the user makes small angular wrist movements to point at the text content using the ray. The cursor appears where the ray hits the text window. Compared to the other proposed methods, raycasting does not require clutching, as it allows direct pointing at the target. The user confirms the target selection on the AR display by providing a touch input (i.e., a double tap) from the phone.
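Computing where the ray meets the flat text window is a standard ray-plane intersection. The following is a minimal sketch (our own formulation with hypothetical names; real systems would use the engine's vector types):

```python
def ray_panel_hit(origin, direction, panel_point, panel_normal):
    """Return the 3D point where a ray (origin + t * direction) hits the
    panel's plane, or None if the ray is parallel to the panel or the
    panel lies behind the ray origin. Vectors are (x, y, z) tuples."""
    denom = sum(d * n for d, n in zip(direction, panel_normal))
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the panel
    diff = [p - o for p, o in zip(panel_point, origin)]
    t = sum(d * n for d, n in zip(diff, panel_normal)) / denom
    if t < 0:
        return None                      # panel is behind the phone
    return tuple(o + t * d for o, d in zip(origin, direction))
```

The 3D hit point would then be converted to the panel's local 2D coordinates to place the cursor.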
§ 3.2 IMPLEMENTATION
To prototype our proposed interaction techniques, we used a Microsoft HoloLens 2 (42° × 29° screen) as the AR-HMD device and a OnePlus 5 as the smartphone. For spatial movement and raycasting interactions, real-time pose information of the smartphone is needed. An OptiTrack ${}^{3}$ system with three Flex-13 cameras was used for accurate tracking with low latency. To bring the HoloLens and the smartphone into a common coordinate system, we attached passive reflective markers to them and performed a calibration between HoloLens space and OptiTrack space.
In our software framework, the AR application running on HoloLens was implemented using Unity3D (2018.4) and the Mixed Reality Toolkit ${}^{4}$. To render text in HoloLens, we used TextMeshPro. A Windows 10 workstation was used to stream tracking data to HoloLens. All pointing techniques with the phone were also developed using Unity3D. We used the UNet ${}^{5}$ library for client-server communication between devices over the WiFi network.
For continuous touch and spatial movement interactions, we used a generalized logistic function [42] to define the control-display (CD) gain between the move events either on the touchscreen or in the air and the cursor displacement in the AR display:
$$
CD\left( v\right) = \frac{CD_{Max} - CD_{Min}}{1 + e^{-\lambda \times \left( v - V_{inf}\right) }} + CD_{Min} \tag{1}
$$
$CD_{Max}$ and $CD_{Min}$ are the asymptotic maximum and minimum amplitudes of the CD gain, and $\lambda$ is a parameter proportional to the slope of the function at $v = V_{inf}$, with $V_{inf}$ the inflection value of the function. We derived initial values from the parameters of the definitions from Nancel et al. [42], and then empirically optimized them for each technique. The parameters were not changed during the study for individual participants. The values are summarized in Table 1.
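Equation 1 with the Table 1 parameters translates directly into code. A sketch (the function and dictionary names are ours; `v` is the input speed in the units the parameters were tuned for):

```python
import math

# Table 1 parameter values for the two velocity-dependent techniques.
PARAMS = {
    "continuous_touch": dict(cd_max=28.34, cd_min=0.0143, lam=36.71, v_inf=0.039),
    "spatial_movement": dict(cd_max=23.71, cd_min=0.0221, lam=32.83, v_inf=0.051),
}

def cd_gain(v, cd_max, cd_min, lam, v_inf):
    """Generalized logistic CD gain (Equation 1): low gain for slow,
    precise motion, saturating at cd_max for fast motion."""
    return (cd_max - cd_min) / (1.0 + math.exp(-lam * (v - v_inf))) + cd_min
```

Note that at $v = V_{inf}$ the gain is exactly midway between $CD_{Min}$ and $CD_{Max}$, and the function is monotonically increasing in $v$.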
In discrete touch interaction, we implemented up, down, left, and right swipes by obtaining touch position data from the phone. We considered a 700 msec time-window (empirically found) for detecting a long-tap event. Users get vibration feedback from the phone when they perform a long-tap successfully. They also receive vibration haptics while double tapping to start and end the text selection in all interaction techniques. Note that we did not provide haptic feedback for swipe gestures. With each swipe movement, users can see the text getting highlighted in yellow, which acts as the default visual feedback for touch swipes.
Figure 3: Text selection tasks used in the experiment: (1) word (2) sub-word (3) word to a character (4) four words (5) one sentence (6) paragraph to three sentences (7) one paragraph (8) two paragraphs (9) three paragraphs (10) whole text.
In the spatial movement technique, we noticed that the phone moves slightly during the double-tap event, which results in a small unintentional cursor movement. To reduce this, we suspended cursor movement for 300 msec (found empirically) whenever there is a touch event on the phone screen.
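This suspension step can be sketched as a small gate that zeroes spatial cursor deltas arriving within the window. The 300 msec value matches the text; the millisecond-timestamp interface is an illustrative assumption:

```python
class CursorGate:
    """Suppress cursor motion for a window after any touch event, to filter
    out the small phone movement caused by double taps (300 ms in the text)."""
    def __init__(self, suspend_ms=300):
        self.suspend_ms = suspend_ms
        self.last_touch_ms = -10**9  # "long ago" sentinel

    def on_touch(self, now_ms):
        self.last_touch_ms = now_ms

    def filtered(self, now_ms, delta):
        # Drop spatial deltas that arrive inside the suspension window.
        if now_ms - self.last_touch_ms < self.suspend_ms:
            return (0.0, 0.0)
        return delta
```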
In raycasting, we applied the 1€ Filter [11] with $\beta = 80$ and min-cutoff $= 0.6$ (tested empirically) at the ray source to minimize the jitter and latency that usually occur due to both hand tremor and double tapping [55]. We set the ray length to 8 meters by default. The user sees the full length of the ray when it is not hitting the text panel.
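A minimal one-dimensional version of the 1€ Filter (after Casiez et al.) looks like the following. The default parameters mirror the values reported in the text, though the exact implementation used here may differ:

```python
import math

def smoothing_factor(cutoff, dt):
    """Exponential smoothing factor for a given cutoff frequency and timestep."""
    r = 2 * math.pi * cutoff * dt
    return r / (r + 1)

class OneEuroFilter:
    """Minimal 1D 1€ filter; beta and min_cutoff default to the text's values."""
    def __init__(self, beta=80.0, min_cutoff=0.6, d_cutoff=1.0):
        self.beta, self.min_cutoff, self.d_cutoff = beta, min_cutoff, d_cutoff
        self.x_prev = self.dx_prev = None

    def __call__(self, x, dt):
        if self.x_prev is None:          # first sample passes through
            self.x_prev, self.dx_prev = x, 0.0
            return x
        dx = (x - self.x_prev) / dt      # raw derivative
        a_d = smoothing_factor(self.d_cutoff, dt)
        dx_hat = a_d * dx + (1 - a_d) * self.dx_prev
        # Adaptive cutoff: smooth more at low speed, lag less at high speed.
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = smoothing_factor(cutoff, dt)
        x_hat = a * x + (1 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

Each of the ray source's coordinates would be filtered independently at the tracker's update rate.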
§ 4 EXPERIMENT
To assess the impact of the different characteristics of these four interaction techniques, we performed a comparative study with a text selection task while users were standing. In particular, we were interested in evaluating the performance of these techniques in terms of task completion time, accuracy, and perceived workload.
§ 4.1 PARTICIPANTS AND APPARATUS
In our experiment, we recruited 20 unpaid participants (P1-P20; 13 males, 7 females) from a local university campus. Their ages ranged from 23 to 46 years (mean = 27.84, SD = 6.16). Four were left-handed. All were daily users of smartphones and desktops. With respect to their experience with AR/VR technology, 7 participants ranked themselves as experts because they study and work in the field, 4 participants were beginners who had played some games in VR, while the others had no prior experience. All had either normal or corrected-to-normal vision. We used the apparatus and prototype described in Subsection 3.2.
§ 4.2 TASK
In this study, we asked participants to perform a series of text selections using our proposed techniques. Participants were standing for the entire duration of the experiment. We reproduced different realistic usages by varying the type of text selection to perform, such as the selection of a word, a sentence, or a paragraph. Figure 3 shows all the types of text selection that participants were asked to perform. Concretely, the experiment scene in HoloLens consisted of two vertical windows of 102.4 cm × 57.6 cm positioned at a distance of 180 cm from the headset at the start of the application (i.e., a visual size of 31.75° × 18.18°). The windows were anchored in world coordinates. The two panels contain the same text. Participants were asked to select the text in the action panel (left panel in Figure 1(b)) that is highlighted in the instruction panel (right panel in Figure 1(b)). The user controls a cursor (a small red circular dot, as shown in Figure 1(b)) using one of the techniques on the smartphone. Its position is always bounded by the window size. The text content was generated by Random Text Generator⁶ and was displayed using the Liberation Sans font with a font size of 25 pt (to allow comfortable viewing from a few meters).
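The reported visual size follows from the panel dimensions and viewing distance via the standard visual-angle formula:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle subtended by a flat panel viewed head-on."""
    return math.degrees(2 * math.atan((size_cm / 2) / distance_cm))

# A 102.4 cm x 57.6 cm panel at 180 cm subtends about 31.75 deg x 18.18 deg.
```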
³ https://optitrack.com/
⁴ https://github.com/microsoft/MixedRealityToolkit-Unity
⁵ https://docs.unity3d.com/Manual/UNet.html
Figure 4: (a) Mean task completion time for our proposed four interaction techniques. Lower scores are better. (b) Mean error rate of interaction techniques. Lower scores are better. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: p < .01 and *: p < .05).
§ 4.3 STUDY DESIGN
We used a within-subject design with two factors: 4 INTERACTION TECHNIQUE (Continuous touch, Discrete touch, Spatial movement, and Raycasting) × 10 TEXT SELECTION TYPE (shown in Figure 3) × 20 participants = 800 trials. The order of INTERACTION TECHNIQUE was counterbalanced across participants using a Latin square. The order of TEXT SELECTION TYPE was randomized in each block of each INTERACTION TECHNIQUE (but was the same for every participant).
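For an even number of conditions such as the four techniques here, a balanced Latin square can be generated with the standard alternating construction (a sketch; the text does not specify the exact ordering used):

```python
def balanced_latin_square(n):
    """Balanced Latin square orders for an even number n of conditions:
    each condition appears once per position across n participants, and
    each ordered pair of conditions is adjacent equally often."""
    first, left, right = [], 0, n - 1
    for i in range(n):
        # Alternate picking from the low end and the high end.
        if i % 2 == 0:
            first.append(left)
            left += 1
        else:
            first.append(right)
            right -= 1
    # Each subsequent participant's row is a cyclic shift of the first.
    return [[(c + p) % n for c in first] for p in range(n)]
```

With 20 participants and 4 techniques, each of the 4 rows would be used 5 times.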
§ 4.4 PROCEDURE
We welcomed participants upon arrival. They were asked to read and sign the consent form and fill out a pre-study questionnaire to collect demographic information and prior AR/VR experience. Next, we gave them a brief introduction to the experiment background, the hardware, the four interaction techniques, and the task involved in the study. After that, we helped participants wear the HoloLens comfortably and complete the calibration process for their personal interpupillary distance (IPD). For each block of INTERACTION TECHNIQUE, participants completed a practice phase followed by a test session. During the practice, the experimenter explained how the current technique worked, and participants were encouraged to ask questions. They then had time to train with the technique until they were fully satisfied, which took around 7 minutes on average. Once they felt confident with the technique, the experimenter launched the application for the test session. They were instructed to do the task as quickly and accurately as possible while standing. To avoid noise due to participants using either one or two hands, we asked them to use only their dominant hand.
At the beginning of each trial in the test session, the text to select was highlighted in the instruction panel. Once they were satisfied with their selection, participants had to press a dedicated button on the phone screen to move to the next task. They were allowed to use their non-dominant hand only to press this button. At the end of each block of INTERACTION TECHNIQUE, they answered a NASA-TLX questionnaire [23] on an iPad and moved to the next condition.
At the end of the experiment, we gave participants a questionnaire in which they had to rank the techniques by speed, accuracy, and overall preference, and we conducted an informal post-test interview.
The entire experiment took approximately 80 minutes. Participants were allowed to take breaks between sessions, during which they could sit, and were encouraged to comment at any time during the experiment. To respect the COVID-19 safety protocol, participants wore FFP2 masks and maintained a 1-meter distance from the experimenter at all times.
§ 4.5 MEASURES
We recorded completion time as the time taken to select the text from its first character to its last character, i.e., the time difference between the first and second double tap. If participants selected more or fewer characters than expected, the trial was considered wrong. We then calculated the error rate as the percentage of wrong trials for each condition. Finally, as stated above, participants filled out a NASA-TLX questionnaire to measure the subjective workload of each INTERACTION TECHNIQUE, and their preference was measured using a ranking questionnaire at the end of the experiment.
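The two objective measures can be computed from per-trial logs along these lines. The log field names are illustrative, and whether mean times include wrong trials is not specified in the text (here they are excluded):

```python
def summarize(trials):
    """trials: list of dicts with 'start'/'end' (seconds; the two double
    taps) and 'correct' (bool). Field names are illustrative assumptions."""
    # Completion time: difference between the first and second double tap.
    times = [t["end"] - t["start"] for t in trials if t["correct"]]
    # Error rate: percentage of wrong trials in the condition.
    error_rate = 100.0 * sum(not t["correct"] for t in trials) / len(trials)
    mean_time = sum(times) / len(times) if times else float("nan")
    return mean_time, error_rate
```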
§ 4.6 HYPOTHESES
In our experiment, we hypothesized that:
H1. Continuous touch, Spatial movement, and Raycasting will be faster than Discrete touch, because a user needs to spend more time performing multiple swipes and frequent mode switches to select text at the character/word/sentence level.
H2. Discrete touch will be more mentally demanding compared to all other techniques because the user needs to remember the mapping between swipe gestures and text granularity, as well as the long-tap for mode switching.
H3. Users will perceive Spatial movement as more physically demanding, as it involves more forearm movements.

H4. Users will make more errors with Raycasting, and it will be more frustrating, because double tapping for target confirmation while holding the phone in one hand will introduce more jitter [55].
⁶ http://randomtextgenerator.com/
Figure 5: Mean scores for the ranking questionnaire, on a 3-point Likert scale. Higher marks are better. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: p < .01 and *: p < .05).
H5. Overall, Continuous touch will be the most preferred text selection technique, as it works similarly to a trackpad, which is already familiar to users.
§ 5 RESULTS
To test our hypotheses, we conducted a series of analyses using IBM SPSS. Shapiro-Wilk tests showed that the task completion time, total error, and questionnaire data were not normally distributed. Therefore, we used the Friedman test with the interaction technique as the independent variable to analyze our experimental data. When significant effects were found, we reported post hoc tests using the Wilcoxon signed-rank test and applied Bonferroni corrections for all pairwise comparisons. We set α = 0.05 in all significance tests. Due to a logging issue, we had to discard one participant and performed the analysis with 19 instead of 20 participants.
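The omnibus test reduces to ranking each participant's scores across the techniques and computing the Friedman chi-square statistic. The minimal sketch below omits the ties correction that statistical packages such as SPSS apply:

```python
def friedman_statistic(scores):
    """scores: list of k equal-length lists (one per condition), one value
    per participant. Returns the Friedman chi-square statistic
    chi2 = 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1), no ties correction."""
    k, n = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for i in range(n):
        # Rank this participant's k condition scores from 1 (best) to k.
        row = [scores[j][i] for j in range(k)]
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
```

The statistic is then compared against a chi-square distribution with k − 1 degrees of freedom, before running the pairwise Wilcoxon post hocs.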
§ 5.1 TASK COMPLETION TIME
There was a statistically significant difference in task completion time depending on which interaction technique was used for text selection [χ²(3) = 33.37, p < .001] (see Figure 4(a)). Post hoc tests showed that Continuous touch [M = 5.16, SD = 0.84], Spatial movement [M = 5.73, SD = 1.38], and Raycasting [M = 5.43, SD = 1.66] were faster than Discrete touch [M = 8.78, SD = 2.09].
§ 5.2 ERROR RATE
We found significant effects of the interaction technique on error rate [χ²(3) = 39.45, p < .001] (see Figure 4(b)). Post hoc tests showed that Raycasting [M = 24.21, SD = 13.46] was more error-prone than Continuous touch [M = 1.05, SD = 3.15], Discrete touch [M = 4.73, SD = 9.05], and Spatial movement [M = 8.42, SD = 12.58].
§ 5.3 QUESTIONNAIRES
For NASA-TLX, we found significant differences for mental demand [χ²(3) = 9.65, p = .022], physical demand [χ²(3) = 29.75, p < .001], performance [χ²(3) = 40.14, p < .001], frustration [χ²(3) = 39.53, p < .001], and effort [χ²(3) = 32.69, p < .001]. Post hoc tests showed that Raycasting and Discrete touch had significantly higher mental demand than Continuous touch and Spatial movement. Physical demand was lowest for Continuous touch, whereas users rated Raycasting and Spatial movement as significantly more physically demanding. In terms of performance, Raycasting was rated significantly lower than the other techniques. Raycasting was also rated significantly more frustrating. Moreover, Continuous touch was the least frustrating and was rated better in performance than Spatial movement. Figure 6 shows a bar chart of the NASA-TLX workload sub-scales for our experiment.
For the ranking questionnaires, there were significant differences for speed [χ²(3) = 26.40, p < .001], accuracy [χ²(3) = 45.5, p < .001], and preference [χ²(3) = 38.56, p < .001]. Post hoc tests showed that users ranked Discrete touch as the slowest and Raycasting as the least accurate technique. The most preferred technique was Continuous touch, whereas Raycasting was the least preferred. Users also favored Discrete touch as well as the Spatial movement-based text selection approach. Figure 5 summarizes participants' responses to the ranking questionnaires.
§ 6 DISCUSSION & DESIGN IMPLICATIONS
Our results suggest that Continuous Touch was the technique preferred by participants (confirming H5). It was the least physically demanding technique and the least frustrating one. It was also more satisfying in terms of performance than the two spatial techniques (Raycasting and Spatial Movement). Finally, it was less mentally demanding than Discrete Touch and Raycasting. Participants pointed out that this technique was simple, intuitive, and familiar to them, as they use trackpads and touchscreens every day. During the training session, we noticed that they took the least time to understand its working principle. In the interview, P8 commented, "I can select text fast and accurately. Although I noticed a bit of overshooting in the cursor positioning, it can be adjusted by tuning CD gain". P17 said, "I can keep my hands down while selecting text. This gives me more comfort".
Figure 6: Mean scores for the NASA-TLX task load questionnaire, on a scale of 1 to 10. Lower marks are better, except for performance. Error bars show 95% confidence intervals. Statistical significances are marked with stars (**: p < .01 and *: p < .05).
On the other hand, Raycasting was the least preferred technique and led to the lowest task accuracy (participants were also the least satisfied with their performance using this technique). This can be explained by the fact that it was the most physically demanding and the most frustrating (confirming H4). Finally, it was more mentally demanding than Continuous Touch and Spatial Movement. In their comments, participants reported a lack of stability due to the one-handed phone-holding posture. Some participants complained that they felt uncomfortable holding the OnePlus 5 phone in one hand, as it was a bit big compared to their hand size. This introduced even more jitter for them in Raycasting while double tapping for target confirmation. P10 commented, "I am sure I will perform Raycasting with fewer errors if I can use my both hands to hold the phone". Moreover, from the logged data, we noticed that they made more mistakes when the target character was positioned inside a word rather than at the beginning or at the end, which was confirmed in the discussion with participants.
As we expected, Discrete Touch was the slowest technique (confirming H1), but it was not the most mentally demanding, as it was only more demanding than Continuous Touch (rejecting H2). It was also more physically demanding than Continuous Touch, but less so than Spatial Movement and Raycasting. Several participants mentioned that it is excellent for short word-to-word or sentence-to-sentence selection, but not for long text, as multiple swipes are required. They also pointed out that performing mode switching with a 700 msec long-tap was a bit tricky, and they lost some time there during text selection. Although they got better at it over time, they remained uncertain about performing it successfully in one attempt.
Finally, contrary to our expectation, Spatial Movement was not the most physically demanding technique, as it was less demanding than Raycasting (but more than Continuous Touch and Discrete Touch). It was also less mentally demanding than Raycasting and led to less frustration. However, it led to more frustration than Continuous Touch, and participants were less satisfied with their performance with this technique than with Continuous Touch. According to participants, moving the forearm with this technique undoubtedly requires physical effort, but they only needed to move it a very short distance, which was fine for them. From the user interviews, we learned that they did not use much clutching (less than with Continuous Touch). P13 mentioned, "In Spatial Movement, I completed most of the tasks without using clutching at all".
Overall, our results suggest that between touch and spatial interactions, it is better to use touch for text selection, which confirms findings from Siddhpuria et al. for pointing tasks [49]. Continuous Touch was overall preferred, faster, and less demanding than Discrete Touch, which goes against results from the work by Jain et al. for shape selection [29]. This difference can be explained by the fact that text selection involves a minimum of two levels of discretization (characters and words), which makes it mentally demanding. It can also be explained by the high number of words (and even more characters) in a text, contrary to the number of shapes in Jain et al.'s experiment. This led to a high number of discrete actions for the selection and, thus, a higher physical demand. Surprisingly, however, most of the participants appreciated the idea of Discrete Touch. If a tactile interface is not available on the handheld device, our results suggest using a spatial interaction technique with a relative mapping, as we did with Spatial Movement. We could not find any differences in time, contrary to the work by Campbell et al. [9], but a relative mapping leads to fewer errors, which confirms what was found by Vogel and Balakrishnan [52]. It is also less physically and mentally demanding and leads to less frustration than an absolute mapping. On the technical side, a spatial interaction technique with a relative mapping can easily be achieved without an external sensor (as done, for example, by Siddhpuria et al. [49]).
§ 7 LIMITATIONS
There were two major limitations. First, we used an external tracking system, which limited us to a lab study. As a result, it is difficult to understand the social acceptability of each technique until we consider real-world, on-the-go situations. However, technical progress in inside-out tracking⁷ means that it will soon be possible to have smartphones that can track themselves accurately in 3D space. Second, some of our participants had difficulties holding the phone in one hand because the phone was a bit big for their hands. They mentioned that although they tried to move their thumb faster in the continuous touch and discrete touch interactions, they could not do so comfortably for fear of dropping the phone. The larger phone size also influenced their raycasting performance, particularly when they needed to double tap for target confirmation. Hence, using one phone size for all participants was an important constraint in this experiment.
⁷ https://developers.google.com/ar
§ 8 CONCLUSION AND FUTURE WORK
In this research, we investigated the use of a smartphone as an eyes-free interactive controller to select text in an augmented reality head-mounted display. We proposed four interaction techniques: two that use the tactile surface of the smartphone (Continuous Touch and Discrete Touch) and two that track the device in space (Spatial Movement and Raycasting). We evaluated these four techniques in a text selection task study. The results suggest that techniques using the tactile surface of the device are better suited for text selection than spatial ones, with Continuous Touch being the most efficient. If a tactile surface is not available, it is better to use a spatial technique (i.e., with the device tracked in space) that uses a relative mapping between the user's gesture and the virtual screen, rather than a classic Raycasting technique that uses an absolute mapping.
In this work, we focused on interaction techniques based on smartphone inputs. This allowed us to better understand which approach should be favored in that context. In the future, it would be interesting to explore a more global usage scenario, such as a text editing interface in an AR-HMD using smartphone-based input, where users need to perform other interaction tasks such as text input and command execution simultaneously. Another direction for future work is to compare phone-based techniques to other input techniques, such as hand tracking, head/eye gaze, and voice commands. Furthermore, we only considered the standing condition; it would be interesting to study text selection performance while the user is walking.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/HleC7rJGEkE/Initial_manuscript_md/Initial_manuscript.md
# Touching 4D Objects with 3D Tactile Feedback
Category: Research
## Abstract
This paper introduces a novel interactive system for presenting 4D objects in 4D space through tactile sensation. A user is able to experience 4D space by actively touching 4D objects with the hands, where the hand holding the controller is physically stimulated at a three-dimensional hypersurface of the 4D object. When a hand is placed on the 3D projection of a 4D object, the force generated at the interface between the hand and the object is calculated by referring to the distance from the viewpoint to each point on the frontal surface of the object. The calculated force on the hand is converted into vibration patterns displayed by the tactile glove. The system supplements 4D information, such as tilt or unevenness, that is difficult to recognize visually.
Keywords: 4D space, 4D visualization, 4D interaction, tactile visualization
Index Terms: Human-centered computing-Visualization-Visualization application domains-Scientific visualization
## 1 INTRODUCTION
With the development of computer graphics technology, various methods for displaying 4D objects have been proposed. Furthermore, VR technology has allowed users to observe 4D objects with a higher degree of immersion. Some researchers have produced new methods of visualization and interaction [1], [5], [9], which are expected to improve the understanding and cognition of 4D space.
While most of these approaches present 4D space using only visual information, humans also use auditory and tactile perception to recognize 3D representations. However, there are few studies focusing on these additional sensations for 4D spatial representations. In order to present richer four-dimensional information to users, auditory and tactile information is necessary as well as visual information. Therefore, we developed a novel system for displaying 4D objects using tactile sensations.
Needless to say, humans cannot touch objects defined in 4D space, since their bodies are situated in 3D space. Tactile sensations are information mapped onto their skin, which is arranged as a 2D surface. We can therefore infer that beings situated in 4D space, if they existed, would touch 4D objects with their 3D skin, and that their tactile sensation would be information mapped onto a 3D hypersurface. Moreover, in general, humans' tactile perception appears to be mapped into their actual 3D space in combination with the body arrangement. Our idea is to create the experience of touching 4D objects by delivering 3D-mapped tactile-like information about 4D objects through the skin.
In this paper, we propose a novel 4D interaction system with tactile representation. We focus on the ability of tactile sensation to represent the unevenness of an object by displaying pressure sensations. We convert the tactile information into vibration patterns for ease of handling. To present the touch information, we developed a tactile glove fitted with vibration actuators.
The introduced system works as follows. First, a 4D object is projected onto a 3D screen in the VR environment. Second, a hand wearing the tactile glove enters the screen. Third, the system calculates the necessary tactile information. Finally, the hand receives tactile sensations such that each part of the hand corresponds to a part of the 3D hypersurface of the 4D hand. A user is able to perceive information about the projected 3D hyperplane, such as slope and unevenness, as if touching the 4D objects, in the sense that he/she receives 4D tactile-like information through the tactile sensation.

Figure 1: Overall view of the system.
## 2 RELATED WORKS
Several approaches have been introduced for presenting 4D spatial information through tactile or haptic sensations. Hashimoto et al. [2] developed a method that visualized multidimensional data with haptic representation. They used a haptic device capable of 6-DoF input and force feedback to control a pointer floating in the virtual environment. When the pointer overlaps the displayed data, the value corresponding to that location is expressed as torque. The system also allows a user to explore a four- or higher-dimensional environment by operating the 3D slice with a twisting input. Zhang et al. [7], [9] proposed a method using a haptic device for exploring and manipulating knotted spheres and cloth-like objects in 4D space. In the exploration of the objects, constraining the movement of the device to the projected objects improves the understanding of complex structures. In the manipulation of the objects, haptic feedback is presented by rendering the reaction of the pulling force.
In the study of tactile presentation, Martinez et al. [3] proposed a haptic display by introducing a vibrotactile glove that enables users to perceive the shape of virtual 3D objects. Ohka et al. [6] developed a multimodal display, which is capable of stimulating muscles and tendons of the forearms and tactile receptors in fingers for presenting the tactile-haptic reality.
These approaches, however, are not intended to simulate 4D tactile stimulation.
## 3 SYSTEM CONFIGURATION
As shown in Figure 1, we developed a 4D visualization system consisting of a personal computer (HP, Intel Core i7-8700 3.20 GHz, 8 GB RAM, NVIDIA GeForce GTX 1060), a 6-DoF head-mounted display (HMD) with a motion controller (HTC VIVE), a single-board computer (Raspberry Pi Zero), and a tactile glove. A user wears the HMD and holds the motion controller in the hand wearing the tactile glove. The user observes the 3D-projected 4D space in the virtual environment through the HMD. The position of the hand is recognized via the motion controller. When the user touches a 4D object, the tactile glove presents the corresponding tactile sensation to the user's hand. The software is implemented with Unity 2018.3.3f1 and SteamVR Unity Plugin v2.2.0.
Figure 2 shows the tactile glove, equipped with a total of 30 vibration motors. Five motors are arranged on the palm side of the hand, five on the back side, and two on the front and back of each finger, as shown in Figure 3. The motors are driven by electric current, and the strength of the vibration stimuli is controlled by the single-board computer with pulse-width modulation.
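The per-motor drive can be sketched as mapping each computed contact force to a PWM duty cycle; the linear mapping and the clamping range below are illustrative assumptions, not taken from the paper:

```python
def force_to_duty(force, max_force=1.0):
    """Clamp and linearly map a contact force to a PWM duty cycle in [0, 100].
    The linear force-to-duty relationship is an illustrative assumption."""
    return max(0.0, min(100.0, 100.0 * force / max_force))
```

On the Raspberry Pi, the resulting duty cycle would be written to each motor's channel (e.g. with RPi.GPIO's `PWM.ChangeDutyCycle`), one channel per motor.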

Figure 3: Arrangement of vibration motors in tactile glove.

Figure 4: Structure of the 4D polytope compared to the 3D polytope.
## 4 4D VISUALIZATION SYSTEM
In this section, we describe the 4D visualization system, which projects 4D objects into the 3D virtual environment. The system is based on an algorithm introduced in McIntosh's 4D Blocks [4].
### 4.1 Definition of 4D objects
In the system, we define 4D objects as 4D convex polytopes. Non-convex polytopes are constructed by combining convex polytopes. As shown in Figure 4, boundaries are composed of 3D objects called cells, and their intersections are divided into three different features: vertices, edges, and faces, depending on the number of dimensions.
### 4.2 Projection
Figure 5 shows the 4D projection model. 4D objects arranged in 4D space are projected onto a 3D screen. Positions of 4D objects are described by 4D vectors in the 4D coordinate system, and their orientations are described by $4 \times 4$ orthogonal matrices. The center of the 3D screen is located at the origin of the 4D space. The 3D screen has a dimension of $2 \times 2 \times 2$, spreading in the $x_w y_w z_w$ hyperplane. Data defined in the 4D world-coordinate system $x_w y_w z_w w_w$ are converted to data in the 3D screen-coordinate system $x_s y_s z_s$ by removing the $w$-coordinate component. 4D objects are orthogonally projected onto the 3D screen by this transformation. We selected the orthographic projection method for its suitability for tactile presentation. The system also supports perspective projection, which can optionally be selected by the user.
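Under these definitions, the orthographic projection amounts to transforming points into 4D world coordinates and dropping the $w$ component; a numpy sketch (argument names are illustrative):

```python
import numpy as np

def project_to_screen(points, orientation, position):
    """Orthographically project 4D object-space points (N x 4) onto the
    3D screen at the origin: rotate by the object's 4x4 orthogonal
    orientation matrix, translate by its 4D position, then drop w."""
    world = points @ orientation.T + position  # object -> 4D world coords
    return world[:, :3]                        # remove w: world -> screen
```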
Figure 5: 3D and 4D projection model.
Figure 6: Examples of drawings with similar drawings in 3D.
### 4.3 Drawing
In the system, face polygons and edge lines are used for drawing. Polygons and lines do not have normals in 4D space, and are drawn as contours of cells. To make it easier to distinguish the cells displayed on the 3D screen, the system draws the centers of cells with semi-transparent, color-coded polygons, in addition to edges and faces. Figure 6 shows examples of drawings.
For accurate drawing, back-cell culling and hidden-hypersurface removal are applied. Occluded cells are determined by taking the dot product of the view direction and each cell normal, and only faces that belong to visible cells are selected for drawing.
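The culling test itself is a one-liner; a sketch, assuming the orthographic view direction is the positive $w$ axis (the paper does not state the sign convention):

```python
import numpy as np

# Back-cell culling: a cell is front-facing when its 4D normal points
# toward the viewer, i.e. its dot product with the view direction is
# negative (view direction +w is an assumption here).
def cell_visible(cell_normal, view_dir=(0.0, 0.0, 0.0, 1.0)):
    return float(np.dot(view_dir, cell_normal)) < 0.0
```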
As the system does not adopt voxel rendering [1], we cannot use depth buffering for hidden-hypersurface removal. Instead, we use clip units, each of which represents an area where an object obscures other objects behind it. Figure 7 shows an example scene where clip units work. As shown in Figure 8, clip units are identified by the contact subsurfaces of front-facing and back-facing surfaces with respect to the viewer. Each clip unit is a halfspace whose boundary includes the subsurface, and the boundary is orthogonal to the screen. Intersections of clip units are used for clipping: polygons and lines to be drawn are clipped by the clip units of objects closer to the viewer.
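Since each clip unit is a halfspace, the underlying primitive is clipping a drawing primitive against a halfspace; a dimension-independent sketch of that primitive (the intersection of several clip units would chain calls like this, and for hidden-surface removal the part *inside* an occluder's region is the part discarded):

```python
# Clip a line segment p0-p1 against the halfspace {x : dot(n, x) <= b}.
# Returns the kept portion, or None if the segment lies entirely outside.
def clip_segment(p0, p1, n, b):
    d0 = sum(ni * xi for ni, xi in zip(n, p0)) - b   # signed offset of p0
    d1 = sum(ni * xi for ni, xi in zip(n, p1)) - b   # signed offset of p1
    if d0 <= 0 and d1 <= 0:
        return p0, p1                                # fully inside
    if d0 > 0 and d1 > 0:
        return None                                  # fully outside
    t = d0 / (d0 - d1)                               # boundary crossing parameter
    q = tuple(x0 + t * (x1 - x0) for x0, x1 in zip(p0, p1))
    return (p0, q) if d0 <= 0 else (q, p1)
```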
Figure 7: An example scene where clip units work. We don't draw the intersection of rectangle $B$ and the area hidden by $A$ .
Figure 8: Calculating a clip unit in a 3D scene. Note that clip units are dimension-independent concepts. In $n\mathrm{D}$ ,"surfaces" mean $\left( {n - 1}\right) \mathrm{D}$ components, and "subsurfaces" mean $\left( {n - 2}\right) \mathrm{D}$ components.
### 4.4 Control
4D orientations have six degrees of freedom [8]. A user is able to rotate a 4D object by moving the motion controller while pressing the trigger. To naturally map the controller's 6-DoF operation in 3D space to the object's 6-DoF rotation in 4D space, the 3-DoF translational operation (Figure 9(a)) is mapped to the rotations involving the $w$-axis, and the 3-DoF rotational operation (Figure 9(b)) is mapped to the rotations not involving the $w$-axis.
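The mapping can be sketched by composing rotations in the six coordinate planes of 4D space; a minimal sketch (the exact plane-to-axis pairing and the gain are our assumptions, since the paper does not specify them):

```python
import numpy as np

X, Y, Z, W = 0, 1, 2, 3

def plane_rotation(i, j, theta):
    """4x4 rotation by theta in the plane spanned by axes i and j."""
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[j, j] = c, c
    R[i, j], R[j, i] = -s, s
    return R

def controller_to_rotation(translation, rotation, gain=1.0):
    """Map 3-DoF translation to the three w-involving planes (xw, yw, zw)
    and 3-DoF rotation to the three remaining planes (yz, zx, xy)."""
    dx, dy, dz = translation
    rx, ry, rz = rotation
    R = np.eye(4)
    for (i, j), amount in [((X, W), dx), ((Y, W), dy), ((Z, W), dz),
                           ((Y, Z), rx), ((Z, X), ry), ((X, Y), rz)]:
        R = plane_rotation(i, j, gain * amount) @ R
    return R
```

The result is always a $4 \times 4$ orthogonal matrix, consistent with how orientations are represented in Section 4.2.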
## 5 TACTILE REPRESENTATION
In this section, we introduce a tactile representation method. To explain the concept, we first describe how to express the sensation of touching a 2D-projected 3D object, and then extend it to the 3D projection of a 4D object.
### 5.1 Analogy of touching 3D objects in 2D space
When humans touch a screen where a 3D object is displayed, they can feel the object as if it were actually there. For example, suppose that we move the right palm straight towards a wall in front of us. Figure 10 depicts three different situations. When the wall directly faces the viewer, a uniform pressure is applied to the palm (Figure 10(a)). Alternatively, when the wall faces a little to the right, the thumb is stimulated first. If the hand keeps pressing against the wall, the wrist bends and the entire palm touches the wall, but the thumb side receives a stronger force than the little-finger side (Figure 10(b)). When the hand touches a corner of the wall, a stronger sensation is perceived at the center of the hand (Figure 10(c)). These differences can be distinguished by the pressure applied to the palm.
The calculation of tactile stimulus generation proceeds as follows. We consider a 3D space where a 3D object is situated, and the space is displayed on a 2D screen. When a user wearing a tactile glove touches the screen, the touched area of the hand is projected onto the surface of the object. Then the distance between the projected hand and the screen is calculated. Figure 11 shows the three different situations of a wall, corresponding to the cases presented in Figure 10. If the distance on the left is shorter than that on the right, the wall faces to the right. In this way, the strength of pressure is calculated from the relative relationship of the distances. Figure 12 shows the results of the calculation.
Figure 9: Rotating 4D objects by the motion controller operation.
Let $\Omega$ be the set of all actuators installed in the tactile glove. First, the actuators are projected orthogonally through the screen, and the subset $L \subset \Omega$ of actuators projected onto the object is detected. For each projected actuator point $i \in L$, the distance ${l}_{i}$ from the screen is calculated. Then the normalized relative strength ${s}_{i}$ is calculated:
$$
{s}_{i} = \max \left\{ {\alpha \left( {{l}_{min} - {l}_{i}}\right) + 1,0}\right\} , \tag{1}
$$
where ${l}_{\min } = \mathop{\min }\limits_{{k \in L}}{l}_{k}$ and $\alpha$ is a gradient constant. The formula is derived as shown in Figure 13.
In general, corners give stronger pressure than a flat wall, and apexes give stronger pressure than corners. In the same way, an uneven wall gives stronger pressure than a flat wall. Taking this into account, the strength of the stimulus ${h}_{i}$ is adaptively adjusted according to the degree of sharpness and inclination:
$$
{h}_{i} = \left( {1 - \beta \frac{{\sum }_{k \in L}{s}_{k}}{N}}\right) {s}_{i}, \tag{2}
$$
where $\beta$ is a reducing constant $\left( {0 \leq \beta \leq 1}\right)$ and $N = \# \left\{ {i \in L \mid {s}_{i} > 0}\right\}$ is the number of active actuators. Figure 14 shows examples of the application of the formula.
Finally, the strength ${h}_{i}$ is converted to a voltage value ${V}_{i}$ :
$$
{V}_{i} = \gamma {h}_{i}, \tag{3}
$$
where $\gamma$ is a constant.
Figure 10: Various situations when touching a wall.
Figure 11: Calculation of tactile stimulus generation.
Figure 12: Calculation results.
### 5.2 Applying to 4D system
The idea and the calculation method presented in the previous section can be extended to the 3D projection of 4D objects. In the 3D system, a user receives 2D-mapped stimulus patterns through the palm of the hand (Figure 15(a)). In the 4D system, the user receives 3D-mapped data through all the skin of the hand holding the controller (Figure 15(b)). Touching the 2D screen in the 3D system corresponds to putting a hand into the 3D screen in the 4D system. When the hand enters the screen, the stimulus is calculated in the same way as in the 2D case.
Figure 16 shows the model of tactile calculation. The VR environment of the system consists of the 3D screen and a rendered user's hand. The 4D object is kept at a distance from the screen so that the hand is prevented from colliding with the object. When the hand enters the defined 4D space, the relative position of each actuator from the screen ${d}_{i} = \left( {{x}_{{d}_{i}},{y}_{{d}_{i}},{z}_{{d}_{i}}}\right)$ is detected in the 3D screen coordinate system ${x}_{s}{y}_{s}{z}_{s}$ , and the position is converted to the 4D world coordinate system ${x}_{w}{y}_{w}{z}_{w}{w}_{w}$ as
$$
{c}_{i} = \left( {{x}_{{c}_{i}},{y}_{{c}_{i}},{z}_{{c}_{i}},{w}_{{c}_{i}}}\right) = \left( {{x}_{{d}_{i}},{y}_{{d}_{i}},{z}_{{d}_{i}},0}\right) . \tag{4}
$$
Then the line ${\mathbf{L}}_{i}$ extending from ${c}_{i}$ towards the positive direction of the ${w}_{w}$ axis is defined:
$$
{\mathbf{L}}_{i} = {c}_{i} + \left( {0,0,0, t}\right), \quad t \geq 0. \tag{5}
$$
By clipping ${\mathbf{L}}_{i}$ with the $4\mathrm{D}$ object, the intersection point
$$
{q}_{i} = \left( {{x}_{{q}_{i}},{y}_{{q}_{i}},{z}_{{q}_{i}},{w}_{{q}_{i}}}\right) = \left( {{x}_{{c}_{i}},{y}_{{c}_{i}},{z}_{{c}_{i}},{t}^{\prime }}\right) \tag{6}
$$
of ${\mathbf{L}}_{i}$ and the object is detected. If ${\mathbf{L}}_{i}$ is not clipped by the object, the corresponding location of the user's hand is not projected onto the object.
Figure 14: Visualization of equation 2 ($\beta = 0.5$).
Here, ${l}_{i}$ is calculated as the distance between ${c}_{i}$ and ${q}_{i}$ :
$$
{l}_{i} = \sqrt{{\left( {x}_{{c}_{i}} - {x}_{{q}_{i}}\right) }^{2} + {\left( {y}_{{c}_{i}} - {y}_{{q}_{i}}\right) }^{2} + {\left( {z}_{{c}_{i}} - {z}_{{q}_{i}}\right) }^{2} + {\left( {w}_{{c}_{i}} - {w}_{{q}_{i}}\right) }^{2}}. \tag{7}
$$
${l}_{i}$ is calculated for all locations and converted into the strength of the stimulus using equations (1), (2), and (3).
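Since ${c}_{i}$ and ${q}_{i}$ share their $x$, $y$, $z$ components, equation (7) reduces to $l_i = t'$, and clipping ${\mathbf{L}}_{i}$ against a convex object is a 1D interval intersection. A sketch, assuming the convex polytope is given as halfspaces $\mathbf{a}\cdot\mathbf{x} \leq b$ (a standard representation; the paper does not state which one it uses):

```python
import math

# Cast L_i = c_i + (0, 0, 0, t), t >= 0, through a convex 4D object given
# as halfspaces (a, b) with dot(a, x) <= b, and return l_i = t' at the
# first hit, or None if the ray misses the object.
def hypersurface_distance(c, halfspaces):
    t_enter, t_exit = 0.0, math.inf
    for a, b in halfspaces:
        num = b - sum(ai * xi for ai, xi in zip(a, c))
        aw = a[3]                          # the ray only moves along +w
        if abs(aw) < 1e-12:
            if num < 0:
                return None                # parallel to the boundary and outside
        elif aw > 0:
            t_exit = min(t_exit, num / aw)
        else:
            t_enter = max(t_enter, num / aw)
    if t_enter > t_exit:
        return None                        # L_i is not clipped by the object
    return t_enter                         # equals |w_q - w_c|, i.e. eq. (7)
```

For a hypercube occupying $w \in [1, 3]$ in front of the screen, an actuator at the origin gets $l_i = 1$; an actuator whose $xyz$ position falls outside the object's silhouette gets no hit.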
## 6 RESULTS OF TACTILE DISPLAY
The system is able to display multiple objects from any viewpoint; in this section, however, we deal with a single object situated at the center of the screen as a simple example.
We implemented a function to display the strength of the stimulus calculated at each location for visually validating the operation. For precise rendering of tactile stimuli, the calculation is conducted at 242 locations arranged in grids on the user's hand. As shown in Figure 17 (a), (b), and (c), the locations and the amplitude of each tactile stimulus are superimposed on the left side of the 3D screen. The stimuli displayed in a cube show the area to be mapped onto a virtual hand, so that the user is able to intuitively recognize the tactile sensation presented to the hand. Based on the 242 calculated values in the cube, the stimuli at the 30 points corresponding to the motor locations, colored in cyan, are simultaneously sent to the motors for presenting tactile sensation.
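One plausible reading of the 242-to-30 reduction is that each motor takes the value of its nearest grid location; a hedged sketch (the positions below are made up for illustration, and the paper does not specify the sampling rule):

```python
# Downsample the dense grid of stimulus values to the motor positions by
# nearest-neighbor lookup (an assumed scheme, not confirmed by the paper).
def downsample(grid_positions, grid_values, motor_positions):
    def nearest(p):
        return min(range(len(grid_positions)),
                   key=lambda k: sum((a - b) ** 2
                                     for a, b in zip(grid_positions[k], p)))
    return [grid_values[nearest(m)] for m in motor_positions]
```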
As shown in Figure 17, when a user touches a hypercube with its cell facing the front, a uniform stimulus is generated. The magnitude of this stimulus is the same even when only a part of the hand touches the object. Note that in all the following figures, the hand is in the same orientation.
Figure 15: Tactile system model.
Figure 16: Tactile calculation model.
If the cell is tilted slightly to the left, the stimulus becomes greater on the right and smaller on the left (Figure 18(a)). The gradient increases as the slope increases, and eventually the leftmost stimulus disappears (Figure 18(b)).
By manipulating the hypercube, the user is able to touch and recognize the shape intuitively. When a face is facing the user, a wall-shaped tactile sensation is presented, as shown in Figure 19(a). If an edge is situated in the front, the user feels a line-shaped sensation, as presented in Figure 19(b). A vertex presents an isolated strong stimulus, and the sensation gradually weakens in the peripheral area, as shown in Figure 19(c).
The user is also able to recognize differences between objects that look the same. The three different objects depicted in Figure 20 (b), (c), and (d) appear identical from certain directions (Figure 20(a)), yet they can be distinguished by touch. Figure 21(a) presents a 4D cone, where the centered area gives strong stimuli that gradually decrease in the peripheral area. When touching a flat surface, the stimuli appear uniformly, as shown in Figure 21(b). For the hollow shape, the stimuli in the central area are weaker than in the surrounding area, as presented in Figure 21(c).
The tactile system was experienced and evaluated by one subject. Tactile stimuli rendered by vibration patterns were correctly presented as designed, and the subject could distinguish the above differences as well. He had some difficulty recognizing the exact boundaries of faces from the different stimuli; however, this could be compensated for by moving the hand appropriately.
Figure 17: Putting a hand inside the cube representing the surface of a hypercube.
Figure 18: Touching the hypercube tilted slightly to the left.
## 7 CONCLUSION AND FUTURE WORK
In this paper, we proposed a system to display 4D shapes by rendering 4D tactile sensation. 4D cognition can be supplemented by combining visual information with tactile information. In future work, the system should be verified objectively and quantitatively with a larger number of subjects.
Possible improvements are considered. Since this system uses simple vibration motors for tactile rendering, the resolution of strength and position is not sufficient for subjects to recognize more detailed shapes. The choice of higher-quality actuators will solve this problem. Moreover, finely controlled stimuli may be able to express much richer information, such as friction and directional force.
The 4D space can be enriched by introducing four-dimensional physics. In the actual 3D world, an object moves upon collision according to the laws of physics. By incorporating 4D physics, the tactile experience will become much more realistic.
Figure 19: Touching face, edge and vertex of the hypercube.
## REFERENCES
[1] A. Chu, C. Fu, A. J. Hanson, and P. Heng. GL4D: A GPU-based architecture for interactive 4D visualization. IEEE Transactions on Visualization and Computer Graphics, 15:1587-1594, Oct. 2009. doi: 10.1109/TVCG.2009.147
[2] W. Hashimoto and H. Iwata. Multi-dimensional data browser with haptic sensation. Transactions of the Virtual Reality Society of Japan, 2(3):9-16, Sept. 1997. doi: 10.18974/tvrsjp.2.3_9
[3] J. Martínez, A. García, M. Oliver, J. P. Molina, and P. González. Identifying virtual 3D geometric shapes with a vibrotactile glove. IEEE Computer Graphics and Applications, 36(1):42-51, 2016. doi: 10.1109/MCG.2014.81
[4] J. McIntosh. The four dimensional blocks, 2014. Retrieved June 2020.
[5] T. Miwa, Y. Sakai, and S. Hashimoto. Learning 4-d spatial representations through perceptual experience with hypercubes. IEEE Transactions on Cognitive and Developmental Systems, 10(2):250-266, June 2018. doi: 10.1109/TCDS.2017.2710420
[6] M. Ohka, K. Kato, T. Fujiwara, and Y. Mitsuya. Virtual object handling using a tactile-haptic display system. In IEEE International Conference on Mechatronics and Automation, vol. 1, pp. 292-297, 2005. doi: 10.1109/ICMA.2005.1626562
[7] J. Weng and H. Zhang. Perceptualization of geometry using intelligent haptic and visual sensing. vol. 8654, Feb. 2013. doi: 10.1117/12.2002536
[8] X. Yan, C. Fu, and A. J. Hanson. Multitouching the fourth dimension. Computer, 45(9):80-88, 2012. doi: 10.1109/MC.2012.77
[9] H. Zhang and A. J. Hanson. Shadow-driven 4d haptic visualization. IEEE Transactions on Visualization and Computer Graphics, 13(6):1688-1695, Nov. 2007. doi: 10.1109/TVCG.2007.70593
Figure 20: 4D objects and their 3D analogues.
Figure 21: Touching 4D objects that look the same when seen from the front.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/HleC7rJGEkE/Initial_manuscript_tex/Initial_manuscript.tex
§ TOUCHING 4D OBJECTS WITH 3D TACTILE FEEDBACK
Category: Research
§ ABSTRACT
This paper introduces a novel interactive system for presenting 4D objects in 4D space through tactile sensation. A user is able to experience 4D space by actively touching 4D objects with the hands: the hand holding the controller is physically stimulated according to the three-dimensional hypersurface of the 4D object. When a hand is placed on the 3D projection of a 4D object, the force generated at the interface between the hand and the object is calculated by referring to the distance from the viewpoint to each point on the frontal surface of the object. The calculated force on the hand is converted into vibration patterns displayed by the tactile glove. The system supplements 4D information, such as tilt or unevenness, that is difficult to recognize visually.
Keywords: 4D space, 4D visualization, 4D interaction, tactile visualization
Index Terms: Human-centered computing-Visualization-Visualization application domains-Scientific visualization
§ 1 INTRODUCTION
With the development of computer graphics technology, various methods for displaying 4D objects have been proposed. Furthermore, VR technology has allowed users to observe 4D objects with a higher degree of immersion. Some researchers have produced new methods of visualization and interaction [1], [5], [9], which are expected to improve the understanding and cognition of 4D space.
While most of these approaches use only visual information for presenting 4D space, humans also use auditory and tactile perception to recognize 3D representations. However, there are few studies focusing on these additional sensations for 4D spatial representations. In order to present richer four-dimensional information to users, auditory and tactile information is necessary as well as visual information. Therefore, we develop a novel system for displaying 4D objects using tactile sensations.
Needless to say, humans cannot touch objects defined in 4D space, since their bodies are situated in 3D space. Tactile sensations are information mapped onto their skin, which is arranged as a 2D surface. We can therefore infer that beings situated in 4D space, if they existed, would touch 4D objects with their 3D skin, and their tactile sensation would be information mapped onto a 3D hypersurface. Moreover, in general, humans' tactile perception appears to be mapped into their actual 3D space in combination with the body arrangement. Our idea is to provide the experience of touching 4D objects by delivering 3D-mapped, tactile-like information about 4D objects through the skin.
In this paper, we propose a novel 4D interaction system with tactile representation. We focus on the aspect of tactile sensation that can represent the unevenness of an object by displaying a pressure sensation. We convert the tactile information into vibration patterns for ease of handling. For presenting the touch information, a tactile glove equipped with vibration actuators is developed.
The introduced system works as follows. First, a 4D object is projected onto a 3D screen in the VR environment. Second, a hand wearing the tactile glove enters the screen. Third, the system calculates the necessary tactile information. Finally, the hand receives tactile sensations such that each part of the hand corresponds to a part of the 3D hypersurface of the 4D hand. A user is able to perceive information about the projected 3D hyperplane, such as slope and unevenness, as if he/she were touching the 4D objects, in the sense that he/she receives 4D tactile-like information through the tactile sensation.
Figure 1: Overall view of the system.
§ 2 RELATED WORKS
Several approaches have been introduced for presenting 4D spatial information through tactile or haptic sensations. Hashimoto et al. [2] developed a method that visualizes multidimensional data with haptic representation. They use a haptic device capable of 6-DoF input and force feedback to control a pointer floating in the virtual environment. When the pointer overlaps the displayed data, the value corresponding to the location is expressed as torque. The system also allows a user to explore a four- or higher-dimensional environment by operating the 3D slice with a twisting input. Zhang et al. [7], [9] proposed a method using a haptic device for exploring and manipulating knotted spheres and cloth-like objects in 4D space. In the exploration of the objects, constraining the movement of the device to the projected objects improves the understanding of complex structure. In the manipulation of the objects, haptic feedback is presented by rendering the reaction of the pulling force.
In the study of tactile presentation, Martínez et al. [3] proposed a haptic display by introducing a vibrotactile glove that enables users to perceive the shape of virtual 3D objects. Ohka et al. [6] developed a multimodal display capable of stimulating muscles and tendons of the forearms and tactile receptors in the fingers for presenting tactile-haptic reality.
These approaches, however, are not intended to simulate 4D tactile stimulation.
§ 3 SYSTEM CONFIGURATION
As shown in Figure 1, we develop a 4D visualization system consisting of a personal computer (HP, Intel Core i7-8700 3.20GHz, 8GB RAM, NVIDIA GeForce GTX 1060), a 6-DoF head-mounted display (HMD) with a motion controller (HTC VIVE), a single-board computer (Raspberry Pi Zero), and a tactile glove. A user wears the HMD, and holds the motion controller in the hand wearing the tactile glove. The user observes 3D-projected 4D space in the virtual environment through the HMD. The position of the hand is recognized by the motion controller. When the user touches a 4D object, the tactile glove presents the corresponding tactile sensation to the user's hand. The software is implemented with Unity 2018.3.3f1 and SteamVR Unity Plugin v2.2.0.
|
| 40 |
+
|
| 41 |
+
Figure 2 shows a tactile glove equipped with a total of 30 vibration motors. Five motors are arranged on the palm side of the hand, five on the back side, and two on the front and the back of each finger as shown in Figure 3. The motors are driven by electric current, and the strength of the vibration stimuli is controlled by the single-board computer with pulse-width modulation.
|
| 42 |
+
|
| 43 |
+
< g r a p h i c s >
|
| 44 |
+
|
| 45 |
+
Figure 3: Arrangement of vibration motors in tactile glove.
|
| 46 |
+
|
| 47 |
+
< g r a p h i c s >
|
| 48 |
+
|
| 49 |
+
Figure 4: Structure of the 4D polytope compared to the 3D polytope.
|
| 50 |
+
|
| 51 |
+
§ 4 4D VISUALIZATION SYSTEM
|
| 52 |
+
|
| 53 |
+
In this section, we describe the $4\mathrm{D}$ visualization system which projects 4D objects into 3D virtual environment. The system is based on an algorithm introdued in McIntosh's 4D Blocks [4].
|
| 54 |
+
|
| 55 |
+
§ 4.1 DEFINITION OF 4D OBJECTS
|
| 56 |
+
|
| 57 |
+
In the system, we define 4D objects as 4D convex polytopes. Non-convex polytopes are constructed by combining convex polytopes. As shown in Figure 4, boundaries are composed of 3D objects called cells, and their intersections are divided into 3 different features; vertices, edges, and faces, depending on the number of dimensions.
|
| 58 |
+
|
| 59 |
+
§ 4.2 PROJECTION
|
| 60 |
+
|
| 61 |
+
Figure 5 shows the 4D projection model. 4D objects arranged in 4D space are projected to a $3\mathrm{D}$ screen. Positions of $4\mathrm{D}$ objects are described by $4\mathrm{D}$ vectors in the $4\mathrm{D}$ coordinate system, and their orientations are described by $4 \times 4$ orthogonal matrices. The center of the 3D screen is located at the origin of the 4D space. The $3\mathrm{D}$ screen has a dimension of $2 \times 2 \times 2$ , spreading in ${x}_{w}{y}_{w}{z}_{w}$ plane. Data defined in the $4\mathrm{D}$ world-coordinate system ${x}_{w}{y}_{w}{z}_{w}{w}_{w}$ are converted to data in the $3\mathrm{D}$ screen-coordinate system ${x}_{s}{y}_{s}{z}_{s}$ by removing $w$ - coordinated component. 4D objects are orthogonally projected in the 3D screen by this transformation. From the viewpoint of the suitability of tactile presentation, we selected the orthographic projection method. The system also supports perspective projection, which can be occasionally selected by a user.
|
| 62 |
+
|
| 63 |
+
< g r a p h i c s >
|
| 64 |
+
|
| 65 |
+
Figure 5: 3D and 4D projection model.
|
| 66 |
+
|
| 67 |
+
< g r a p h i c s >
|
| 68 |
+
|
| 69 |
+
Figure 6: Examples of drawings with similar drawings in 3D.
|
| 70 |
+
|
| 71 |
+
§ 4.3 DRAWING
|
| 72 |
+
|
| 73 |
+
In the system, face polygons and edge lines are used for drawing. Polygons and lines do not have normals in $4\mathrm{D}$ space, and are drawn as contours of cells. In order to make it easier to distinguish cells displayed on the 3D screen, the system draws the center of cells with semi-transparent color-coded polygons, in addition to edges and faces. Figure 6 shows the examples of drawings.
|
| 74 |
+
|
| 75 |
+
For accurate drawing, back-cell culling and hidden-hypersurface removal are applied. Occluded cells are determined by taking the dot product of a view direction and cell normals, and faces that belong to visible cells are selected so that they will be used for drawing.
|
| 76 |
+
|
| 77 |
+
As the system doesn't adopt voxel rendering [1], we cannot use depth buffering for hidden-hypersurface removal. Instead, we use clip units, which represents an area where an object obscures other objects behind it. Figure 7 shows an example scene where clip units work. As shown in Figure 8, clip units are identified by contact subsurfaces of front-facing surfaces and back-facing surfaces with respect to the viewer. Each clip unit is a halfspace whose boundary includes the subsurface, and the boundary is orthogonal to the screen. Intersections of clip units are used for clipping. Drawing polygons and lines are clipped by clip units of objects closer to the viewer.
|
| 78 |
+
|
| 79 |
+
< g r a p h i c s >
|
| 80 |
+
|
| 81 |
+
Figure 7: An example scene where clip units work. We don't draw the intersection of rectangle $B$ and the area hidden by $A$ .
|
| 82 |
+
|
| 83 |
+
< g r a p h i c s >
|
| 84 |
+
|
| 85 |
+
Figure 8: Calculating a clip unit in a 3D scene. Note that clip units are dimension-independent concepts. In $n\mathrm{D}$ ,"surfaces" mean $\left( {n - 1}\right) \mathrm{D}$ components, and "subsurfaces" mean $\left( {n - 2}\right) \mathrm{D}$ components.
|
| 86 |
+
|
| 87 |
+
§ 4.4 CONTROL
|
| 88 |
+
|
| 89 |
+
4D orientations have six degrees of freedom [8]. A user is able to rotate a 4D object by moving the motion controller while pressing the trigger. To naturally correspond the controller's 6-DoF operation in 3D space with the object's 6-DoF rotation in 4D space, the 3-DoF translational operation (Figure 9(a)) is related with the rotation involving the $w$ -axis, and the 3-DoF rotational operation (Figure 9(b)) is associated with the rotation not involving the $w$ -axis.
|
| 90 |
+
|
| 91 |
+
§ 5 TACTILE REPRESENTATION
|
| 92 |
+
|
| 93 |
+
In this section, we introduce a tactile representation method. For explaining the concept, we firstly describe how to express the touching sensation of a 2D-projected 3D object, and then expand it to the 3D projection of a 4D object.
|
| 94 |
+
|
| 95 |
+
§ 5.1 ANALOGY OF TOUCHING 3D OBJECTS IN 2D SPACE
|
| 96 |
+
|
| 97 |
+
When humans touch a screen where a 3D object is displayed, they feel the object as if it is situated. For example, suppose that we move the right palm straight towards the front wall. Figure 10 depicts three different situations. When the wall faces straight in front, a uniform pressure will be applied on the palm (Figure 10(a)). Alternatively, when the wall faces a little to the right, the thumb will first be stimulated. When the hand is pressed against the wall as it is, the wrist will bend and the entire palm will touch the wall, but the thumb side will receive a stronger force than the little finger side (Figure 10(b)). When the hand touches a corner of the wall, stronger sensation is perceived at the center of the hand (Figure 10(c)). These differences can be distinguished by the pressure applied to a palm.
|
| 98 |
+
|
| 99 |
+
The calculation of tactile stimulus generation proceeds as follows. We consider 3D space where a 3D object is situated, and the space is displayed in a 2D screen. When a user wearing a tactile glove touches the screen, hand-touched area is projected on the surface of the object. Then the distance between the projected hand and the screen is calculated. Figure 11 shows the three different situations of a wall, related with the cases presented in Figure 10. If the distance to the left is shorter than that to the right, it means the wall faces to the right. In this way, the strength of pressure is calculated by referring to the relative relationship of the distances. Figure 12 shows the results of the calculation.
|
| 100 |
+
|
| 101 |
+
< g r a p h i c s >
|
| 102 |
+
|
| 103 |
+
Figure 9: Rotating 4D objects by the motion controller operation.
|
| 104 |
+
|
| 105 |
+
Let $\Omega$ be a set of all actuators installed in the tactile glove. First of all, the actuators are projected orthogonally through the screen, and a subset $L \subset \Omega$ of actuators being projected on the object is detected. For each projected points of actuators $i \in L$ , the distance ${l}_{i}$ from the screen is calculated. Then the normalized relative strength ${s}_{i}$ is calculated:

$$
s_i = \max\left\{ \alpha \left( l_{\min} - l_i \right) + 1,\; 0 \right\}, \tag{1}
$$

where $l_{\min} = \min_{k \in L} l_k$ and $\alpha$ is a gradient constant. The formula is derived as shown in Figure 13.

In general, corners give stronger pressure than a flat wall, and apexes give stronger pressure than corners. In the same way, an uneven wall gives stronger pressure than a flat wall. Taking this into account, the strength of the stimulus $h_i$ is adaptively adjusted according to the degree of sharpness and inclination:

$$
h_i = \left( 1 - \beta \frac{\sum_{k \in L} s_k}{N} \right) s_i, \tag{2}
$$

where $\beta$ is a reducing constant $(0 \leq \beta \leq 1)$ and $N = \#\{ k \in L \mid s_k > 0 \}$ is the number of active actuators. Figure 14 shows examples of the application of the formula.

Finally, the strength $h_i$ is converted to a voltage value $V_i$:

$$
V_i = \gamma h_i, \tag{3}
$$
where $\gamma$ is a constant.
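
As a concrete illustration, equations (1)-(3) can be sketched as a short routine. The constants `alpha`, `beta`, and `gamma` below are placeholder values, since the paper does not report the ones actually used:

```python
import numpy as np

def actuator_voltages(l, alpha=1.0, beta=0.5, gamma=1.0):
    """Map distances l_i of the projected actuators in L to drive voltages.

    `l` holds one distance per actuator projected onto the object.
    alpha, beta, gamma are assumed example constants.
    """
    l = np.asarray(l, dtype=float)
    # Eq. (1): normalized relative strength, clamped at zero.
    s = np.maximum(alpha * (l.min() - l) + 1.0, 0.0)
    # Eq. (2): reduce broad flat contacts; N counts the active actuators.
    n_active = np.count_nonzero(s > 0)
    h = (1.0 - beta * s.sum() / n_active) * s
    # Eq. (3): linear conversion to voltage.
    return gamma * h
```

For a flat wall (all distances equal) every actuator receives the same voltage $\gamma(1 - \beta)$, while a tilted wall yields a gradient that vanishes on the far side, matching Figures 12 and 14.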
Figure 10: Various situations when touching a wall.
Figure 11: Calculation of tactile stimulus generation.
Figure 12: Calculation results.
§ 5.2 APPLYING TO 4D SYSTEM

The idea and the calculation method presented in the previous section can be extended to the 3D projection of 4D objects. In the 3D system, a user receives 2D-mapped stimulus patterns through the palm of the hand (Figure 15(a)). In the 4D system, the user receives 3D-mapped data through the entire skin of the hand holding the controller (Figure 15(b)). Touching the 2D screen in the 3D system corresponds to putting a hand into the 3D screen in the 4D system. When the hand enters the screen, the stimulus is calculated in the same way as in the 2D case.

Figure 16 shows the model of tactile calculation. The VR environment of the system consists of the 3D screen and a rendered user's hand. The 4D object is kept at a distance from the screen so that the hand is prevented from colliding with the screen. When the hand enters the defined 4D space, the relative position of each actuator with respect to the screen, $d_i = (x_{d_i}, y_{d_i}, z_{d_i})$, is detected in the 3D screen coordinate system $x_s y_s z_s$, and the position is converted to the 4D world coordinate system $x_w y_w z_w w_w$ as

$$
c_i = \left( x_{c_i}, y_{c_i}, z_{c_i}, w_{c_i} \right) = \left( x_{d_i}, y_{d_i}, z_{d_i}, 0 \right). \tag{4}
$$

Then the line $\mathbf{L}_i$ extending from $c_i$ in the positive direction of the $w_w$ axis is defined:

$$
\mathbf{L}_i = c_i + \left( 0, 0, 0, t \right) \quad (t \geq 0). \tag{5}
$$

By clipping $\mathbf{L}_i$ with the 4D object, the intersection point

$$
q_i = \left( x_{q_i}, y_{q_i}, z_{q_i}, w_{q_i} \right) = \left( x_{c_i}, y_{c_i}, z_{c_i}, t' \right) \tag{6}
$$

of $\mathbf{L}_i$ and the object is detected. If $\mathbf{L}_i$ is not clipped by the object, the corresponding location of the user's hand is not projected onto the object.
Figure 14: Visualization of equation 2 $(\beta = 0.5)$.
Here, ${l}_{i}$ is calculated as the distance between ${c}_{i}$ and ${q}_{i}$ :

$$
l_i = \sqrt{ \left( x_{c_i} - x_{q_i} \right)^2 + \left( y_{c_i} - y_{q_i} \right)^2 + \left( z_{c_i} - z_{q_i} \right)^2 + \left( w_{c_i} - w_{q_i} \right)^2 }. \tag{7}
$$

$l_i$ is calculated for all locations and converted into the strength of the stimulus using equations (1), (2), and (3).
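
Assuming a hypothetical `clip_w` routine that returns the ray parameter $t'$ at which $\mathbf{L}_i$ first enters the object (or `None` if it misses), the per-actuator distance of equations (4)-(7) reduces to a short sketch:

```python
import numpy as np

def actuator_distance(d, clip_w):
    """Distance l_i from an actuator at screen position d = (x, y, z)
    to the 4D object along the +w axis.

    `clip_w` is a hypothetical clipping routine: given a 4D point c, it
    returns the smallest t >= 0 at which the ray c + (0, 0, 0, t) enters
    the object, or None if the ray misses the object entirely.
    """
    c = np.array([d[0], d[1], d[2], 0.0])   # Eq. (4)
    t = clip_w(c)                           # clip the ray of Eq. (5)
    if t is None:
        return None  # this hand location is not projected on the object
    q = np.array([c[0], c[1], c[2], t])     # Eq. (6)
    return float(np.linalg.norm(q - c))     # Eq. (7)
```

Because $c_i$ and $q_i$ differ only in the $w$ coordinate, equation (7) collapses to $l_i = t'$ here.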
§ 6 RESULTS OF TACTILE DISPLAY

The system is able to display multiple objects from any viewpoint; in this section, however, we deal with a single object situated in the center of the screen as a simple example.

For visual validation of the operation, we implemented a function that displays the strength of the stimulus calculated at each location. For precise rendering of tactile stimuli, the calculation is conducted at 242 locations arranged in grids on the user's hand. As shown in Figure 17 (a), (b), and (c), the locations and the amplitude of each tactile stimulus are superimposed on the left side of the 3D screen. The stimuli displayed in a cube show the area to be mapped on a virtual hand, so that the user is able to intuitively recognize the tactile sensation given to the hand. Based on the 242 calculated values in the cube, the stimuli at the 30 points corresponding to the motor locations, colored in cyan, are simultaneously sent to the motors to present the tactile sensation.

As shown in Figure 17, when a user touches a hypercube with its cell facing the front, a uniform stimulus is generated. The magnitude of this stimulus is the same even when only part of the hand is in contact. Note that in all the following figures, the hand is in the same orientation.
Figure 15: Tactile system model.
Figure 16: Tactile calculation model.
If the cell is tilted slightly to the left, the stimulus becomes greater on the right and smaller on the left (Figure 18(a)). The gradient increases as the slope increases, and eventually the leftmost stimulus disappears (Figure 18(b)).

By manipulating the hypercube, the user is able to touch and recognize the shape intuitively. When a face is oriented toward the user, a wall-shaped tactile sensation is presented, as shown in Figure 19(a). If an edge is situated in front, the user feels a line-shaped sensation, as presented in Figure 19(b). A vertex presents an isolated strong stimulus, with the sensation gradually weakening in the peripheral area, as shown in Figure 19(c).

The user is also able to recognize differences between objects that look the same. The three objects depicted in Figure 20 (b), (c), and (d) appear identical from certain directions (Figure 20(a)), yet they can be distinguished by touch. Figure 21(a) presents a 4D cone, where the central area gives strong stimuli that gradually decrease toward the peripheral area. When touching a flat surface, the stimuli appear uniformly, as shown in Figure 21(b). For the hollow shape, the stimuli in the central area are weaker than in the surrounding area, as presented in Figure 21(c).

The tactile system was experienced and evaluated by one subject. Tactile stimuli rendered by vibration patterns were presented as designed, and the subject could distinguish the differences described above. He had some difficulty recognizing the exact boundaries of faces from the different stimuli, but this could be compensated for by moving the hand appropriately.
Figure 17: Putting a hand inside the cube representing the surface of a hypercube.
Figure 18: Touching the hypercube tilted slightly to the left.
§ 7 CONCLUSION AND FUTURE WORK

In this paper, we proposed a system that displays 4D shapes by rendering 4D tactile sensations. 4D cognition can be supplemented by combining visual information with tactile information. In future work, the system should be verified objectively and quantitatively with a larger number of subjects.

Several improvements are possible. Since this system uses simple vibration motors for tactile rendering, the resolution of strength and position is not sufficient for subjects to recognize more detailed shapes. Choosing higher-quality actuators would solve this problem. Moreover, finely controlled stimuli may be able to express much richer information, such as friction and directional force.

The 4D space can also be enriched by introducing four-dimensional physics. In the actual 3D world, objects move upon collision according to the laws of physics. By incorporating 4D physics, the tactile experience will become much more realistic.
Figure 19: Touching face, edge and vertex of the hypercube.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/IW70F9A__z/Initial_manuscript_md/Initial_manuscript.md
ADDED
# How Tall is that Bar Chart? Virtual Reality, Distance Compression and Visualizations

1st Author Name
Affiliation
City, Country
e-mail address

2nd Author Name
Affiliation
City, Country
e-mail address

3rd Author Name
Affiliation
City, Country
e-mail address
Figure 1. The virtual environments used in Study 1, each with differing levels of depth cues. Participants could look around with the HMD in VR and used the mouse to look around in the screen virtual environment conditions.
## ABSTRACT

As VR technology becomes more available, VR applications will increasingly be used to present information visualizations. While data visualization in VR is an interesting topic, there remain questions about how effective or accurate such visualizations can be. One known phenomenon in VR environments is that people tend to unconsciously compress or underestimate distances. However, it is unknown if or how this effect alters the perception of data visualizations in VR. To this end, we replicate portions of Cleveland and McGill's foundational perceptual visualization studies in VR. Through a series of three studies, we find that distance compression does negatively affect estimations of actual lengths (heights of bars), but does not appear to impact relative comparisons. Additionally, by replicating the position-angle experiments, we find that (as with traditional 2D visualizations) people are better at relative length evaluations than relative angles. Finally, based on these findings, we develop a series of best practices for performing data visualization in a VR environment.
## Author Keywords
Visualization; VR.
## ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous
## INTRODUCTION

As Virtual Reality (VR) technology continues to be developed and expanded, workplace tasks such as viewing information visualizations are becoming more likely to be executed in a VR environment. While much of the research around traditional screen-based visualizations likely applies to VR, it is unclear how specific VR-related phenomena might alter how effective or accurate these visualizations are. Of particular note, it has been shown that in VR environments people tend to unconsciously underestimate distances, a phenomenon called distance compression [1, 9, 10, 22, 26, 28, 33, 34, 39, 40, 45]. However, to this point, designers of VR visualizations have not had any guidance about how distance compression will alter visualization effectiveness or user accuracy. For example, even a simple bar chart uses the heights/lengths of the bars to represent data, and it is unclear how distance compression will alter one's ability to measure or compare the lengths of the bars.

To address this problem, we looked to foundational work by Cleveland and McGill on the graphical perception of paper-based visualizations [6], which has also been replicated in a digital context [14]. We performed three studies, replicating the position-length and the position-angle experiments in a VR environment. Our first study, using the bar chart position-length experiment, presented bar charts in virtual environments both on the screen and in VR and asked participants to estimate actual distances (i.e., the bar is 1 m tall) and relative distances (i.e., bar A is 80% as tall as bar B). We explored the suggestion that varying degrees of depth cues could reduce distance compression [10, 12], as well as bar charts of varying scales. In a second study, rather than depth cues, we looked at perspective, providing participants different ways to move around and look at the various scaled charts in VR. Finally, in the last experiment, we implemented the position-angle experiment, looking at how actual lengths/angles and relative lengths/angles were estimated in bar charts, scatter plots, and pie charts.

Through these studies, we make five contributions. First, we confirm the existence of distance compression in VR visualizations, but observe that it also applies similarly to screen-based environments. We show that distance compression does negatively affect actual length/distance measurements, but may have only a small or negligible effect on relative comparisons. Depth cues had no discernible effect on the accuracy of measurements, but do appear to affect one's perception of their ability to be accurate. As in traditional 2D visualizations, people are better at relative length evaluations than relative angles. Finally, we provide a set of design guidelines to inform the implementation and creation of effective and useful visualizations in VR.
## RELATED WORK
## VR and Visualization

VR visualizations fall under the field of 'Immersive Analytics', though this also refers to augmented reality (AR) and mixed reality (MR) visualizations [7]. In the late 90s, people were beginning to talk about VR and visualization, even though systems of the day (mostly VR caves) did not quite provide adequate capabilities [5, 21, 31, 32, 36]. More recent work provides concrete examples in the environmental [15, 16, 27, 30], medical [11, 19, 47], and archaeology [4, 24, 38] domains. A notable example is ImAxes [8], a dynamic system where users draw and connect axes in midair in VR, allowing for multiple dynamic chart types.

When interacting with a VR visualization, one should keep in mind that multiple views and input modalities may not transfer from the screen to VR [23]. Furthermore, the affordances of an interaction may be different in VR [2]. For example, Simpson et al. found that walking around the dataset in VR was not better than using a controller to rotate it [37]. Cybersickness must also be considered, as certain design choices, such as using a controller to move around, may work on a screen but quickly induce nausea in VR [48, 49]. There may be many ways to combat this; for example, Cliquet et al. suggest allowing the user to sit [7].
## Visualization and Perception

Cleveland and McGill's foundational work [6] showed that people are better at estimating lengths than areas, and better at estimating areas than volumes. Furthermore, people are better at position estimations (e.g., a scatter plot) than angle estimations (e.g., a pie chart). These results have been confirmed and replicated more recently by Heer and Bostock with a Mechanical Turk based study [14].
## 2D vs 3D

The usefulness and effectiveness of 2D visualizations have often been compared to 3D. 2D is generally considered the best approach [35, 43], especially for tasks that require precision [18] and tasks that suffer from perspective distortion (e.g., distance estimation) [17]. However, 3D visualizations can still be useful, particularly when the data has high levels of detail, structure, and/or complexity (e.g., 3+ dimensions) [17, 18, 25, 44] or when the task involves exploring 3D representations of the real world (e.g., terrain or other real-world objects) [17, 18]. 3D may also prove useful by providing ways to explore overlap in network graphs [13, 41, 42].

In VR, most visualizations can now be considered 3D visualizations: even when they are mapped onto a plane, the ability to look at them from multiple angles might cause occlusion or other problems of perspective. However, we bring up the debate between 2D and 3D graphs because there is some indication that the binocular depth cues provided by modern VR tip the equation in favor of 3D in some situations. When considering only scatterplots, for example, providing binocular cues has shown that 3D visualizations tend to win over 2D [25, 29, 44], although this is not always the case [35, 43].
## VR and Perception

People tend to underestimate distances in VR when using a Head Mounted Display (HMD) [1, 9, 10, 26, 28, 33, 34, 39, 40, 45]. This effect exists even when the VR environment is very similar to, or a recording of, the real world [28, 34, 40], and may be partially caused by the limited field of view of the HMD [9, 22]. Physical factors, such as the weight of the HMD, may also be important [45], especially as the effort perceived to be necessary to walk a particular distance (e.g., if one were wearing a heavy backpack) can affect the perception of that distance [46]. The parameters which affect distance compression have been investigated but are not fully understood. Furthermore, it is unclear how distance compression may affect visualization tasks like comparing two bars in a bar chart.

One possible explanation is a lack of realistic depth cues [10, 12] in VR. Some of these cues, such as light, texture, shape, luminance, linearity of light, object occlusion, motion, etc., can be manipulated to be more or less available in a virtual environment. Unlike a 2D screen, VR headsets provide binocular cues because they render a separate image for each eye (although at least one study has suggested that monocular/binocular cues alone are not responsible for distance compression [9]). In fact, current VR headsets, such as the HTC Vive, allow most depth cues that are available in the real world to be implemented in VR [10]. Our first study looks at how the fidelity of depth cues might change distance compression when looking at a visualization.

It has also been suggested that distance compression might be affected by perceptually different distance zones. Armbrüster et al. suggest that distance compression is smaller for objects in peripersonal space (< 1 m), where an object is within arm's reach [1]. Cutting calls this zone personal space (< 1.5 m), splitting up larger distances into action space (< 30 m, where interaction of some sort is feasible) and vista space (30 m+, further than one would expect to be able to act) [10]. To further investigate these categories of distance, the first two studies use three different sizes of charts, roughly corresponding to personal, action, and vista bar heights.
## STUDY 1 - DEPTH CUES AND SCALE

The chronic underestimation of distances is troublesome for VR, particularly since visualizations tend to encode data using absolute or relative distances. Even a simple bar chart uses length to communicate data to the viewer.

Our first study aimed to confirm that visualizations in VR are compromised by distance underestimation. Since it has been suggested that more depth cues [10, 12] could lessen underestimation, we designed three virtual environments with differing levels of depth cues.

Few Depth Cues: Objects had a consistent luminance and no texture. Shadows were disabled and the sky was a medium gray. A simple textured floor was provided, as floating over a void was nauseating in VR. This condition represented a simple chart without embellishments.
Some Depth Cues: Bars now had a slightly crumpled paper texture and responded to the lights in the scene, casting shadows. The floor contained an arbitrary grid, and was also textured slightly. Aerial perspective was applied, allowing distant objects to fade into the sunset sky somewhat. We consider this a 'best practices' chart, with minimal added embellishments, all of which directly contribute depth cues to the environment.

Rich Depth Cues: In addition to texture, luminance, and aerial perspective, the scene was augmented with objects that could be used to determine relative sizes. Trees, a light post, and some bushes provided general cues about scale. A house, car, and park bench were also in the scene, as these have relatively standard sizes. Similarly, a skyscraper was in the scene, as the floor of a building also has a fairly standard height. These objects were not immediately in view; the participant would have to look at them directly. Grass and flowers on the ground provided cues of relative density. This condition, while very rich with depth cues, was also a bit extreme; one can imagine that not every visualization has a place for trees, cars, and buildings (Figure 1). However, Bateman et al. [3] showed that embellishments that add context to the visualization can improve memorability, and given the prevalence of infographics, it is not impossible that some visualizations might provide relevant, contextual objects (e.g., a visualization about deforestation could contain trees).

We were also interested in measuring this effect at multiple scales. The corresponding chart heights and task-specific bar lengths can be seen in Table 1. In all conditions, the participant viewed the visualization from 4 m back.
Personal scale: The entire visualization could be seen at one time when looking straight ahead without looking significantly up or down.

House Scale: The larger visualization required the participant to look up somewhat to see the entire chart.

Skyscraper Scale: The visualization was extremely tall, requiring the viewer to tilt their head back and look way up. While a bar chart as high as a skyscraper is unlikely to be very useful, we included this scale because, for very large or complex visualizations, it is possible that a user could navigate to a view where some of the data is very far away.

Finally, we compared VR with an on-screen condition which featured virtual environments with the same scales and depth cues (without, of course, the binocular depth cues provided by the HMD). This gave us a scale (3) by depth-cues (3) by screen/VR (2) factorial study. We also added a real-world condition, with a simple bar chart displayed on a monitor. This provided a baseline, as it was similar to the foundational work done by Cleveland and McGill [6].
## Task

Cleveland and McGill [6] provide several tasks for evaluating the perception of lengths in a visualization. We chose to mimic their position-length experiment, specifically using their Type-1 task (as this had the lowest error). This task presents a 5-value bar chart, with two side-by-side bars marked with a dot, whose percentage differences range from 18% to 83%. The participant is asked to evaluate, without explicitly measuring, what percentage the smaller bar is of the larger. Our task mimics theirs, down to the way they chose relevant values for the bars in the task, except that we used a single bar chart instead of two side-by-side bar charts. We also colored the bars of interest, because a dot at the bottom would be insufficient for differentiation when looking up at the skyscraper scale.

Participants completed 7 blocks (depth-cues (3) x screen/VR (2) + 1 real-world), counterbalanced with a Latin square design. In each block they viewed 18 bar charts (126 total), 6 of each scale, in random order, except in the real-world condition, where all 18 bar charts fit on the screen at 30 cm tall. Using [6]'s template as a guide, we randomly generated sets of 6 tasks, each set containing two tasks with a percentage difference between 10% and 40%, two between 40% and 60%, and two between 60% and 90%. For every bar chart, participants were asked to specify the percentage the smaller bar was of the larger, and the absolute height of the smaller bar in the virtual environment (or the real-world height on the screen, in the real-world block). While relative comparisons (bar compared to axis) are indeed the more common visualization task, we also asked about actual heights, as this was relevant to the distance compression literature and is a relevant visualization task in some situations (e.g., a terrain map where 1 m = 1 km in the real world). Participants were told not to walk around in VR, but could look around in any direction. In the on-screen virtual environment, participants could not move, but could look around using the mouse. Their 'virtual head' was placed at the same height as their real head had been in VR.
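
The task-set generation described above can be sketched as follows; the use of Python's `random` module and uniform sampling within each band are assumptions, as the exact generator is not specified:

```python
import random

def generate_task_set(rng=random):
    """One set of 6 position-length tasks: two percentage differences
    drawn from each of the three bands described in the text.
    Uniform sampling within a band is an assumed detail.
    """
    bands = [(10, 40), (10, 40), (40, 60), (40, 60), (60, 90), (60, 90)]
    tasks = [rng.uniform(lo, hi) for lo, hi in bands]
    rng.shuffle(tasks)  # present the six tasks in random order
    return tasks
```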
Like in the original position-length experiment [6], participants were instructed to not explicitly measure (e.g., using a finger) or explicitly calculate distance. Also, because all participants in the pilot study expressed that the task was too hard and that they had no confidence in their estimations, we included an instruction that the task was supposed to be hard and not to feel discouraged.
## Measures

Before the study, participants filled out a demographics questionnaire asking about VR and game experience. Participants responded verbally when asked for percentages and heights. Answers were recorded and later merged with the logged study data. After the study, they filled out a simulator sickness questionnaire (SSQ) [20] and a questionnaire asking whether they thought they performed better when estimating percentages or heights, which depth-cues virtual environment they thought they performed best in, and whether they were better on the screen or in VR; finally, they were asked to rate each of the 7 blocks in terms of how well they thought they performed.

As in the original position-length experiment [6], we took the log error of the percentage estimations:

$$
\text{LogErrorP} = \log_2\left( \left| \text{percent}_{\text{guessed}} - \text{percent}_{\text{actual}} \right| + \frac{1}{8} \right)
$$

For the height estimations, we calculated the error as a percentage of the actual height being estimated. This means that if the bar was 20 m tall and the participant guessed either 18 m or 22 m, they had a height error of 10%.

$$
\text{ErrorH} = \frac{\left| \text{height}_{\text{guessed}} - \text{height}_{\text{actual}} \right|}{\text{height}_{\text{actual}}}
$$
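
The two error measures are straightforward to transcribe as small helpers:

```python
import math

def log_error_p(percent_guessed, percent_actual):
    """Cleveland-McGill log-2 error for percentage estimations."""
    return math.log2(abs(percent_guessed - percent_actual) + 1.0 / 8.0)

def error_h(height_guessed, height_actual):
    """Height-estimation error as a fraction of the actual height."""
    return abs(height_guessed - height_actual) / height_actual
```

For example, guessing 18 m or 22 m for a 20 m bar gives an `error_h` of 0.1, matching the 10% example above.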
## Participants

Eighteen participants were recruited for Study 1; one was removed for not following the instructions consistently, and one was removed as an outlier with results more than 2 standard deviations from the mean, leaving data for 16 participants. Details about the participants can be found in Table 1. Participants were remunerated with a 25 CAD gift card.
|
| 126 |
+
|
| 127 |
+
## Results

Results were analyzed with two (depth-cues (3) x screen/VR (2) x scale (3)) RM-ANOVAs. (The real-world condition was not part of these RM-ANOVAs; instead, it provided a sanity check that our participants performed similarly to [6].) Overall measurement means and standard deviations can be found in Table 1 and charts are in Figure 2.

|
| 132 |
+
|
| 133 |
+
Figure 2. Log percent error (left) and height estimation error (right) for screen/VR (top row), scale (middle row) and level of depth cues (bottom row) used in Study 1. (Note: Error bars show standard error.)
|
| 134 |
+
|
| 135 |
+
## Percentage Log Error

There was a main effect of screen/VR (F(1,15) = 27.63, p < 0.01). When given the same virtual environments on the screen and in VR, participants had less error in VR (Figure 2). There was no main effect of depth-cues (p = .53). There was a main effect of scale (F(2,30) = 185, p < 0.01) (Figure 2). People were significantly worse at larger distances.
There was an interaction effect of screen/VR x scale (F(2,30) = 32.49, p < 0.01). At the personal scale, screen-based virtual environments had less error than VR, but this reversed at the larger scales.

The real-world condition had a mean log percent error of 1.7 (SD: 0.94), which is slightly higher than, but very similar to, the value of 1.5 achieved in the original paper-based task [6].
## Height Error

As expected, 77.5% of all height evaluations were underestimations. There was a main effect of screen/VR (F(1,15) = 5.80, p < 0.05), with participants having less error in VR. There was a main effect of scale (F(2,30) = 5.403, p < 0.05), with higher error at larger scales. There was no main effect of depth-cues (p = .15) and no interaction effects.
The real-world condition had a mean height error of 20% (SD: 16%).

## Subjective Rankings
Participants indicated whether they were better in VR (62%), better in the screen-based virtual environment (19%), or performed equally well in both (19%). Most participants indicated that they performed best in the rich virtual environment (81%), while a few indicated they were best in the *some* depth-cues virtual environment (19%). The 7 blocks (depth cues (3) x screen/VR (2) + 1 real-world), sorted by mean participant rank, are: rich/VR (1.6), some/VR (3.0), rich/screen (3.1), some/screen (4.5), real-world (4.7), few/VR (4.9), few/screen (6.1). Although we did not formally record participants with audio or video, we took notes when they commented on the helpfulness (or lack thereof) of a virtual environment during or after the study. Most participants commented that the task was very hard, particularly with heights (e.g., "I don't think I am very good at this", "I have no idea how tall that is"). Every participant commented at least once that some facet of the rich depth cue conditions was helpful (e.g., "The trees help", "I like the building, I can count the floors", "How big is a house, that chart is as big as a house").

## Summary of Results
In general, people were quite good at evaluating percentages, but poor at evaluating heights. They were better in VR, perhaps due to binocular depth cues, or due to physical sensations such as tilting one's head back to look up. Scale was, as expected, important, with larger distances resulting in more error in both measurements. Depth cues did not seem to be influential in either height or percentage evaluations. This first study confirmed that distance underestimation occurs in a visualization context, in this case bar charts, in VR. However, the results suggest that percentage estimations are not nearly as negatively affected as height estimations. Participants were off by 9.7% on average, which is very small considering that most answers were given as a multiple of five, introducing an expected error of 2.5%. Furthermore, VR had less error than the equivalent screen-based virtual environments, meaning that VR might be a better option than a screen-based visualization where one needs to look around, at least in some situations.

Subjectively, people felt they were more accurate at percentages and better in VR. However, even though we did not find a significant difference between the different depth-cue conditions, people collectively felt that they were more accurate in the rich depth cue conditions. This is interesting because it means that while depth cues might be less impactful on task performance, they do seem to impact user comfort and users' perception of their own competency.
## STUDY 2 - MULTIPLE PERSPECTIVES

In this study we were interested in how perspective and motion would change perception of distances. In particular, the fixed position near the base of the bar chart used in Study 1 meant that for the larger scales of charts, users would experience significant perspective issues such as foreshortening.
Other than providing movement and the ability to take a new perspective, we also made a few changes relative to Study 1. Since we found no effect of depth-cues on task performance, we fixed this factor at our sunset-like, some depth cues virtual environment, which we consider a reasonable best practice. Despite participants' preference for the park-like rich depth cue environment, we acknowledge that complex context-rich settings full of trees and buildings may not be universally suitable for all visualizations. The scales we used in this study did not change; however, since we were interested in letting participants take perspectives that were possibly far away, we made our bar charts 1.5x as wide to be more visible from a distance. This meant that we moved the front and center starting position back 1.5 m such that at the personal scale the participant would still see the whole chart. We then calculated, using this front location, two other fixed positions (15 m to the left and right), and a variable back position such that the view was elevated and far enough away that both colored bars could be seen in their entirety. This back position was different for every bar chart and provided the participant with a perspective that did not require them to look up or down and removed foreshortening effects.
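The variable back position can be derived from simple view geometry. The sketch below is our reconstruction under assumed values (a 90° vertical field of view and 10% padding); the paper does not report its exact procedure:

```python
import math

def back_position(chart_height_m, vertical_fov_deg=90.0):
    """Return (distance_back, eye_height) for an overview of the whole chart.

    Placing the eye at half the chart's height lets the viewer see the chart
    without looking up or down; the distance follows from
    d = (h / 2) / tan(fov / 2), padded by 10% so the chart's top and bottom
    are not at the very edge of the view. The FOV and padding values are
    assumptions, not parameters reported in the paper.
    """
    half_fov = math.radians(vertical_fov_deg / 2)
    distance = (chart_height_m / 2) / math.tan(half_fov) * 1.1
    return distance, chart_height_m / 2

# House-scale chart (30 m tall): eye 15 m up, about 16.5 m back.
d, eye = back_position(30.0)
```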

|
| 166 |
+
|
| 167 |
+
Figure 3. Front platform near personal scale bar chart. Players would view the world from the center of the platform. In some conditions platforms functioned as elevators.
|
| 168 |
+
|
| 169 |
+
Unlike the first study, where participants stood on the ground, here participants stood on virtual platforms (Figure 3). These hexagonal platforms featured transparent glass railings, interaction instructions, and lights that would light up when the participant stood on a centrally located pressure plate. (Participants were asked to return to the center between tasks.) The width of the platform (about 2 m) corresponded to the maximum walkable space as calibrated by the HTC Vive. This meant that participants could always walk around on a platform and even lean over the railings safely. Other than being functional in terms of movement, the consistently sized platforms could be used for relative sizing, like the objects in the rich depth cues virtual environment.

Participants were always on a platform; however, we provided the following movement modes.
Front Platform Only: Like our naïve perspective chosen in the first study, the participant was stuck at the front and bottom of the chart. The participant could not teleport to a new location or move the platform up or down. However, unlike the first study, participants could walk around on the platform.
Table 1: Study Details
<table><tr><td colspan="2"></td><td>Study 1</td><td>Study 2</td><td colspan="2">Study 3</td></tr><tr><td colspan="2">Task Type</td><td>Position-Length [6]</td><td>Position-Length [6]</td><td colspan="2">Position-Angle [6]</td></tr><tr><td colspan="2"></td><td></td><td></td><td>Bar/Scatter</td><td>Pie</td></tr><tr><td rowspan="6">Scale</td><td>Personal: Height</td><td>3 m</td><td>3 m</td><td>13 m</td><td>5 m</td></tr><tr><td>Personal: Task Bar Height</td><td>< 1.7 m</td><td>< 1.7 m</td><td>< 5 m</td><td>-</td></tr><tr><td>House: Height</td><td>30 m</td><td>30 m</td><td>30 m</td><td>12 m</td></tr><tr><td>House: Task Bar Height</td><td>< 17 m</td><td>< 17 m</td><td>< 12 m</td><td>-</td></tr><tr><td>Skyscraper: Height</td><td>180 m</td><td>180 m</td><td>-</td><td>-</td></tr><tr><td>Skyscraper: Task Bar Height</td><td>< 100 m</td><td>< 100 m</td><td>-</td><td>-</td></tr><tr><td rowspan="4">Participants</td><td>Age</td><td>M: 35.0, SD: 10.7</td><td>M: 34.5, SD: 11.0</td><td colspan="2">M: 28.5, SD: 4.3</td></tr><tr><td>N</td><td>16 (4 female)</td><td>10 (2 female)</td><td colspan="2">10 (3 female)</td></tr><tr><td>Measurement</td><td>4 used metric</td><td>5 used metric</td><td colspan="2">6 used metric</td></tr><tr><td>VR Experience</td><td>11 tried before, 3 very familiar, 2 experts</td><td>3 no experience, 2 tried before, 5 very familiar</td><td colspan="2">7 tried before, 2 very familiar, 1 expert</td></tr><tr><td rowspan="5">Measures</td><td>Log Percent Error</td><td>M: 2.67, SD: 1.37</td><td>M: 1.96, SD: 1.18</td><td colspan="2">M: 1.85, SD: 1.67</td></tr><tr><td>Height Error</td><td>M: 32%, SD: 23%</td><td>M: 34%, SD: 22%</td><td colspan="2">M: 34%, SD: 27%</td></tr><tr><td>Angle Error</td><td>-</td><td>-</td><td colspan="2">M: 20%, SD: 16%</td></tr><tr><td>Simulator Sickness</td><td>M: 5.1, SD: 4.5</td><td>M: 10.1, SD: 6.2</td><td colspan="2">M: 6.2, SD: 4.8</td></tr><tr><td>Performed Better At</td><td>percents (88%), heights (0%), both (12%)</td><td>percents (90%), heights (0%), both (10%)</td><td colspan="2">percents (50%), heights (10%), both (40%)</td></tr></table>
Back Platform Only: The participant started on a variable location back platform and could walk around but not teleport or use the platform elevator.
Teleport Anywhere: Participants started at the front location and could teleport/move the platform they were on by pressing the trigger, aiming a visible arc pointer at a valid location (marked by a repeating blue pattern), and then releasing the trigger. Participants could teleport anywhere in a 100 m square centered on the chart. Participants could not move so close to the chart that they intersected it (the invalid area was marked in red). The elevator platform could be moved up and down using a diegetic interface on the touchpad.
Teleport 4 Platforms: Participants could teleport to the front, left, and right fixed-location elevator platforms as well as the variable location back platform. Additional platforms were only shown while teleporting so they did not block the chart. Platforms were selected by aiming the arc pointer directly at, near, or in the general direction of a platform.
Teleport Front/Back: Participants could teleport as in the Teleport 4 Platforms condition, but could only access the front and back elevator platforms.
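The valid-teleport region described above (anywhere in the 100 m square except intersecting the chart) reduces to a simple point test. In this sketch the chart-footprint half-size is an assumed value; the study does not report the exact buffer used:

```python
def is_valid_teleport(x, z, area_half=50.0, chart_half=2.0):
    """True if (x, z) is a legal teleport target: inside the 100 m square
    centred on the chart (the blue pattern) but not so close that the user
    would intersect the chart (the red zone). chart_half is an assumption."""
    inside_area = abs(x) <= area_half and abs(z) <= area_half
    inside_chart = abs(x) <= chart_half and abs(z) <= chart_half
    return inside_area and not inside_chart

# 10 m to the side is fine; the chart footprint and points past 50 m are not.
print(is_valid_teleport(10.0, 0.0))   # True
print(is_valid_teleport(0.0, 0.0))    # False
print(is_valid_teleport(60.0, 0.0))   # False
```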

|
| 190 |
+
|
| 191 |
+
Figure 4. Types of movement allowed in Study 2.
|
| 192 |
+
|
| 193 |
+
## Task

Tasks were generated in the same way as in Study 1. Participants completed 5 counterbalanced blocks, one for each movement type. Each block was introduced in a training mode where participants could try out the relevant movement interactions. During the study tasks, participants were asked to teleport at least once before giving their answers (if applicable). They were asked to return to the center of the platform between each of the 90 tasks.
## Measures

The measures employed were similar to Study 1, except that the final questionnaire asked participants to rank their performance with the 5 movement types.
## Participants

Study 2 had 11 participants; one was excluded as a clear outlier (more than two standard deviations from the mean), resulting in data for 10 participants. All participant details can be found in Table 1. The study took one hour and participants were remunerated with a 25 CAD gift card.
## Results

We performed two (movement-type (5) x scale (3)) RM-ANOVAs. Overall measurement means and standard deviations are in Table 1, and charts are in Figure 5.

|
| 210 |
+
|
| 211 |
+
Figure 5. Log percent error (left) and height estimation error (right) for each movement type (top row) and scale (bottom row) used in Study 2. (Note: error bars show standard error.)
|
| 212 |
+
|
| 213 |
+
## Percentage Log Error

There was a main effect of movement-type (F(4,36) = 51.6, p < 0.001). Post-hoc tests showed that Front Platform Only was significantly worse than all other conditions. There was no effect of scale (p = 0.12) and no interaction effects. Overall measure averages can be found in Table 1.
## Height Error

There was no main effect of movement-type (p = .31), but there was an effect of scale (F(2,18) = 6.7, p < 0.01). Post hoc tests showed that the largest scale led to significantly higher error than the smallest and medium scales (p < 0.05).
## Subjective Rankings

The movement-types, sorted by mean rank, are: teleport-anywhere (1.8), teleport-front/back (2.0), teleport-4-platforms (2.1), back-platform-only (4.1), front-platform-only (4.7).
## Summary of Results

Adding movement/different perspectives to the viewpoint used in the first study always resulted in improvements to percentage estimations. The log percentage error of these improved conditions is about 1.7, which is very close to the 1.5 log percentage error achieved in Cleveland and McGill's position-length type 1 task, which we modelled our study after. Furthermore, the effect of scale on percentage estimations was essentially eliminated when participants had the opportunity to view the visualization from far away and view the entire set of bars at once. Thus, it appears that relative distance tasks, like the percentage estimation we used here, are robust to the perceptual effects of VR if one can view the chart from different perspectives.

On the other hand, directly estimating heights was still problematic when movement and perspective were added. No condition improved people's height estimations, and larger scales were still more difficult. Thus, movement/perspective was not successful at improving people's ability to estimate heights.
## STUDY 3 - OTHER CHART TYPES

Having established that, at least for bar charts, estimating relative distances may be robust to the effects of distance compression in VR, we were interested in examining this effect in other chart types. To this end we ran a third study using bar charts, pie charts, and scatter plots (Figure 6). We used the sunset environment from Study 1 and the continuous teleport-anywhere movement from Study 2, as it was ranked the highest. Also, because the first two studies had confirmed that people are bad at estimating the size of skyscrapers, 9 of the 18 charts in each condition were sized so that all data fit directly ahead without needing to look up, and the other 9 were about twice as tall.

|
| 236 |
+
|
| 237 |
+
Figure 6. Chart types used in Study 3.
|
| 238 |
+
|
| 239 |
+
Bar Chart: As with the bar charts in the previous studies, participants were asked to make percentage estimations between two colored bars and to estimate the height of the smaller colored bar. Colored bars were always consecutive but their order was randomized.
Scatter plot: Instead of bars, this chart used spherical markers. The markers were spread out horizontally on the x-axis such that they were all between 0.5 m and 2.5 m apart; however, the colored markers were always consecutive and 1.5 m apart. Participants were asked to make percentage estimations between the colored markers with respect to height (y-axis) and to estimate the height of the shorter colored marker.
Pie Chart: This five-section pie chart had two colored segments and three distinct white sections. Colored segments were always in a random consecutive position and were assigned either dark or light purple randomly. As in Cleveland and McGill's [6] position-angle experiment, participants were asked to make percentage estimations between the colored segments. However, as a pie chart uses angles rather than heights to encode data, participants were instructed to estimate the angle of the smaller segment.
## Task

Tasks were generated similarly to Cleveland and McGill's [6] position-angle experiment. This meant that 5 numbers summing to 100 were generated, with percentage differences ranging from 10% to 97%. Participants completed 3 counterbalanced blocks, one for each chart type. Each block was introduced in a training mode where the researcher walked them through the exact questions used in this task. During the study tasks, participants were asked to teleport, walk around, or use the elevator at least once before giving their answers. They were asked to return to the center of the platform between each of the 54 tasks.
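One way such tasks could be generated is by rejection sampling; the paper states only the sum-to-100 and 10-97% constraints, so the uniform-cut method and rejection loop below are our assumptions:

```python
import random

def generate_pie_task(rng):
    """Generate five values summing to 100, where the two consecutive
    marked segments have a smaller/larger ratio between 10% and 97%,
    matching the range stated for the study. The sampling procedure
    itself is an assumption, not the authors' exact method."""
    while True:
        # Four uniform cuts split [0, 100] into five segment values.
        cuts = sorted(rng.uniform(0, 100) for _ in range(4))
        values = [b - a for a, b in zip([0] + cuts, cuts + [100])]
        i = rng.randrange(4)  # marked segments are i and i + 1
        small, large = sorted((values[i], values[i + 1]))
        if large > 0 and 0.10 <= small / large <= 0.97:
            return values, (i, i + 1)

# Seeding makes a task sequence reproducible across participants.
values, marked = generate_pie_task(random.Random(42))
```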
## Measures

The measures employed were similar to Studies 1 and 2, except that the final questionnaire asked participants to rank their performance with each chart.
Additionally, to compare participants' angle estimations to height estimations, we used a very similar formula to calculate the angle estimation error as a percentage of the actual angle they were estimating.

$$
\text{ErrorA} = \frac{\left|\text{angle}_{\text{guessed}} - \text{angle}_{\text{actual}}\right|}{\text{angle}_{\text{actual}}}
$$
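Since a pie segment's percentage maps linearly to its angle (1% of the pie spans 3.6°), the angle measure parallels the height measure directly (again a minimal sketch of the formula, not the authors' analysis code):

```python
def error_a(angle_guessed_deg, angle_actual_deg):
    # Absolute angle error as a fraction of the actual angle.
    return abs(angle_guessed_deg - angle_actual_deg) / angle_actual_deg

# A segment holding 25% of the pie spans 25 * 3.6 = 90 degrees;
# guessing 81 (or 99) degrees is a 10% angle error.
print(error_a(81, 90))  # 0.1
```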
## Participants

Study 3 had 10 participants. All participant details can be found in Table 1. The study took 40 minutes and participants were remunerated with a 25 CAD gift card.
## Results

We performed two (chart-type (3) x scale (2)) RM-ANOVAs. Overall measurement means and standard deviations can be found in Table 1 and charts are in Figure 7.
## Percentage Log Error

There was a main effect of chart-type (F(2,18) = 11.06, p < 0.001). Post-hoc tests showed that pie charts were significantly worse than all other conditions (p < 0.05). There was no effect of scale (p = 0.30) and no interaction effects.
## Height and Angle Error

There was a main effect of chart-type (F(2,18) = 11.39, p < 0.01). Post hoc tests showed that angle estimations had significantly less error than height estimations (p < 0.01). There was no effect of scale (p = 0.65) and no interaction effects.
## Subjective Rankings

The chart-types, sorted by mean rank, are: bar (1.4), pie (2.0), scatter (2.7).

|
| 280 |
+
|
| 281 |
+
Figure 7. Log percent error (left) and height estimation error (right) for each chart type in Study 3. (Note: error bars show standard error.)
|
| 282 |
+
|
| 283 |
+
## Summary of Results

When it comes to percentage estimations, our results mirror Cleveland and McGill's [6] results: people are better at lengths than they are at angles, by a factor of 2.1 (1.96 in the original study). This result suggests that, for percentage estimations, other perceptual tasks (e.g., area) should follow the same patterns in VR as in the original work.
However, when looking at angle/height estimations, participants were better at angles. This could be because one only needs to look at the innermost pie chart point to make this estimation (as opposed to looking at the bottom and top of a large bar chart), because all angles are bounded by 360 degrees (the maximum angle was 100 degrees in our study), or because the angles were relatively small. Future work should investigate this more closely, especially at larger and smaller scales.
## DISCUSSION

This work suggests that the distance compression problem that occurs in VR does alter one's perception of data visualizations. However, relative distance tasks, like the percentage estimation tasks in these three studies, appear to be robust to this distance compression, particularly when participants can reach a perspective where they can view the chart from far back.

## Design Guidelines

## VR is Good for Virtual Environments
In Study 1, VR had less error than the equivalent screen-based virtual environment for both percentage and height estimations. While VR may not be better in all circumstances (e.g., flat screen image), when the visualization exists inside a virtual environment, VR can be a good choice for immersive analytics.

## Use Movement Modes that Avoid Nausea
Unity's practitioner guidelines [49] recommend avoiding simulator sickness by designing to avoid vection. Vection occurs when the user's eyes report motion that their vestibular system does not sense, or vice versa. This happens most often in VR when the movement modality causes the player to experience motion in VR that their body does not (e.g., mapping movement in VR to a thumbstick on a controller), or when experienced bodily motion has no effect in VR (e.g., not updating the VR environment when the user turns their head).
Although we were not specifically investigating or avoiding nausea, we did find some of the techniques we used to be successful. We used a fade-in/out teleport mechanism and platforms that matched the safely walkable space in the real world in Study 2 and Study 3 to avoid vection. The elevator feature was unfortunately vection-inducing because it was activated with the touchpad instead of an equivalent bodily motion. We combated vection-related nausea by severely limiting the elevator speed and using an easing function to prevent sudden stops. A future implementation of an elevator might have 'floors' that can be accessed with a teleport-like fade effect.
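The speed limit and easing can be combined in a single height function. The sketch below is a hypothetical reconstruction: the smoothstep easing curve and the 1.5 m/s cap are assumed values, not the study's actual parameters:

```python
def smoothstep(t):
    """Cubic easing with zero slope at both endpoints, so the platform
    neither jerks into motion nor stops abruptly."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def elevator_height(start_m, target_m, elapsed_s, max_speed_mps=1.5):
    """Platform height elapsed_s seconds after a move is requested.
    Travel time is chosen so the average speed never exceeds
    max_speed_mps (an assumed cap)."""
    travel = abs(target_m - start_m)
    if travel == 0.0:
        return start_m
    duration = travel / max_speed_mps
    return start_m + (target_m - start_m) * smoothstep(elapsed_s / duration)

# Halfway through a 0 m -> 10 m ride the platform is at 5 m, and it
# settles exactly at the target once the duration has elapsed.
```

Because smoothstep's derivative is zero at t = 0 and t = 1, acceleration tapers at both ends of the ride, which is the property that avoids the sudden stops described above.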
## Encode Data with Relative Distances

One should not have any requirements or expectations that a user can estimate a distance in VR. In all three studies, participants underestimated heights on average by 33% (i.e., estimates were about 2/3 of the actual value). However, across all three studies, participants provided extremely low-ball heights 15% of the time, underestimating by more than 50%. Conversely, participants were much more accurate in the percentage estimation task, off only by 6-10% on average across all three studies.
Therefore, designers should encode data in ways that allow users to compare two distances/lengths rather than expecting them to measure a distance directly. In a simple graph like a bar chart or scatter plot, this can be as simple as providing a labelled axis immediately beside the data. Avoid, say, providing a scale for a map that requires one to estimate a distance (e.g., 1 inch = 1 mile). We also recommend that where a measurement of distance is necessary, one should provide a tool that can be used for comparison, like a ruler or a measuring tape.
## Consider Maximum Scale

It is important to consider the most extreme perspective that the user can navigate into, or rather, the maximum distance from themselves that a user might be asked to evaluate. In Study 1, participants were quite good at the personal and house scales, and predictably bad at the skyscraper scale. Therefore, we recommend creating situations where users are expected to evaluate distances of less than 15 m, though this is just a rough estimate based on the particular distances we used in the study. Future work is needed to provide a better guideline.
## Provide an Overview Perspective

In Study 1, larger scales meant higher error in both percentage and height estimations. However, when the ability to view the data from an overview perspective was added in Study 2, this effect disappeared for percentage estimations. If one does use large distances, or data that is far away from the user, providing a way for the user to get a quick overview of the relevant data can remove the negative effects of large distances by removing problems like foreshortening. It should be noted that in Study 2, the teleport-anywhere condition, where one could move one's view to an overview perspective, was not significantly better than the back-platform-only condition, which automatically provided an overview perspective. Therefore, it may be enough to simply provide a generated overview of a visualization, or one could provide freeform movement options like teleport-anywhere, depending on one's needs.
## Users Appreciate Additional Depth Cues

Given that every participant mentioned how helpful additional depth cues were in Study 1, despite no change in task performance, we recommend providing as many depth cues as possible to improve users' comfort and perceived competency with the task. This could mean simple things like adding texture to an object, or more complicated, fully developed, contextually relevant environments with relatively sized objects.
## CONCLUSION

In this paper we investigated how the phenomenon of distance compression alters perception of visualizations in VR. Through three studies that replicate foundational work on the perception of visualizations, we found that estimations of actual lengths, in this case the heights of bars in a bar chart, are negatively impacted by distance compression, but relative distances are not. Furthermore, as with traditional visualizations, people can better estimate relative lengths than relative angles, suggesting that much of the existing perceptual research on visualizations may still apply. Finally, we provide a set of design guidelines for designers wishing to develop VR visualizations that limit the negative effects of distance compression.
## REFERENCES
1. C. Armbrüster, M. Wolter, T. Kuhlen, W. Spijkers, and B. Fimm. 2008. Depth Perception in Virtual Reality: Distance Estimations in Peri- and Extrapersonal Space. CyberPsychology & Behavior 11, 1: 9-15. https://doi.org/10.1089/cpb.2007.9935
|
| 330 |
+
|
| 331 |
+
2. Sriram Karthik Badam, Arjun Srinivasan, Niklas Elmqvist, and John Stasko. Affordances of Input Modalities for Visual Data Exploration in Immersive Environments.
|
| 332 |
+
|
| 333 |
+
3. Scott Bateman, Regan L Mandryk, Carl Gutwin, Aaron Genest, David McDine, and Christopher Brooks. 2010. Useful junk?: the effects of visual embellishment on comprehension and memorability of charts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2573-2582.
|
| 334 |
+
|
| 335 |
+
4. R. Bennett, D. J. Zielinski, and R. Kopper. 2014. Comparison of interactive environments for the archaeological exploration of 3D landscape data. In 2014 IEEE VIS International Workshop on 3DVis (3DVis), 67-71.
|
| 336 |
+
|
| 337 |
+
https://doi.org/10.1109/3DVis.2014.7160103
|
| 338 |
+
|
| 339 |
+
5. Steve Bryson. 1996. Virtual Reality in Scientific Visualization. Commun. ACM 39, 5: 62-71. https://doi.org/10.1145/229459.229467
|
| 340 |
+
|
| 341 |
+
6. William S. Cleveland and Robert McGill. 1984. Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. Journal of the American Statistical Association 79, 387: 531-554. https://doi.org/10.1080/01621459.1984.10478080
|
| 342 |
+
|
| 343 |
+
7. Grégoire Cliquet, Matthieu Perreira, Fabien Picarougne, Yannick Prié, and Toinon Vigier. 2017. Towards HMD-based Immersive Analytics. In Immersive analytics Workshop, IEEE VIS 2017. Retrieved April 6, 2018 from https://hal.archives-ouvertes.fr/hal-01631306
|
| 344 |
+
|
| 345 |
+
8. Maxime Cordeil, Andrew Cunningham, Tim Dwyer, Bruce H. Thomas, and Kim Marriott. 2017. ImAxes: Immersive Axes As Embodied Affordances for Interactive Multivariate Data Visualisation. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17), 71-83. https://doi.org/10.1145/3126594.3126613
|
| 346 |
+
|
| 347 |
+
9. Sarah H Creem-Regehr, Peter Willemsen, Amy A Gooch, and William B Thompson. 2005. The Influence of Restricted Viewing Conditions on Egocentric Distance Perception: Implications for Real and Virtual Indoor Environments. Perception 34, 2: 191-204. https://doi.org/10.1068/p5144
|
| 348 |
+
|
| 349 |
+
10. James E. Cutting. 1997. How the eye measures reality and virtual reality. Behavior Research Methods, Instruments, & Computers 29, 1: 27-36. https://doi.org/10.3758/BF03200563
|
| 350 |
+
|
| 351 |
+
11. Henry Fuchs, Mark A. Livingston, Ramesh Raskar, D'nardo Colucci, Kurtis Keller, Andrei State, Jessica R. Crawford, Paul Rademacher, Samuel H. Drake, and Anthony A. Meyer. 1998. Augmented reality visualization for laparoscopic surgery. In Medical Image Computing and Computer-Assisted Intervention - MICCAI'98 (Lecture Notes in Computer Science), 934- 943. https://doi.org/10.1007/BFb0056282
|
| 352 |
+
|
| 353 |
+
12. Michael Glueck and Azam Khan. 2011. Considering multiscale scenes to elucidate problems encumbering three-dimensional intellection and navigation. AI EDAM 25, 4: 393-407.
|
| 354 |
+
|
| 355 |
+
https://doi.org/10.1017/S0890060411000230
|
| 356 |
+
|
| 357 |
+
13. Nicolas Greffard, Fabien Picarougne, and Pascale Kuntz. 2011. Visual Community Detection: An Evaluation of 2D, 3D Perspective and 3D Stereoscopic Displays. In Graph Drawing (Lecture Notes in Computer Science), 215-225. https://doi.org/10.1007/978-3-642-25878-7_21
|
| 358 |
+
|
| 359 |
+
14. Jeffrey Heer and Michael Bostock. 2010. Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10), 203-212. https://doi.org/10.1145/1753326.1753357
|
| 360 |
+
|
| 361 |
+
15. Carolin Helbig, Hans-Stefan Bauer, Karsten Rink, Volker Wulfmeyer, Michael Frank, and Olaf Kolditz. 2014. Concept and workflow for 3D visualization of atmospheric data in a virtual reality environment for analytical approaches. Environmental Earth Sciences 72, 10: 3767-3780. https://doi.org/10.1007/s12665-014-3136-6
16. Tung-Ju Hsieh, Yang-Lang Chang, and Bormin Huang. 2011. Visual analytics of terrestrial lidar data for cliff erosion assessment on large displays. In Satellite Data Compression, Communications, and Processing VII, 81570D. https://doi.org/10.1117/12.895437
17. Mark St John, Michael B. Cowen, Harvey S. Smallman, and Heather M. Oonk. 2001. The Use of 2D and 3D Displays for Shape-Understanding versus Relative-Position Tasks. Human Factors 43, 1: 79-98. https://doi.org/10.1518/001872001775992534
18. Mark St John, Harvey S. Smallman, Timothy E. Bank, and Michael B. Cowen. 2001. Tactical Routing Using Two-Dimensional and Three-Dimensional Views of Terrain. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 45, 18: 1409-1413. https://doi.org/10.1177/154193120104501820
19. Johnson JG Keiriz, Olusola Ajilore, Alex D Leow, and Angus G Forbes. Immersive Analytics for Clinical Neuroscience.
20. Robert S. Kennedy, Norman E. Lane, Kevin S. Berbaum, and Michael G. Lilienthal. 1993. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. The International Journal of Aviation Psychology 3, 3: 203-220. https://doi.org/10.1207/s15327108ijap0303_3
21. Tereza G. Kirner and Valéria F. Martins. 2000. Development of an Information Visualization Tool Using Virtual Reality. In Proceedings of the 2000 ACM Symposium on Applied Computing - Volume 2 (SAC ’00), 604-606. https://doi.org/10.1145/338407.338515
22. Joshua M. Knapp and Jack M. Loomis. 2004. Limited Field of View of Head-Mounted Displays Is Not the Cause of Distance Underestimation in Virtual Environments. Presence: Teleoperators and Virtual Environments 13, 5: 572-577. https://doi.org/10.1162/1054746042545238
23. Søren Knudsen and Sheelagh Carpendale. Multiple Views in Immersive Analytics.
24. Gregorij Kurillo and Maurizio Forte. 2012. Telearch-Integrated Visual Simulation Environment for Collaborative Virtual Archaeology. Mediterranean Archaeology and Archaeometry 12, 1: 11-20.
25. Jong Min Lee, James MacLachlan, and William A. Wallace. 1986. The Effects of 3D Imagery on Managerial Data Interpretation. MIS Quarterly 10, 3: 257-269. https://doi.org/10.2307/249259
26. Jack M. Loomis and Joshua M. Knapp. 2003. Visual Perception of Egocentric Distance in Real and Virtual Environments. In Virtual and Adaptive Environments: Applications, Implications, and Human Performance Issues. CRC Press, 21-46.
27. Arif Masrur, Jiayan Zhao, Jan Oliver Wallgrün, Peter LaFemina, and Alexander Klippel. Immersive Applications for Informal and Interactive Learning for Earth Sciences.
28. Ross Messing and Frank H. Durgin. 2005. Distance Perception and the Visual Horizon in Head-Mounted Displays. ACM Trans. Appl. Percept. 2, 3: 234-250. https://doi.org/10.1145/1077399.1077403
29. Laura Nelson, Dianne Cook, and Carolina Cruz-Neira. 1999. Xgobi vs the c2: Results of an experiment comparing data visualization in a 3-d immersive virtual reality environment with a 2-d workstation display. Computational Statistics 14, 1: 39-52.
30. Matthew Ready, Tim Dwyer, and Jason H Haga. Immersive Visualisation of Big Data for River Disaster Management.
31. Jun Rekimoto and Mark Green. 1993. The information cube: Using transparency in 3D information visualization. In Proceedings of the Third Annual Workshop on Information Technologies & Systems (WITS'93), 125-132.
32. W. Ribarsky, J. Bolter, A. Op den Bosch, and R. van Teylingen. 1994. Visualization and analysis using virtual reality. IEEE Computer Graphics and Applications 14, 1: 10-12. https://doi.org/10.1109/38.250911
33. Adam R. Richardson and David Waller. 2005. The effect of feedback training on distance estimation in virtual environments. Applied Cognitive Psychology 19, 8: 1089-1108. https://doi.org/10.1002/acp.1140
34. Cynthia S. Sahm, Sarah H. Creem-Regehr, William B. Thompson, and Peter Willemsen. 2005. Throwing Versus Walking As Indicators of Distance Perception in Similar Real and Virtual Environments. ACM Trans. Appl. Percept. 2, 1: 35-45. https://doi.org/10.1145/1048687.1048690
35. M. Sedlmair, T. Munzner, and M. Tory. 2013. Empirical Guidance on Scatterplot and Dimension Reduction Technique Choices. IEEE Transactions on Visualization and Computer Graphics 19, 12: 2634-2643. https://doi.org/10.1109/TVCG.2013.153
36. B. Shneiderman. 2003. Why not make interfaces better than 3D reality? IEEE Computer Graphics and Applications 23, 6: 12-15.
37. Mark Simpson, Jiayan Zhao, and Alexander Klippel. Take a Walk: Evaluating Movement Types for Data Visualization in Immersive Virtual Reality.
38. N. G. Smith, K. Knabb, C. DeFanti, P. Weber, J. Schulze, A. Prudhomme, F. Kuester, T. E. Levy, and T. A. DeFanti. 2013. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments. In 2013 Digital Heritage International Congress (DigitalHeritage), 363-370. https://doi.org/10.1109/DigitalHeritage.2013.6743761
39. J. E. Swan, A. Jones, E. Kolstad, M. A. Livingston, and H. S. Smallman. 2007. Egocentric depth judgments in optical, see-through augmented reality. IEEE Transactions on Visualization and Computer Graphics 13, 3: 429-442. https://doi.org/10.1109/TVCG.2007.1035
40. William B. Thompson, Peter Willemsen, Amy A. Gooch, Sarah H. Creem-Regehr, Jack M. Loomis, and Andrew C. Beall. 2004. Does the Quality of the Computer Graphics Matter when Judging Distances in Visually Immersive Environments? Presence: Teleoperators and Virtual Environments 13, 5: 560-571. https://doi.org/10.1162/1054746042545292
41. C. Ware and G. Franck. 1994. Viewing a graph in a virtual reality display is three times as good as a 2D diagram. In Proceedings of 1994 IEEE Symposium on Visual Languages, 182-183. https://doi.org/10.1109/VL.1994.363621
42. Colin Ware and Peter Mitchell. 2005. Reevaluating Stereo and Motion Cues for Visualizing Graphs in Three Dimensions. In Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization (APGV '05), 51-58. https://doi.org/10.1145/1080402.1080411
43. S. J Westerman and T Cribbin. 2000. Mapping semantic information in virtual space: dimensions, variance and individual differences. International Journal of Human-Computer Studies 53, 5: 765-787. https://doi.org/10.1006/ijhc.2000.0417
44. Christopher D. Wickens, David H. Merwin, and Emilie L. Lin. 1994. Implications of Graphics Enhancements for the Visualization of Scientific Data: Dimensional Integrality, Stereopsis, Motion, and Mesh. Human Factors 36, 1: 44-61. https://doi.org/10.1177/001872089403600103
45. Peter Willemsen, Mark B. Colton, Sarah H. Creem-Regehr, and William B. Thompson. 2004. The effects of head-mounted display mechanics on distance judgments in virtual environments. In 1st Symposium on Applied Perception in Graphics and Visualization, APGV 2004, 35-38. Retrieved September 20, 2018 from https://utah.pure.elsevier.com/en/publications/the-effects-of-head-mounted-display-mechanics-on-distance-judgmen
46. Jessica K Witt, Dennis R Proffitt, and William Epstein. 2004. Perceiving Distance: A Role of Effort and Intent. Perception 33, 5: 577-590. https://doi.org/10.1068/p5090
47. S. Zhang, C. Demiralp, D. F. Keefe, M. DaSilva, D. H. Laidlaw, B. D. Greenberg, P. J. Basser, C. Pierpaoli, E. A. Chiocca, and T. S. Deisboeck. 2001. An immersive virtual environment for DT-MRI volume visualization applications: a case study. In Visualization, 2001. VIS '01. Proceedings, 437-584. https://doi.org/10.1109/VISUAL.2001.964545
48. Daniel Zielasko, Martin Bellgardt, Alexander Meißner, Maliheh Haghgoo, Bernd Hentschel, Benjamin Weyers, and Torsten W Kuhlen. buenoSDIAs: Supporting Desktop Immersive Analytics While Actively Preventing Cybersickness.
49. Unity Tutorial: Movement in VR. Unity. Retrieved September 20, 2018 from https://unity3d.com/learn/tutorials/topics/virtual-reality/movement-vr
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/IW70F9A__z/Initial_manuscript_tex/Initial_manuscript.tex
§ HOW TALL IS THAT BAR CHART? VIRTUAL REALITY, DISTANCE COMPRESSION AND VISUALIZATIONS
1st Author Name
Affiliation
City, Country
e-mail address

2nd Author Name
Affiliation
City, Country
e-mail address

3rd Author Name
Affiliation
City, Country
e-mail address

Figure 1. The virtual environments used in Study 1, each with differing levels of depth cues. Participants could look around with the HMD in VR and used the mouse to look around in the screen virtual environment conditions.
§ ABSTRACT

As VR technology becomes more available, VR applications will increasingly be used to present information visualizations. While data visualization in VR is an interesting topic, there remain questions about how effective or accurate such visualizations can be. One known phenomenon in VR environments is that people tend to unconsciously compress or underestimate distances. However, it is unknown if or how this effect alters the perception of data visualizations in VR. To this end, we replicate portions of Cleveland and McGill's foundational perceptual visualization studies in VR. Through a series of three studies, we find that distance compression does negatively affect estimations of actual lengths (heights of bars), but does not appear to impact relative comparisons. Additionally, by replicating the position-angle experiments, we find that (as with traditional 2D visualizations) people are better at relative length evaluations than relative angles. Finally, drawing on these findings, we develop a series of best practices for performing data visualization in a VR environment.
§ AUTHOR KEYWORDS

Visualization; VR.

§ ACM CLASSIFICATION KEYWORDS

H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous
§ INTRODUCTION

As Virtual Reality (VR) technology continues to be developed and expanded, workplace tasks such as viewing information visualizations are becoming more likely to be executed in a VR environment. While much of the research around traditional screen-based visualizations likely applies to VR, it is unclear how specific VR-related phenomena might alter how effective or accurate these visualizations are. Of particular note, it has been shown that in VR environments people tend to unconsciously underestimate distances, a phenomenon called distance compression [1,9,10,22,26,28,33,34,39,40,45]. However, to this point, designers of VR visualizations have not had any guidance about how distance compression will alter visualization effectiveness or user accuracy. For example, even a simple bar chart uses the heights/lengths of the bars to represent data, and it is unclear how distance compression will alter one's ability to measure or compare the lengths of the bars.
To solve this problem, we looked to foundational work by Cleveland and McGill on graphical perception of paper-based visualizations [6], which has also been replicated in a digital context [14]. We performed three studies, replicating the position-length and the position-angle experiments in a VR environment. Our first study, using the bar chart position-length experiment, presented bar charts in virtual environments both on the screen and in VR and asked participants to estimate actual distances (i.e., the bar is 1 m tall) and relative distances (i.e., bar A is 80% as tall as bar B). We explored the suggestion that varying degrees of depth cues could reduce distance compression [10,12], as well as bar charts of varying scales. In a second study, rather than depth cues, we looked at perspective, providing participants different ways to move around and look at the variously scaled charts in VR. Finally, in the last experiment, we implemented the position-angle experiment, looking at how actual lengths/angles and relative lengths/angles were estimated in bar charts, scatter plots, and pie charts.
Through these studies, we make five contributions. First, we confirm the existence of distance compression in VR visualizations, but observe that it also applies similarly to screen-based environments. We show that distance compression does negatively affect actual length/distance estimations, but may have only a small or negligible effect on relative comparisons. Depth cues had no discernible effect on the accuracy of estimations, but do appear to affect one's perception of one's ability to be accurate. As in traditional 2D visualizations, people are better at relative length evaluations than relative angles. Finally, we provide a set of design guidelines to inform the implementation and creation of effective and useful visualizations in VR.
§ RELATED WORK

§ VR AND VISUALIZATION
VR visualizations fall under the field of 'Immersive Analytics', though this term also covers augmented reality (AR) and mixed reality (MR) visualizations [7]. In the late 90s, people were beginning to talk about VR and visualization, even though systems of the day (mostly VR caves) did not quite provide adequate capabilities [5,21,31,32,36]. More recent work provides concrete examples in the environmental [15,16,27,30], medical [11,19,47], and archaeology [4,24,38] domains. A notable example is ImAxes [8], a dynamic system where users draw and connect axes in midair in VR, allowing for multiple dynamic chart types.
When interacting with a VR visualization, one should keep in mind that multiple views and input modalities may not transfer from the screen to VR [23]. Furthermore, the affordances of an interaction may be different in VR [2]. For example, Simpson et al. found that walking around the dataset in VR was not better than using a controller to rotate it [37]. Cybersickness must also be considered, as certain design choices, such as using a controller to move around, may work on a screen but quickly induce nausea in VR [48,49]. There may be many ways to combat this; for example, Cliquet et al. suggest allowing the user to sit [7].
§ VISUALIZATION AND PERCEPTION
Cleveland and McGill's foundational work [6] showed that people are better at estimating lengths than areas, and better at estimating areas than volumes. Furthermore, people are better at position estimations (e.g., a scatter plot) than angle estimations (e.g., a pie chart). These results have been confirmed and replicated more recently by Heer and Bostock in a Mechanical Turk-based study [14].
§ 2D VS 3D
The usefulness and effectiveness of 2D visualizations have often been compared to 3D. 2D is generally considered the best approach [35,43], especially for tasks that require precision [18] and tasks that suffer from perspective distortion (e.g., distance estimation) [17]. However, 3D visualizations can still be useful, particularly when the data has a high level of detail, structure, and/or complexity (e.g., 3+ dimensions) [17,18,25,44] or when the task involves exploring 3D representations of the real world (e.g., terrain or other real-world objects) [17,18]. 3D may also prove useful by providing ways to explore overlap in network graphs [13,41,42].
In VR, most visualizations can now be considered 3D visualizations - even though they might be mapped onto a plane, the ability to look at them from multiple angles can cause occlusion or other problems of perspective. However, we bring up the debate between 2D and 3D graphs because there is some indication that the binocular depth cues provided by modern VR tip the equation in favor of 3D in some situations. When considering only scatterplots, for example, providing binocular cues tends to make 3D visualizations win over 2D [25,29,44], although this is not always the case [35,43].
§ VR AND PERCEPTION
People tend to underestimate distances in VR when using a Head Mounted Display (HMD) [1,9,10,26,28,33,34,39,40,45]. This effect exists even when the VR environment is very similar to, or a recording of, the real world [28,34,40], and may be partially caused by the limited field of view of the HMD [9,22]. Physical factors, such as the weight of the HMD, may also be important [45], especially as the effort perceived to be necessary to walk a particular distance (e.g., if one were wearing a heavy backpack) can affect the perception of that distance [46]. The parameters that affect distance compression have been investigated but are not fully understood. Furthermore, it is unclear how distance compression may affect visualization tasks like comparing two bars in a bar chart.
One possible cause might be a lack of realistic depth cues [10,12] in VR. Some of these cues, such as light, texture, shape, luminance, linearity of light, object occlusion, and motion, can be manipulated to be more or less available in a virtual environment. Unlike a 2D screen, VR headsets provide binocular cues because they render a separate image for each eye (although at least one study has suggested that monocular/binocular cues alone are not responsible for distance compression [9]). In fact, current VR headsets, such as the HTC Vive, allow most depth cues that are available in the real world to be implemented in VR [10]. Our first study looks at how the fidelity of depth cues might change distance compression when looking at a visualization.
It has also been suggested that distance compression might be affected by perceptually different distance zones. Armbrüster et al. suggest that distance compression is smaller for objects in peripersonal space (< 1 m), where an object is within arm's reach [1]. Cutting calls this zone personal space (< 1.5 m), splitting up larger distances into action space (< 30 m, where interaction of some sort is feasible) and vista space (30 m+, further than one would expect to be able to act) [10]. To further investigate these categories of distance, the first two studies use three different sizes of charts, roughly corresponding to personal, action, and vista bar heights.
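These zone boundaries amount to simple range checks. The sketch below is our own illustration (the function name and return labels are not from the paper); the thresholds follow Cutting's categories as described above:

```python
def distance_zone(distance_m: float) -> str:
    """Classify a viewing distance into Cutting's perceptual zones [10].

    Thresholds as described above: personal space below ~1.5 m, action
    space up to ~30 m, and vista space beyond that.
    """
    if distance_m < 1.5:
        return "personal"
    if distance_m < 30.0:
        return "action"
    return "vista"
```

For instance, the 4 m viewing distance used in Study 1 would fall into action space under this classification.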
§ STUDY 1 - DEPTH CUES AND SCALE
The chronic underestimation of distances is troublesome for VR, particularly because visualizations tend to encode data using absolute or relative distances. Even a simple bar chart uses length to communicate data to the viewer.
Our first study aimed to confirm that visualizations in VR are compromised by distance underestimation. Since it has been suggested that more depth cues [10,12] could lessen underestimation, we designed three virtual environments with differing levels of depth cues.
Few Depth Cues: Objects had a consistent luminance and no texture. Shadows were disabled and the sky was a medium gray. A simple textured floor was provided, as floating over a void was nauseating in VR. This condition represented a simple chart without embellishments.
Some Depth Cues: Bars now had a slightly crumpled paper texture and responded to the lights in the scene, casting shadows. The floor contained an arbitrary grid, and was also textured slightly. Aerial perspective was applied, allowing distant objects to fade into the sunset sky somewhat. We consider this a 'best practices' chart, with minimal added embellishments, all of which directly contribute depth cues to the environment.
Rich Depth Cues: In addition to texture, luminance, and aerial perspective, the scene was augmented with objects that could be used to determine relative sizes. Trees, a light post, and some bushes provided general cues about scale. A house, car, and park bench were also in the scene, as these have relatively standard sizes. Similarly, a skyscraper was in the scene, as the floors of a building also have a relatively standard size. These objects were not immediately in view; the participant would have to look at them directly. Grass and flowers on the ground provided cues of relative density. This condition, while being very rich with depth cues, was also a bit extreme; one can imagine that not every visualization has a place for trees, cars, and buildings (Figure 1). However, Bateman et al. [3] showed that embellishments that add context to a visualization can improve memorability, and given the prevalence of infographics, it is not impossible that some visualizations might provide relevant, contextual objects (e.g., a visualization about deforestation could contain trees).
We were also interested in measuring this effect at multiple scales. The corresponding chart heights and task-specific bar lengths can be seen in Table 1. In all conditions, the participant viewed the visualization from 4 m back.
Personal scale: The entire visualization could be seen at one time when looking straight ahead without looking significantly up or down.
House Scale: The larger visualization required the participant to look up somewhat to see the entire chart.
Skyscraper Scale: The visualization was extremely tall, requiring the viewer to tilt their head back and look far up. While a bar chart as tall as a skyscraper is unlikely to be very useful, we included this scale because, for very large or complex visualizations, it is possible that a user could navigate to a view where some of the data is very far away.
Finally, we compared VR with an on-screen condition which featured virtual environments with the same scales and depth cues (without, of course, the binocular depth cues provided by the HMD). This gave us a scale (3) by depth-cues (3) by screen/VR (2) factorial study. We also added a real-world condition, with a simple bar chart displayed on a monitor. This provided a baseline, as it was similar to the foundational work done by Cleveland and McGill [6].
§ TASK
Cleveland and McGill [6] provide several tasks for evaluating the perception of lengths in a visualization. We chose to mimic their position-length experiment, specifically using their Type-1 task (as this had the lowest error). This task presents a 5-value bar chart with two side-by-side bars marked with a dot, whose percentage differences range from 18% to 83%. The participant is asked to evaluate, without explicitly measuring, what percentage the smaller bar is of the larger. Our task mimics theirs, down to the way they chose relevant values for the bars in the task, except that we used a single bar chart instead of two side-by-side bar charts. We also colored the bars of interest, because a dot at the bottom would be insufficient for differentiation when looking up at the skyscraper scale.
Participants completed 7 blocks (depth-cues (3) x screen/VR (2) + 1 real-world), counterbalanced with a Latin square design. In each block they viewed 18 bar charts (126 total), 6 of each scale, in random order, except in the real-world condition, where all 18 bar charts fit on the screen at 30 cm tall. Using [6]'s template as a guide, we randomly generated sets of 6 tasks, each set containing two tasks with a percentage difference between 10% and 40%, two between 40% and 60%, and two between 60% and 90%. For every bar chart, participants were asked to specify the percentage the smaller bar was of the larger, and the absolute height of the smaller bar in the virtual environment (or the real-world height on the screen, in the real-world block). While relative comparisons (bar compared to axis) are indeed the more common visualization task, we also asked about actual heights, as this was relevant to the distance compression literature and is a relevant visualization task in some situations (e.g., a terrain map where 1 m = 1 km in the real world). Participants were told not to walk around in VR, but could look around in any direction. In the on-screen virtual environment, participants could not move, but could look around using the mouse. Their 'virtual head' was placed at the same height as their real head had been in VR.
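The set-generation procedure can be sketched as follows. This is our own illustrative reconstruction, not the authors' code (function names and the use of Python's random module are assumptions): each set of 6 tasks draws two percentage differences from each difficulty band, and each block shows six tasks at each of the three scales.

```python
import random

def generate_task_set(rng: random.Random) -> list[float]:
    # Two tasks per band, per the procedure described above:
    # 10-40%, 40-60%, and 60-90% percentage differences.
    bands = [(10, 40), (10, 40), (40, 60), (40, 60), (60, 90), (60, 90)]
    tasks = [rng.uniform(lo, hi) for lo, hi in bands]
    rng.shuffle(tasks)  # present the six tasks in random order
    return tasks

def generate_block(rng: random.Random) -> list[tuple[str, float]]:
    # One 18-chart block: six tasks at each of the three scales,
    # shown in random order.
    charts = [(scale, task)
              for scale in ("personal", "house", "skyscraper")
              for task in generate_task_set(rng)]
    rng.shuffle(charts)
    return charts
```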
As in the original position-length experiment [6], participants were instructed not to explicitly measure (e.g., using a finger) or explicitly calculate distances. Also, because all participants in the pilot study expressed that the task was too hard and that they had no confidence in their estimations, we included an instruction that the task was supposed to be hard and that they should not feel discouraged.
§ MEASURES
Before the study, participants filled out a demographics questionnaire asking about VR and game experience. Participants responded verbally when asked for percentages and heights. Answers were recorded and later merged with the logged study data. After the study, they filled out a simulator sickness questionnaire (SSQ) [20] and a questionnaire asking whether they thought they performed better when estimating percentages or heights, which depth-cues virtual environment they thought they performed best in, and whether they were better on the screen or in VR; finally, they were asked to rate each of the 7 blocks in terms of how well they thought they performed.
Like in the original position-length experiment [6], we took the log error of the percentage estimations:

$$
\text{LogErrorP} = \log_2\left( \left| \text{percent}_{\text{guessed}} - \text{percent}_{\text{actual}} \right| + \frac{1}{8} \right)
$$
For the height estimations, we calculated the error as a percentage of the actual height being estimated. This means that if the bar was 20 m tall and the participant guessed either 18 m or 22 m, they had a height error of 10%.

$$
\text{ErrorH} = \frac{\left| \text{height}_{\text{guessed}} - \text{height}_{\text{actual}} \right|}{\text{height}_{\text{actual}}}
$$
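The two error measures translate directly into code. A minimal sketch (the function names are our own):

```python
import math

def log_error_p(percent_guessed: float, percent_actual: float) -> float:
    # Cleveland & McGill's log-absolute-error [6]; the 1/8 term keeps
    # the logarithm defined when a guess is exactly right.
    return math.log2(abs(percent_guessed - percent_actual) + 1 / 8)

def error_h(height_guessed: float, height_actual: float) -> float:
    # Height error as a fraction of the true height.
    return abs(height_guessed - height_actual) / height_actual
```

Guessing either 18 m or 22 m for a 20 m bar yields an error_h of 0.10, matching the worked example above.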
§ PARTICIPANTS
We recruited 18 participants for Study 1; one was removed for not following the instructions consistently, and one was removed as an outlier with results more than 2 standard deviations from the mean, resulting in data for 16 participants. Details about the participants can be found in Table 1. Participants were remunerated with a 25 CAD gift card.
§ RESULTS
Results were analyzed with two (depth-cues (3) x screen/VR (2) x scale (3)) RM-ANOVAs (the real-world condition was not part of these RM-ANOVAs, instead providing a sanity check that our participants performed similarly to [6]). Overall measurement means and standard deviations can be found in Table 1, and charts are shown in Figure 2.
Figure 2. Log percent error (left) and height estimation error (right) for screen/VR (top row), scale (middle row), and level of depth cues (bottom row) used in Study 1. (Note: Error bars show standard error.)
§ PERCENTAGE LOG ERROR
There was a main effect of screen/VR (F(1,15) = 27.63, p < ${0.01})$ . When given the same virtual environments on the screen and in VR, participants had less error in VR (Figure 2). There was no main effect of depth-cues (p=.53). There was a main effect of scale $\left( {\mathrm{F}\left( {2,{30}}\right) = {185},\mathrm{p} < {0.01}}\right)$ (Figure 2). People were significantly worse at larger distances.
|
| 138 |
+
|
| 139 |
+
There was an interaction effect of screen/VR x scale (F(2,30) $= {32.49},\mathrm{p} < {0.01})$ . At the personal scale, screen-based virtual environments has less error than VR, but this reversed at the larger scales.
|
| 140 |
+
|
| 141 |
+
The real-world condition had a mean log percent error of 1.7 (SD: 0.94), which is slightly higher, but very similar to value achieved by the original paper based task [6] which was 1.5.
|
| 142 |
+
|
| 143 |
+
§ HEIGHT ERROR

As expected, 77.5% of all height evaluations were underestimations. There was a main effect of screen/VR (F(1,15) = 5.80, p < 0.05), with participants having less error in VR. There was a main effect of scale (F(2,30) = 5.403, p < 0.05), with higher error at larger scales. There was no main effect of depth-cues (p = .15) and no interaction effects.

The real-world condition had a mean height error of 20% (SD: 16%).
§ SUBJECTIVE RANKINGS

Participants indicated whether they were better in VR (62%), the screen-based virtual environment (19%), or performed equally well on both (19%). Most participants indicated that they performed best in the rich virtual environment (81%), while a few indicated they were best in the some depth cues virtual environment (19%). The 7 blocks (depth cues (3) x screen/VR (2) + 1 real-world), sorted by mean participant rank, are: rich/VR (1.6), some/VR (3.0), rich/screen (3.1), some/screen (4.5), real-world (4.7), few/VR (4.9), few/screen (6.1). Although we did not formally record participants with audio or video, we tried to take notes when they commented on the helpfulness (or lack of helpfulness) of a virtual environment during or after the study. Most participants commented that the task was very hard, particularly with heights (e.g., "I don't think I am very good at this", "I have no idea how tall that is"). Every single participant commented at least once about some facet of the rich depth cue conditions as helpful (e.g., "The trees help", "I like the building, I can count the floors", "How big is a house, that chart is as big as a house").
§ SUMMARY OF RESULTS

In general, people were quite good at evaluating percentages, but poor at evaluating heights. They were better in VR, perhaps due to binocular depth cues, or due to physical sensations such as tilting one's head back to look up. Scale was, as expected, important, with larger distances resulting in more error in both measurements. Depth cues did not seem to be influential in either height or percentage evaluations. This first study confirmed that distance underestimation occurs in a visualization context, in this case bar charts, in VR. However, the results suggest that percentage estimations are not nearly as negatively affected as height estimations. Participants were off by 9.7% on average, which is very small when considering that most answers were given as a multiple of five, introducing an expected error of 2.5%. Furthermore, VR had less error than the equivalent screen-based virtual environments, meaning that VR might be a better option than a screen-based visualization where one needs to look around, at least in some situations.

Subjectively, people felt they were more accurate at percentages and better in VR. However, even though we did not find a significant difference between the different depth-cue conditions, people collectively felt that they were more accurate in the rich depth cue conditions. This is interesting because it means that while depth cues might be less impactful on task performance, they do seem to be impactful on user comfort and users' perception of their own competency.
§ STUDY 2 - MULTIPLE PERSPECTIVES

In this study we were interested in how perspective and motion would change perception of distances. In particular, the fixed position near the base of the bar chart used in Study 1 meant that for the larger scales of charts, users would experience significant perspective issues such as foreshortening.

Other than providing movement and the ability to take a new perspective, we also made a few changes to Study 1. Since we found no effect of depth-cues on task performance, we fixed this factor at our sunset-like, some depth cues virtual environment, which we consider a reasonable best practice. Despite the preference of participants for the park-like rich depth cue environment, we acknowledge that complex context-rich settings full of trees and buildings may not be universally suitable for all visualizations. The scales we used in this study did not change; however, since we were interested in letting participants take perspectives that were possibly far away, we made our bar charts 1.5x as wide to be more visible from a distance. This meant that we moved the front and center starting position back 1.5 m such that at the personal scale the participant would still see the whole chart. We then calculated, using this front location, two other fixed positions (15 m to the left & right), and a variable back position such that the view was elevated and far enough away that both colored bars could be seen in their entirety. This back position was different in every bar chart and provided the participant with a perspective that did not require them to look up or down and removed foreshortening effects.

Figure 3. Front platform near personal scale bar chart. Participants viewed the world from the center of the platform. In some conditions platforms functioned as elevators.
Unlike the first study, where participants stood on the ground, here participants stood on virtual platforms (Figure 3). These hexagonal platforms featured transparent glass railings, interaction instructions, and lights that lit up when the participant stood on a centrally located pressure plate. (Participants were asked to return to the center between tasks.) The width of the platform (about 2 m) corresponded to the maximum walkable space as calibrated by the HTC Vive. This meant that participants could always walk around on a platform and even lean over the railings safely. Besides being functional in terms of movement, the consistently sized platforms could be used for relative sizing, like the objects in the rich depth cues virtual environment.

Participants were always on a platform; however, we provided the following movement modes.

Front Platform Only: Like the naïve perspective chosen in the first study, the participant was fixed at the front and bottom of the chart. The participant could not teleport to a new location or move the platform up or down. However, unlike the first study, participants could walk around on the platform.
Table 1: Study Details

| | Study 1 | Study 2 | Study 3 |
| --- | --- | --- | --- |
| Task Type | Position-Length [4] | Position-Length [4] | Position-Angle [4] |
| Scale (Personal): height | 3m | 3m | Bar/Scatter: 13m; Pie: 5m |
| Scale (Personal): task bar height | <1.7m | <1.7m | Bar/Scatter: <5m; Pie: - |
| Scale (House): height | 30m | 30m | Bar/Scatter: 30m; Pie: 12m |
| Scale (House): task bar height | <17m | <17m | Bar/Scatter: <12m; Pie: - |
| Scale (Skyscraper): height | 180m | 180m | - |
| Scale (Skyscraper): task bar height | <100m | <100m | - |
| Participants: age | M: 35.0, SD: 10.7 | M: 34.5, SD: 11.0 | M: 28.5, SD: 4.3 |
| Participants: N | 16 (4 female) | 10 (2 female) | 10 (3 female) |
| Participants: measurement | 4 used metric | 5 used metric | 6 used metric |
| Participants: VR experience | 11 tried before, 3 very familiar, 2 experts | 3 no experience, 2 tried before, 5 very familiar | 7 tried before, 2 very familiar, 1 expert |
| Measures: Log Percent Error | M: 2.67, SD: 1.37 | M: 1.96, SD: 1.18 | M: 1.85, SD: 1.67 |
| Measures: Height Error | M: 32%, SD: 23% | M: 34%, SD: 22% | M: 34%, SD: 27% |
| Measures: Angle Error | - | - | M: 20%, SD: 16% |
| Measures: Simulator Sickness | M: 5.1, SD: 4.5 | M: 10.1, SD: 6.2 | M: 6.2, SD: 4.8 |
| Measures: Performed Better At | percents (88%), heights (0%), both (12%) | percents (90%), heights (0%), both (10%) | percents (50%), heights (10%), both (40%) |
Back Platform Only: The participant started on a variable location back platform and could walk around but not teleport or use the platform elevator.
Teleport Anywhere: Participants started at the front location and could teleport/move the platform they were on by pressing the trigger, aiming a visible arc pointer at a valid location (marked by a repeating blue pattern), and then releasing the trigger. Participants could teleport anywhere in a 100 m square centered on the chart. Participants could not move so close to the chart that they intersected it (the invalid area was marked in red). The elevator platform could be moved up and down using a diegetic interface on the touchpad.

Teleport 4 Platforms: Participants could teleport to the front, left, and right fixed-location elevator platforms as well as the variable-location back platform. Additional platforms were only shown when teleporting so they did not block the chart. Platforms were selected by aiming the arc pointer directly at, near, or in the general direction of a platform.

Teleport Front/Back: Participants could teleport as in the Teleport 4 Platforms condition, but could only access the front and back elevator platforms.
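As an illustration, the valid-destination test for the Teleport Anywhere mode could be implemented along these lines; the exact chart footprint dimensions are our assumptions, not values reported in the paper.

```python
def is_valid_teleport(x: float, z: float,
                      area_half_width: float = 50.0,   # half of the 100 m square
                      chart_half_width: float = 5.0,   # assumed chart footprint
                      chart_half_depth: float = 2.0) -> bool:
    """Accept a teleport destination on the ground plane.

    The destination must lie inside the 100 m square centered on the
    chart (origin), but outside the chart's own footprint so the
    platform never intersects the bars."""
    inside_area = abs(x) <= area_half_width and abs(z) <= area_half_width
    inside_chart = abs(x) <= chart_half_width and abs(z) <= chart_half_depth
    return inside_area and not inside_chart
```

A point well inside the square but away from the chart is accepted; the chart footprint and anything beyond the square are rejected (and would be rendered red/unmarked respectively).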
Figure 4. Types of movement allowed in Study 2.
§ TASK

Tasks were generated in the same way as in Study 1. Participants completed 5 counterbalanced blocks, one for each movement type. Each block was introduced in a training mode where participants could try out the relevant movement interactions. During the study tasks, participants were asked to teleport at least once before giving their answers (if applicable). They were asked to return to the center of the platform between each of the 90 tasks.
§ MEASURES

The measures employed were similar to Study 1, except that the final questionnaire asked participants to rank their performance with the 5 movement types.

§ PARTICIPANTS

Study 2 had 11 participants; one was excluded as a clear outlier (more than two standard deviations from the mean). All participant details can be found in Table 1. The study took one hour and participants were remunerated with a 25 CAD gift card.
§ RESULTS

We performed two (movement-type (5) x scale (3)) RM-ANOVAs. Overall measurement means and standard deviations are in Table 1, and charts are in Figure 5.

Figure 5. Log percent error (left) and height estimation error (right) for each movement type (top row) and scale (bottom row) used in Study 2. (Note: error bars show standard error.)
§ PERCENTAGE LOG ERROR

There was a main effect of movement-type (F(4,36) = 51.6, p < 0.001). Post-hoc tests showed that Front Platform Only was significantly worse than all other conditions. There was no effect of scale (p = 0.12) and no interaction effects. Overall measure averages can be found in Table 1.

§ HEIGHT ERROR

There was no main effect of movement-type (p = .31) but there was an effect of scale (F(2,18) = 6.7, p < 0.01). Post hoc tests showed that the largest scale led to significantly higher error than the smallest and medium scales (p < 0.05).
§ SUBJECTIVE RANKINGS

The movement-types, sorted by mean rank, are: continuous-teleport (1.8), teleport-front/back (2.0), teleport-4-platforms (2.1), back-platform-only (4.1), front-platform-only (4.7).

§ SUMMARY OF RESULTS

Adding movement/different perspectives to the viewpoint used in the first study always resulted in improvements to percentage estimations. The log percentage error of these improved conditions is about 1.7, which is very close to the 1.5 log percentage error achieved by Cleveland and McGill's position-length type 1 task, which we modelled our study after. Furthermore, the effect of scale on percentage estimations was essentially eliminated when participants had the opportunity to view the visualization from far away and view the entire set of bars at once. Thus, it appears that relative distance tasks, like the percentage estimation we used here, are robust to the perceptual effects of VR if one can view the chart from different perspectives.

On the other hand, directly estimating the height was still problematic when movement and perspective were added. No condition improved people's height estimations, and larger scales were still more difficult. Thus movement/perspective was not successful at improving people's ability to estimate heights.
§ STUDY 3 - OTHER CHART TYPES

Having established that, at least for bar charts, estimating relative distances may be robust to the effects of distance compression in VR, we were interested in examining this effect in other chart types. To this end we ran a third study using bar charts, pie charts and scatter plots (Figure 6). We used the sunset environment from Study 1 and the continuous teleport movement from Study 2, as it was ranked the highest. Also, because the first two studies had confirmed that people are bad at estimating the size of skyscrapers, we had 9 of the 18 charts in each condition fit all data directly ahead without needing to look up, and the other 9 about twice as tall.

Figure 6. Chart types used in Study 3.
Bar Chart: Like the bar chart in the previous studies, participants were asked to make percentage estimations between two colored bars and to estimate the height of the smaller colored bar. Colored bars were always consecutive but their order was randomized.

Scatter plot: Instead of bars, this chart used spherical markers. The markers were spread out horizontally on the x-axis such that they were all between 0.5 m and 2.5 m apart; however, the colored markers were always consecutive and 1.5 m apart. Participants were asked to make percentage estimations between the colored markers with respect to height (y-axis) and to estimate the height of the shorter colored marker.

Pie Chart: This five-section pie chart had two colored segments and three distinct white sections. Colored segments were always in a random consecutive position and were assigned either dark or light purple randomly. Like Cleveland and McGill's [6] position-angle experiment, participants were asked to make percentage estimations between the colored segments. However, as a pie chart uses angles rather than heights to encode data, participants were instructed to estimate the angle of the smaller segment.
§ TASK

Tasks were generated similarly to Cleveland and McGill's [6] position-angle experiment. This meant that 5 numbers summing to 100 were generated, with percentage differences ranging from 10% to 97%. Participants completed 3 counterbalanced blocks, one for each chart type. Each block was introduced in a training mode where the researcher walked them through the exact questions used in this task. During the study tasks, participants were asked to teleport, walk around, or use the elevator at least once before giving their answers. They were asked to return to the center of the platform between each of the 54 tasks.
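One way to generate such task data (5 positive numbers summing to 100) is shown below; the sampling scheme is our choice, as the paper does not specify one, and it does not enforce the reported 10%-97% difference range.

```python
import random

def generate_task(n=5, total=100, rng=None):
    """Generate n positive integers summing to `total`, e.g. five
    pie-chart segment values summing to 100 (sampling scheme assumed)."""
    rng = rng or random.Random()
    # Choose n-1 distinct cut points in (0, total) and take the gaps
    # between consecutive cuts as the segment values.
    cuts = sorted(rng.sample(range(1, total), n - 1))
    bounds = [0] + cuts + [total]
    return [b - a for a, b in zip(bounds, bounds[1:])]
```

Each draw yields 5 values that are at least 1 and sum to exactly 100; in practice one would resample until the desired percentage-difference constraints hold.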
§ MEASURES

The measures employed were similar to Studies 1 and 2, except that the final questionnaire asked participants to rank their performance with each chart.

Additionally, to compare participants' angle estimations to their height estimations, we used a very similar formula to calculate the angle estimation error as a percentage of the actual angle being estimated.

$$
\text{ErrorA} = \frac{\left|\text{angle}_{\text{guessed}} - \text{angle}_{\text{actual}}\right|}{\text{angle}_{\text{actual}}}
$$
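This follows the same pattern as the height error, applied to pie segments; a small sketch (function names are ours), including the standard conversion from a segment's share of the total to its angle:

```python
def segment_angle(value: float, total: float = 100.0) -> float:
    """Angle in degrees of a pie segment encoding `value` out of `total`."""
    return 360.0 * value / total

def angle_error(guessed_deg: float, actual_deg: float) -> float:
    """Absolute angle error as a fraction of the actual angle (ErrorA)."""
    return abs(guessed_deg - actual_deg) / actual_deg
```

For instance, a segment holding 25 of 100 spans 90 degrees, and guessing 45 degrees for it gives an ErrorA of 0.5.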
§ PARTICIPANTS

Study 3 had 10 participants. All participant details can be found in Table 1. The study took 40 minutes and participants were remunerated with a 25 CAD gift card.

§ RESULTS

We performed a (chart-type (3) x scale (2)) RM-ANOVA. Overall measurement means and standard deviations can be found in Table 1 and charts are in Figure 7.
§ PERCENTAGE LOG ERROR

There was a main effect of chart-type (F(2,18) = 11.06, p < 0.001). Post-hoc tests showed that Pie Charts were significantly worse than all other conditions (p < 0.05). There was no effect of scale (p = 0.30) and no interaction effects.

§ HEIGHT AND ANGLE ERROR

There was a main effect of chart-type (F(2,18) = 11.39, p < 0.01). Post hoc tests showed that angle estimations had significantly less error than heights (p < 0.01). There was no effect of scale (p = 0.65) and no interaction effects.

§ SUBJECTIVE RANKINGS

The chart-types, sorted by mean rank, are: bar (1.4), pie (2.0), scatter (2.7).

Figure 7. Log percent error (left) and height estimation error (right) for each chart type in Study 3. (Note: error bars show standard error.)
§ SUMMARY OF RESULTS

When it comes to percentage estimations, our results mirror Cleveland and McGill's [6] results: people are better at lengths than they are at angles, by a factor of 2.1 (1.96 in the original study). This result suggests that for percentage estimations, other perceptual tasks (e.g., area) should follow the same patterns in VR as in the original work.

However, when looking at angle/height estimations, participants were better at angles. This could be because one only needs to look at the innermost pie chart point to do this estimation (as opposed to looking at the bottom and top of a large bar chart), because all angles are bounded by 360 degrees (the max angle was 100 degrees in our study), or because the angles were relatively small. Future work should investigate this more closely, especially at larger and smaller scales.
§ DISCUSSION

This work suggests that the distance compression problem that occurs in VR does alter one's perception of data visualizations. However, relative distance tasks, like the percentage estimation tasks in these three studies, appear to be robust to this distance compression, particularly when participants can reach a perspective from which they can view the chart from far back.

§ DESIGN GUIDELINES

§ VR IS GOOD FOR VIRTUAL ENVIRONMENTS

In Study 1, VR had less error than the equivalent screen-based virtual environment for both percentage and height estimations. While VR may not be better in all circumstances (e.g., a flat screen image), when the visualization exists inside a virtual environment, VR can be a good choice for immersive analytics.
§ USE MOVEMENT MODES THAT AVOID NAUSEA

Unity's practitioner guidelines [49] recommend avoiding simulator sickness by designing to avoid vection. Vection occurs in VR when the user's vestibular system receives different signals than their eyes. This occurs most often in VR when the movement modality causes the player to experience motion in VR that their body does not (e.g., mapping movement in VR to a thumbstick on a controller), or when experienced bodily motion has no effect in VR (e.g., not updating the VR environment when the user turns their head).

Although we were not specifically investigating or avoiding nausea, we did find some of the techniques we used successful. We used a fade in/out teleport mechanism and platforms that matched the safely walkable space in the real world in Study 2 and Study 3 to avoid vection. The elevator feature was unfortunately vection inducing because it was activated with the touchpad instead of an equivalent bodily motion. We combated vection-related nausea by severely limiting the elevator speed and using an easing function to prevent sudden stops. A future implementation of an elevator might have 'floors' that can be accessed with a teleport-like fade effect.
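The paper does not give its exact easing curve; a common choice that starts and stops with zero velocity (avoiding the sudden stops mentioned above) is the cubic smoothstep:

```python
def smoothstep(t: float) -> float:
    """Cubic ease-in/ease-out on [0, 1]: zero velocity at both ends."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def elevator_height(start: float, end: float,
                    elapsed: float, duration: float) -> float:
    """Platform height `elapsed` seconds into a move of `duration` seconds."""
    return start + (end - start) * smoothstep(elapsed / duration)
```

A 10 m ascent over 2 seconds then begins at 0 m, passes 5 m at the midpoint, and settles at 10 m with no velocity discontinuity at either end.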
§ ENCODE DATA WITH RELATIVE DISTANCES

One should not have any requirements or expectations that a user can estimate a distance in VR. In all three studies, participants underestimated heights on average by 33% (i.e., estimating about 2/3 of the actual value). Moreover, across all three studies, participants provided extremely low-ball heights 15% of the time, underestimating by more than 50%. Conversely, participants were much more accurate in the percentage estimation task, off by only 6-10% on average across all three studies.

Therefore, designers should encode data in ways that allow users to compare two distances/lengths rather than expecting them to measure a distance directly. In a simple graph like a bar chart or scatter plot, this can be as simple as providing a labelled axis immediately beside the data. Avoid, say, providing a scale for a map that requires one to estimate a distance (e.g., 1 inch = 1 mile). We also recommend that where a measurement of distance is necessary, one should provide a tool which can be used for comparison, like a ruler or a measuring tape.

§ CONSIDER MAXIMUM SCALE

It is important to consider the most extreme perspective that the user can navigate into, or rather, the maximum distance from themselves that a user might be asked to evaluate. In Study 1 participants were quite good at the personal and house scales, and were predictably bad at the skyscraper scale. Therefore, we recommend creating situations that expect users to evaluate distances less than 15 m, though this is just a rough estimate based on the particular distances we used in the study. Future work is needed to provide a better guideline.
§ PROVIDE AN OVERVIEW PERSPECTIVE

In Study 1, larger scales meant higher error in both percentage and height estimations. However, when the ability to view the data from an overview perspective was added in Study 2, this effect disappeared for percentage estimations. If one does use large distances, or data that is far away from the user, providing a way for the user to get a quick overview of relevant data can remove the negative effects of large distances by removing problems like foreshortening. It should be noted that in Study 2, the teleport-anywhere condition, where one could move their view to an overview perspective, was not significantly better than the back-platform-only condition, which automatically provided an overview perspective. Therefore it may be enough to simply provide a generated overview of your visualization, or one could provide freeform movement options like teleport-anywhere, depending on your needs.

§ USERS APPRECIATE ADDITIONAL DEPTH CUES

Given that every participant mentioned how helpful additional depth cues were in Study 1, despite no change in task performance, we would recommend providing as many depth cues as possible to improve users' comfort and perceived competency with the task. This could mean simple things like adding texture to an object, or more complicated, fully developed, contextually relevant environments with relatively sized objects.

§ CONCLUSION

In this paper we investigated how the phenomenon of distance compression alters perception of visualizations in VR. Through three studies that replicate foundational work on the perception of visualizations, we found that estimations of actual lengths, in this case the heights of bars in a bar chart, are negatively impacted by distance compression, but relative distances are not. Furthermore, as with traditional visualizations, people can better estimate relative lengths than relative angles, suggesting that much of the existing perceptual research on visualizations may still apply. Finally, we provide a set of design guidelines for designers wishing to develop VR visualizations that limit the negative effects of distance compression.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/JpX53OXtp1r/Initial_manuscript_md/Initial_manuscript.md
# Real-Time Cinematic Tracking of Targets in Dynamic Environments

Category: Research

## Abstract
Tracking a moving target in a 3D dynamic environment in a cinematic way remains a challenging problem due to the need to simultaneously ensure a low computational cost, a good degree of reactivity to changes, and a high cinematic quality. In this paper, we draw on the idea of Motion-Predictive Control to propose an efficient real-time camera tracking technique which ensures these properties. Our approach relies on the predicted motion of a target to create and evaluate a very large number of camera motions using hardware ray casting. Our evaluation of camera motions includes a range of cinematic properties such as distance to target, visibility, collision, smoothness and jitter. Experiments are conducted to demonstrate the benefits of the approach in relation to prior work.
## 1 INTRODUCTION

The automated generation of cinematic camera motions in 3D virtual environments is a key problem for a number of computer graphics applications (computer games, automated generation of virtual tours, virtual storytelling). The first and foremost problem is to identify the intrinsic characteristics of a good camera motion. While film literature provides a thorough and in-depth analysis of what makes a viewpoint qualitative in terms of framing, angle to target, aesthetic composition, depth-of-field and lighting, the characterisation of camera motions has been far less addressed. This is due to the specifics of real camera rigs (dollies, cranes) that reduce the set of motion possibilities, and the limited use of long camera sequences in movies (with the exception of steadicam sequence shots). In addition, the characteristics of camera motions in movies are strongly guided by the narrative intentions which need to be conveyed (e.g. rhythm, excitation, or a soothing atmosphere).
In transposing the knowledge to the tracking of targets in virtual environments, one can however derive a number of desirable cinematic characteristics such as visibility (avoiding occlusion of the tracked target, and obviously collisions with the environment), smoothness (avoiding jerkiness in trajectories) and continuity (avoiding large changes in viewing angles and distances to target). In practice, these characteristics are often contradictory (avoiding a sudden occlusion requires a sudden acceleration, or a abrupt change in angle). Furthermore, the computational cost of evaluating visibility, collision, continuity and smoothness lowers the possibility of evaluating many alternative camera motions.

Existing work has either addressed the problem using global motion planning techniques, typically based on precomputed roadmaps $\left\lbrack {5,{10},{11}}\right\rbrack$ , or local planning techniques using ray casting [12] and shadow maps for efficient visibility computations $\left\lbrack {1,2,4}\right\rbrack$ . While global motion planning techniques excel at ensuring visibility given their full prior knowledge of the scene, local planning techniques excel at handling strong dynamic changes in the environment. The main bottleneck of both approaches remains the limited capacity to evaluate expensive cinematic properties such as target visibility along a camera motion or in the local neighborhood of a camera.

Our approach builds on the idea of a mixed local+global approach, exploiting a finite time horizon large enough to perform global planning, yet efficient enough to react in real time to sudden changes. This sliding window enables the real-time evaluation of thousands of camera motions by exploiting recent hardware raycasting techniques. As such, our approach draws inspiration from motion-predictive control techniques [13]: optimizing over a finite time horizon, implementing only the current time slot, and then repeating the process on the following time slots.

A strong hypothesis we make is that the target object is controlled by the user through interactive inputs, hence its motions and actions can be predicted within a short time horizon $H$ . Our system comprises two main stages, illustrated in Figure 1. In the first stage, we predict the motion of the target over a given time horizon ${H}^{i}$ by using the target's current position (at time $t$ ) and the user inputs. We then select an ideal camera position at time $t + {H}^{i}$ and propose to define a camera animation space as a collection of smooth camera animations which link the current camera position (at time $t$ ) to the ideal camera location (at time $t + {H}^{i}$ ). In the second stage, we evaluate the quality of the camera animations in the animation space by relying on hardware raycasting techniques, and select the best camera animation. In a way similar to motion-predictive control [13], we then apply part of the camera animation and restart the process at a low frequency (4 Hz) or when a change in the user inputs is detected. Finally, to adapt the camera animation space to the scene configuration, we dynamically adapt a scaling factor on the animation space. As a whole, this process generates a continuous and smooth camera animation which enables the real-time tracking of a target object's motions in fully dynamic and complex environments.
Our contributions are:
- the design of a camera animation space as a dedicated space in which to express a range of camera trajectories;
- an efficient evaluation technique using hardware ray casting;
- a motion predictive control approach that exploits the camera animation space to generate real-time cinematic camera motions.
## 2 RELATED WORK
We narrow the scope of related work to real-time camera planning techniques. For a broader view of camera control techniques in computer graphics, we refer the reader to [3].
## Global camera path planning

Global camera path-planning techniques build on well-known results from robotics such as probabilistic roadmaps, regular cell decompositions or Delaunay triangulations. All have in common the prior computation of a roadmap, a graph in which nodes represent regions of the configuration-free space (points, regular cells or other primitives) and edges represent collision-free links between the nodes. Nieuwenhuisen and Overmars exploited probabilistic roadmaps (PRM) to automatically perform queries within the graph structure, linking given starting and ending configurations [10]. Heuristics were required to smooth the camera trajectories and avoid sudden changes in position and camera angles. Later, Oskam et al. [11] proposed a visibility-aware roadmap, using a sphere-sampling of the configuration-free space and precomputing the combinatorial sphere-to-sphere visibility using stochastic ray-casting.

Lino et al. [7] exploited spatial partitioning techniques as dynamically evolving volumes around targets. The connectivity between volumes allowed a roadmap to be created dynamically, through which camera paths were computed while accounting for visibility and viewpoint semantics along the path.

More recently, Jovane et al. exploited the structure of 3D environments to create topology-aware camera roadmaps that lower the roadmap complexity (compared to probabilistic roadmaps) and enable the exploitation of different cinematic styles.

Yet, in all cases, the cost of precomputing the roadmap, and the difficulty of dynamically updating it to account for changes in the 3D environment, limit the practical applicability of such techniques in strongly dynamic environments, such as those met in computer games or storytelling applications.
## Local camera planning

The other class of real-time camera planning techniques relies on a local knowledge of the environment. Mostly by sampling and evaluating the local neighborhood around the current camera location, such systems are able to decide where to move at the next iteration, while evaluating classical cinematic properties such as visibility, smoothness and continuity. To address the computational cost of evaluating the visibility of targets, Halper et al. [4] exploited shadow maps to compute potential visible sets, coupled with a hierarchical solver. Normand and Christie exploited slanted rendering frustums to compose spatial and temporal visibility for two targets over a small temporal window (10 frames) [2]. Additional criteria were added in order to select the best move to perform at each frame, and to balance between camera smoothness and camera reactivity. Litteneker et al. [8] proposed a local planning technique based on an active contour algorithm.
Burg et al. [1] performed shadow map projections from the targets to the surface of the Toric manifold (a specific manifold space dedicated to camera control [6]). The visibility information provided by the shadow maps was then exploited to move the camera on the surface of the Toric manifold while ensuring secondary visual properties.
Recently, for the specific case of drone cinematography, Nageli et al. [9] built a non-linear model predictive contouring controller to jointly optimize 3D motion paths, the associated velocities and control inputs for a drone.

Our approach partly builds on the work of Nageli et al., borrowing the idea of a receding-horizon process in which motion planning is performed over a large enough time horizon (a few seconds), and the process is repeated at a higher frequency to account for dynamic changes in the environment. Rather than addressing the problem with a non-linear solver, we propose to exploit the hardware raycasting capacities of recent graphics cards to efficiently detect collisions and occlusions, and to evaluate thousands of camera trajectories for each time slot.
## 3 OVERVIEW

Our system aims at tracking, in real time, a target object traveling through a 3D animated scene, by generating a series of smooth, cinematic-like camera motions.
In the following, we will present the construction of our camera animation space (Section 4), before detailing the evaluation of camera animations in this space (Section 5). Finally, we will show how we dynamically adapt and recompute our graph to fit the scene geometry and improve the results (Section 6).
## 4 CAMERA ANIMATION SPACE

We propose the design of a Camera Animation Space as a relative local frame, defined by an initial camera configuration ${\mathbf{q}}_{\text{start }}$ at time $t$ and a final camera configuration ${\mathbf{q}}_{\text{goal }}$ at time $t + {H}^{i}$ (see Figure 2). This local space defines all the possible camera animations that link ${\mathbf{q}}_{\text{start }}$ at time $t$ to ${\mathbf{q}}_{\text{goal }}$ at time $t + {H}^{i}$ . Our goal is to compute the optimal camera motion within this space, considering a number of desired features of the trajectory (smoothness, collision and occlusion avoidance along the camera animation, ...).

| Notation | Description |
| --- | --- |
| ${H}^{i}$ | Time horizon for iteration $i$ (between times ${t}_{i}$ and ${t}_{i} + h$ ) |
| ${B}^{i}\left( t\right)$ | Target behavior (predicted position) at time $t \in {H}^{i}$ |
| ${\mathbf{V}}^{i}$ | Set of preferred viewpoints at time ${t}_{i} + h$ |
| ${\mathbb{Q}}^{i}$ | Camera animation space for horizon ${H}^{i}$ |
| ${M}^{i}$ | Transform matrix of the camera animation space, for ${H}^{i}$ |
| ${\mathbf{q}}_{j}^{i}\left( t\right)$ | 3D position in camera animation ${\mathbf{q}}_{j}^{i} \in {\mathbb{Q}}^{i}$ , at time $t \in {H}^{i}$ |
| ${\mathbf{q}}_{\text{start }}^{i}$ | Starting camera position: ${\mathbf{q}}_{j}^{i}\left( {t}_{i}\right) = {\mathbf{q}}_{\text{start }}^{i},\forall \left( {i, j}\right)$ |
| ${\mathbf{q}}_{\text{goal }}^{i}$ | Goal camera position: ${\mathbf{q}}_{j}^{i}\left( {{t}_{i} + h}\right) = {\mathbf{q}}_{\text{goal }}^{i} \in {\mathbf{V}}^{i},\forall \left( {i, j}\right)$ |
| ${\dot{\mathbf{q}}}_{j}^{i}\left( t\right)$ | Tangent vector of a camera track at time $t$ |
| ${D}_{j}^{i}\left( t\right)$ | The camera view vector at time $t$ |
| $\left( {\mathbf{x},\mathbf{y}}\right)$ | Angle between two vectors $\mathbf{x}$ and $\mathbf{y}$ |

Table 1: Notations used in the paper
To this end, we propose to follow a 3-step process: (i) anticipate the target object's behavior (i.e. its next positions) within a given time horizon, (ii) choose a goal camera viewpoint from which to view the target at the end of the time horizon, and (iii) given this goal viewpoint, and the current one, build and evaluate the space of possible camera animations between them.
### 4.1 Anticipating the target behavior

We here make the strong assumption that we can anticipate, with good approximation, the next positions of the tracked target within a time horizon ${H}^{i}$ . This is classical in character animation engines, where it serves to select the best animation to play (e.g. motion matching). We consider that ${H}^{i}$ begins at time ${t}_{i}$ and has a constant user-defined duration of $h$ seconds. Moreover, we consider that the target behavior will be consistent over the whole horizon ${H}^{i}$ . In our implementation, we consider the target as a rigid body (e.g. a capsule) with a current speed and acceleration, launch a physical simulation over ${H}^{i}$ , then store all simulated positions of the rigid body over time. With this anticipation, we account for the scene geometry which might influence future user inputs, e.g. to make the target avoid collisions. We then refer to the anticipated positions as the target behavior, output in the form of a 3D animation curve ${B}^{i}\left( t\right)$ with $t \in {H}^{i}$ (see Figure 3). Note that one may use another technique to anticipate the target behavior; provided it can output a 3D animation curve ${B}^{i}\left( t\right)$ over time, it will not change the overall workflow of our camera system.
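As a minimal sketch of this rollout, assuming free constant-acceleration motion (the paper runs a full physics simulation that also reacts to the scene geometry; the function name is ours), the discretized behavior curve ${B}^{i}(t)$ can be produced as:

```python
def predict_behavior(p0, v0, acc, h, dt=0.1):
    """Roll out a rigid body with semi-implicit Euler steps over horizon h,
    returning the sampled positions B^i(t) for t in [t_i, t_i + h]."""
    positions = []
    p, v = list(p0), list(v0)
    steps = int(round(h / dt))
    for _ in range(steps + 1):
        positions.append(tuple(p))
        v = [vi + ai * dt for vi, ai in zip(v, acc)]   # update velocity first
        p = [pi + vi * dt for pi, vi in zip(p, v)]     # then position
    return positions
```

In the full system, this rollout would be re-run whenever the user inputs change.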
### 4.2 Selecting a goal viewpoint

We now make the second assumption that the user defines a set of viewpoints to portray the target object. By default, one might use a list of stereotypical viewpoints in movies, such as 3-quarter front and back views, side views, or bird's-eye views. These viewpoints are sorted by order of preference (fixed by the user) in a priority queue $\mathbf{V}$ . Each preferred viewpoint is defined as a 3D position in spherical coordinates $\left( {d,\phi ,\theta }\right)$ , in the local frame of the target's configuration, where $\left( {\phi ,\theta }\right)$ defines the vertical and horizontal viewing angles, and $d$ the viewing distance.

Given this set of viewpoints, we propose, each time we update the target behavior, to select a good viewpoint where the camera should be at the end of the time horizon ${H}^{i}$ , i.e. at time ${t}_{i} + h$ . Considering all viewpoints are in $\mathbf{V}$ , we pop viewpoints by order of priority. We stop as soon as a viewpoint is promising enough, i.e. at time ${t}_{i} + h$ the target will not be occluded from this viewpoint, and the camera will not be in collision with the scene geometry. We then refer to this selected viewpoint as the goal viewpoint ${\mathbf{q}}_{\text{goal }}$ .
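A sketch of this selection loop; the placement formula assumes an axis-aligned target frame, and `is_promising` stands in for the occlusion and collision ray casts of Section 5 (all names are ours, not the authors'):

```python
import math

def viewpoint_to_world(target_pos, d, phi, theta):
    """Convert a preferred viewpoint (d, phi, theta), given in the target's
    local spherical frame, to a world-space camera position."""
    x = d * math.cos(phi) * math.sin(theta)
    y = d * math.sin(phi)
    z = d * math.cos(phi) * math.cos(theta)
    return (target_pos[0] + x, target_pos[1] + y, target_pos[2] + z)

def select_goal_viewpoint(target_pos, preferred, is_promising):
    """Pop viewpoints by order of preference and keep the first promising
    one (neither occluded nor colliding); fall back to the last candidate."""
    for d, phi, theta in preferred:
        q = viewpoint_to_world(target_pos, d, phi, theta)
        if is_promising(q):
            return q
    return viewpoint_to_world(target_pos, *preferred[-1])
```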


Figure 1: System overview: the orange box represents the CPU part of the system; the green box represents the GPU part of the system

Figure 2: Representation of the animation space and its local transform

Figure 3: Representation of the target’s behaviour curve at the ${i}^{th}$ iteration

Figure 4: Ray launched from the camera toward the target's sampling area at time $t$
Knowing the current camera viewpoint ${\mathbf{q}}_{\text{start }}$ (at time ${t}_{i}$ ) and this goal viewpoint ${\mathbf{q}}_{\text{goal }}$ (at time ${t}_{i} + h$ ) provides a good basis to build camera animations that can follow the target behavior. We now need to categorize the full range of possible camera animations.
### 4.3 Sampling camera animations
Given the target behavior to track, we have computed two key viewpoints ${\mathbf{q}}_{\text{start }}$ and ${\mathbf{q}}_{\text{goal }}$ , where the camera should be at start and end times of horizon ${H}^{i}$ . We now propose to categorize the space of possible camera animations, between these two viewpoints. It is worth noting that this space is infinite, which makes it difficult to explore and evaluate. Our idea is to instead create a compact representation of this space, by sampling a large set of animation curves, with a good coverage of the range of animations. We will hereafter note this stochastic set of camera animations as ${\mathbb{Q}}^{i}$ , and a sampled camera animation as ${\mathbf{q}}_{j}^{i}$ , where $j$ is the sample index.

Two requirements should be considered for this sampled space: (i) sampled camera animations should be as smooth as possible, i.e. with low jerk, and (ii) the sampled animation space should make it possible to enforce continuity between successive horizons. To do so, we propose to encode each sampled camera animation as a cubic spline curve on all 3 camera position parameters, as such curves offer ${C}^{2}$ continuity between key-viewpoints. In practice, we make use of Hermite curves. They provide an easy means to sample the space of possible animations, by simply sampling a set of tangent vectors to the spline curve at the start and end positions. ${C}^{1}$ continuity between successive Hermite curve portions is then commonly enforced by aligning both positions and tangents at connecting positions; we rely on the same idea.


Figure 5: Example of a part of a visibility data encoding texture; Black = the target is visible from the camera, Red = the target is occluded or partially occluded from the camera, Blue = the camera is inside the scene geometry


Figure 6: Representation of the positioned animation space, asymmetrically constrained by the scene geometry

In practice we propose, for each camera animation, to complement the starting and goal camera positions ${\mathbf{q}}_{\text{start }}$ and ${\mathbf{q}}_{\text{goal }}$ with two tangents, i.e. the camera velocities ${\dot{\mathbf{q}}}_{\text{start }}$ and ${\dot{\mathbf{q}}}_{\text{goal }}$ (figure 2). To offer a good coverage of the whole animation space, we use a uniform sampling of these tangents in a sphere of radius $r$ (in our tests, we used $r = 5$ ). The number of sampled animations is left as a user-defined parameter, though a sufficient coverage might require sampling several hundred animations (an evaluation of results for different values is provided in section 7.2).
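The sampling can be sketched as follows, assuming per-coordinate cubic Hermite segments and tangents drawn uniformly in a ball of radius $r$ (function names are ours):

```python
import random

def hermite(p0, p1, m0, m1, u):
    """Cubic Hermite interpolation between p0 and p1 with tangents m0, m1,
    for u in [0, 1], applied per coordinate."""
    h00 = 2 * u**3 - 3 * u**2 + 1
    h10 = u**3 - 2 * u**2 + u
    h01 = -2 * u**3 + 3 * u**2
    h11 = u**3 - u**2
    return tuple(h00 * a + h10 * c + h01 * b + h11 * d
                 for a, b, c, d in zip(p0, p1, m0, m1))

def sample_animation_space(q_start, q_goal, n, r=5.0, seed=0):
    """Sample n camera animations by drawing start/goal tangents uniformly
    in a ball of radius r (rejection sampling keeps the draw uniform)."""
    rng = random.Random(seed)
    def tangent():
        while True:
            v = [rng.uniform(-r, r) for _ in range(3)]
            if sum(c * c for c in v) <= r * r:
                return tuple(v)
    return [(q_start, q_goal, tangent(), tangent()) for _ in range(n)]
```

Each sampled tuple fully defines one candidate animation curve, evaluated by `hermite` at any normalized time `u`.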

Recomputing such a stochastic graph at every horizon might be costly and/or lack stability over time. Our last proposition is then to precompute a graph of uniformly sampled camera animations, in an orthonormal coordinate system (as illustrated in figure 2). In this system, ${\mathbf{q}}_{\text{start }}$ and ${\mathbf{q}}_{\text{goal }}$ have coordinates (0,0,0) and (0,0,1) respectively. Then, for any horizon ${H}^{i}$ , we apply a $4 \times 4$ transform matrix ${M}^{i}$ to align the graph onto the computed viewpoints ${\mathbf{q}}_{\text{start }}^{i}$ and ${\mathbf{q}}_{\text{goal }}^{i}$ . It is worth noting that in ${M}^{i}$ the 3D translation, the 3D rotation and the scaling on the $z$ axis make this axis match the vector $\left( {{\mathbf{q}}_{\text{goal }}^{i} - {\mathbf{q}}_{\text{start }}^{i}}\right)$ . Two parameters remain free: the scalings on the other two axes ( $x$ and $y$ ). As a first assumption, we could use the same scaling as for $z$ ; however, we will explain in section 6 how to choose a better scaling, taking collisions and occlusions into consideration.
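A sketch of how ${M}^{i}$ can be assembled; the choice of a stable perpendicular pair for the $x$ and $y$ axes is our assumption, as the paper does not fix one:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def animation_space_transform(q_start, q_goal, sx, sy):
    """Build the 4x4 matrix M^i mapping the canonical animation space
    (q_start at (0,0,0), q_goal at (0,0,1)) onto the scene; sx and sy are
    the free lateral scales discussed in Section 6."""
    d = [g - s for g, s in zip(q_goal, q_start)]
    n = math.sqrt(sum(c * c for c in d)) or 1.0
    z = [c / n for c in d]
    up = (0.0, 1.0, 0.0) if abs(z[1]) < 0.99 else (1.0, 0.0, 0.0)
    x = cross(up, z)
    xn = math.sqrt(sum(c * c for c in x))
    x = [c / xn for c in x]
    y = cross(z, x)
    # columns: lateral axes scaled by sx, sy; z axis carries the start-goal distance
    rows = [[sx * x[i], sy * y[i], n * z[i], q_start[i]] for i in range(3)]
    return rows + [[0.0, 0.0, 0.0, 1.0]]

def apply(M, p):
    """Apply the 4x4 transform to a 3D point of the canonical space."""
    ph = (p[0], p[1], p[2], 1.0)
    return tuple(sum(M[i][j] * ph[j] for j in range(4)) for i in range(3))
```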
## 5 EVALUATING CAMERA ANIMATIONS

In the first stage, we have computed a set of camera animations ${\mathbb{Q}}^{i}$ that can portray the target object's behavior within time horizon ${H}^{i}$ . We now need to select one of these animations as the one to apply to the camera. Our second stage is devoted to evaluating the quality of all animations, and selecting the most promising one, in an efficient way. In the following, we first detail our evaluation criteria, before focusing on how we perform this evaluation.


Figure 7: Projection of the successes and failures of one track on the four axes, with resolution $R = 4$ . a) Collision and occlusion detection. b) Enumeration and projection of the success and failure samples on the axes
### 5.1 Camera animation quality

A proper camera animation to portray the motions of a target object should meet a set of desirable properties, the most important of which are: avoiding collisions with the scene and enforcing visibility of the target object, while offering a smooth series of intermediate viewpoints to the viewer. To evaluate how well these properties are enforced along a camera animation ${\mathbf{q}}_{j}^{i}$ , we propose to rely on a set of costs ${C}_{k}\left( t\right) \in \left\lbrack {0,1}\right\rbrack$ :

Occlusions and Collisions. To evaluate how much the target object is occluded from a camera position ${q}_{j}^{i}\left( t\right)$ , we rely on ray-tracing. We first approximate the target object's geometry with a simple abstraction (e.g. a sphere). We then sample a set of points $s \in \left\lbrack {0, N}\right\rbrack$ onto this abstraction, which we position at the object's anticipated position ${B}^{i}\left( t\right)$ , and launch a ray from the camera to each point $s$ (see figure 4). We note ${R}_{s}\left( t\right)$ the result of this ray launch. We use the same ray to also evaluate whether the camera is in collision (i.e. inside another object of the scene), by setting its value as:
$$
{R}_{s}\left( t\right) = \left\{ \begin{array}{ll} 0 & \text{ if Visible } \\ 1 & \text{ if Occluded } \\ 2 & \text{ if Collided } \end{array}\right.
$$

We distinguish a collision from a simple occlusion as follows. By looking at the normal of the hit geometry, we know whether the ray has hit a back face or a front face. When the ray hits a back face, ${q}_{j}^{i}\left( t\right)$ must be inside a geometry, hence we consider it a camera collision. Conversely, when the ray hits a front face, ${q}_{j}^{i}\left( t\right)$ must be outside the geometry; if the ray does not reach $s$ , we consider $s$ as occluded, otherwise we consider it visible.
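For completeness, one way to pick the $N$ sample points $s$ on a sphere abstraction is a Fibonacci spiral; the paper does not prescribe a particular sampling, so this scheme is our assumption:

```python
import math

def sphere_samples(center, radius, n):
    """Fibonacci-spiral sampling of n roughly evenly spaced points on the
    target's sphere abstraction, used as ray destinations."""
    pts = []
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden-angle increment
    for k in range(n):
        y = 1.0 - 2.0 * (k + 0.5) / n           # latitude in [-1, 1]
        r = math.sqrt(max(0.0, 1.0 - y * y))    # ring radius at that latitude
        th = golden * k
        pts.append((center[0] + radius * r * math.cos(th),
                    center[1] + radius * y,
                    center[2] + radius * r * math.sin(th)))
    return pts
```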
Knowing ${R}_{s}\left( t\right)$ , we define our collision and occlusion costs, as:
$$
{C}_{o}\left( t\right) = \frac{1}{N}\mathop{\sum }\limits_{{s = 0}}^{N}\left\{ \begin{array}{ll} 1 & \text{ if }{R}_{s}\left( t\right) = 1 \\ 0 & \text{ otherwise } \end{array}\right.
$$
and
$$
{C}_{c}\left( t\right) = \frac{1}{N}\mathop{\sum }\limits_{{s = 0}}^{N}\left\{ \begin{array}{ll} 1 & \text{ if }{R}_{s}\left( t\right) = 2 \\ 0 & \text{ otherwise } \end{array}\right.
$$
In our tests, we used $N = {20}$ .
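Given the per-sample ray results ${R}_{s}(t)$, the two costs reduce to simple ratios, as a direct transcription of the formulas above (the function name is ours):

```python
def occlusion_collision_costs(ray_results):
    """Compute (C_o(t), C_c(t)) from the per-sample ray results R_s(t),
    where 0 = visible, 1 = occluded, 2 = camera in collision."""
    n = len(ray_results)
    c_o = sum(1 for r in ray_results if r == 1) / n
    c_c = sum(1 for r in ray_results if r == 2) / n
    return c_o, c_c
```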

Viewpoint variations. Providing a smooth series of intermediate camera viewpoints requires regulating the changes between successive viewpoints. We hence propose to evaluate how much the viewpoint changes over time. We split this evaluation into two distinct costs: one on the camera view angle, and one on the distance to the target object. Both are defined over time steps ${\delta t}$ .

Beforehand, let us introduce the view vector connecting the target object to the camera, on which both costs rely. It is computed as:
$$
{D}_{j}^{i}\left( t\right) = {B}^{i}\left( t\right) - {q}_{j}^{i}\left( t\right)
$$
From this view vector, we define the view angle change as:
$$
{C}_{{\Delta }_{\phi ,\theta }}\left( t\right) = \frac{\left( {D}_{j}^{i}\left( t\right) ,{D}_{j}^{i}\left( t + \delta t\right) \right) }{\pi }
$$

Similarly, we rely on a squared distance variation, defined as:
$$
{\Delta d}\left( t\right) = {\left( \begin{Vmatrix}{D}_{j}^{i}\left( t\right) \end{Vmatrix} - \begin{Vmatrix}{D}_{j}^{i}\left( t + \delta t\right) \end{Vmatrix}\right) }^{2}
$$
We then define a cost on this distance change, which we further normalize as:
$$
{C}_{\Delta d}\left( t\right) = 1 - E\left( {{\Delta d}\left( t\right) ,\lambda }\right)
$$
where $E$ is an exponential decay function, for which we set parameter $\lambda$ to ${10}^{-4}$ .
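The two variation costs can be transcribed as follows; we assume $E(x,\lambda) = e^{-\lambda x}$, which matches the text's "exponential decay" (function names are ours):

```python
import math

def view_angle_cost(d_t, d_next):
    """C_{Δφ,θ}: angle between successive view vectors, normalized by pi."""
    dot = sum(a * b for a, b in zip(d_t, d_next))
    na = math.sqrt(sum(a * a for a in d_t))
    nb = math.sqrt(sum(b * b for b in d_next))
    ang = math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    return ang / math.pi

def distance_change_cost(d_t, d_next, lam=1e-4):
    """C_Δd: exponentially decayed squared change in viewing distance."""
    delta = (math.sqrt(sum(a * a for a in d_t))
             - math.sqrt(sum(b * b for b in d_next))) ** 2
    return 1.0 - math.exp(-lam * delta)
```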

Preferred range of distances. One side effect of the above costs is that, at large distances, changes in view angle and distance are less penalized; in turn, this favors large camera motions. Conversely, placing the camera too close to the target object is not desired either. We should then penalize both behaviors. To do so, we introduce a last cost, aimed at favoring camera animations where the camera remains within a prescribed distance range $\left\lbrack {{d}_{\min },{d}_{\max }}\right\rbrack$ . We formulate this cost as:
$$
{C}_{d}\left( t\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\begin{Vmatrix}{{D}_{j}^{i}\left( t\right) }\end{Vmatrix} \notin \left\lbrack {{d}_{\min },{d}_{\max }}\right\rbrack \\ 0 & \text{ otherwise } \end{array}\right.
$$
### 5.2 Selecting a camera animation

Given the previously introduced costs, we now aim to compute the cost of an entire animation, and to select the most promising one.
In a first step, we define the total cost of a camera animation as a weighted sum of single-criteria costs integrated over time:
$$
C = \mathop{\sum }\limits_{k}{w}_{k} \cdot \left\lbrack {{\int }_{{t}_{i}}^{{t}_{i} + h}{C}_{k}\left( t\right) G\left( {t - {t}_{i},\sigma }\right) {dt}}\right\rbrack
$$

where ${w}_{k} \in \left\lbrack {0,1}\right\rbrack$ is the weight of criterion $k$ , and $G$ is a Gaussian decay function, whose standard deviation $\sigma$ we set to $h/4$ . We also slightly tune the decay to converge toward 0.25 (instead of 0). This way, we give a higher importance to the costs at the beginning of the animation, while still considering its end. Indeed, our assumption is that the camera will only play the first part of it (10% in our tests), while the remaining part still brings longer-term information on what could be a good camera path. We compute this total cost for every camera animation ${q}_{j}^{i} \in {\mathbb{Q}}^{i}$ , and refer to it as ${C}_{j}^{i}$ .
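A midpoint-rule sketch of this weighted integral; the lifted Gaussian $0.25 + 0.75\,G$ is our reading of "converge toward 0.25", and the names are ours:

```python
import math

def total_cost(cost_curves, weights, t_i, h, steps=50):
    """Weighted sum of per-criterion costs C_k(t), integrated over the
    horizon with a Gaussian decay (sigma = h/4) whose tail is lifted so
    the weighting converges to 0.25 instead of 0."""
    sigma = h / 4.0
    dt = h / steps
    total = 0.0
    for w, c in zip(weights, cost_curves):
        acc = 0.0
        for s in range(steps):
            t = t_i + (s + 0.5) * dt                       # midpoint rule
            g = math.exp(-((t - t_i) ** 2) / (2 * sigma ** 2))
            g = 0.25 + 0.75 * g                            # lifted decay
            acc += c(t) * g * dt
        total += w * acc
    return total
```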

In a second step, we choose the most promising camera animation for time horizon ${H}^{i}$ , denoted ${\mathbf{q}}^{i}$ , as the one with minimum total cost, i.e.:
$$
{\mathbf{q}}^{i} = \underset{j}{\arg \min }{C}_{j}^{i}
$$
### 5.3 GPU-based evaluation

We have presented our evaluation metric on camera animations. However, some costs might be expensive to compute; in particular, computing occlusions and collisions requires tracing many rays (i.e. $N$ rays, for many time steps, for hundreds of camera animations). We here focus in more detail on how we propose to perform these computations in a very efficient way.

It is worth noting that many computations can be performed in parallel. The evaluations of camera animations are independent from each other. Similarly, the evaluations of a cost at different time steps along a given animation are also independent. Hence, we propose to cast our evaluation of single costs into a massively-parallel computation on GPU.

Firstly, note that we only need to send the animation space (in its orthonormal coordinate system) to the GPU once, when starting the system. Then, when we need to reposition the camera animation space for horizon ${H}^{i}$ , we simply update the $4 \times 4$ transform matrix ${M}^{i}$ . From this data, one can straightforwardly compute any camera position ${\mathbf{q}}_{j}^{i}\left( t\right)$ for any time $t$ .

Secondly, for occlusion and collision computations, we rely on the recent RTX technology, which allows performing real-time raytracing requests on GPU. We run $\frac{{H}^{i}}{\delta t}$ kernels per track, where each kernel launches $N$ rays, one per sample $s$ picked onto the target object. The results of these computations are stored into a 2D texture (as shown in figure 5), where the texture coordinates $u$ and $v$ map to one time step $t$ and one animation of index $j$ , respectively. Occlusion and collision costs are stored into two different channels.
Thirdly, we rely on a compute shader to compute all other costs, and combine them with occlusion and collision costs. This shader uses one kernel per camera animation. It stores the total cost of all animations into a GPU buffer, finally sent back to CPU where we perform the selection step.
## 6 DYNAMIC TRAJECTORY ADAPTATION

Until now, we have considered a simplified configuration, where we evaluate the animation space and select one camera animation for one given time horizon ${H}^{i}$ . We now need to consider two other requirements. First, the camera should be animated to track the target object for an unknown duration, larger than $h$ ; changes in the target behavior may also occur, due to interactive user inputs. Second, for any horizon ${H}^{i}$ , some camera animations could be in collision with the scene, or the target could be occluded, which would prevent finding a proper animation to apply. In other words, the space of potential camera animations should be influenced by the surrounding scene geometry. Hereafter, we explain how we account for these requirements.
### 6.1 User inputs and interactive update

We here assume the camera is currently animated along curve ${\mathbf{q}}^{i}$ . We then need to compute a new animation, for a new time horizon ${H}^{i + 1}$ , in two cases. First, when the target's behavior has changed, which makes the currently played animation invalid for future time steps. Second, when the camera animation has reached a specific duration. In a way similar to motion-predictive control, we indeed consider the target behavior, as well as the collision and occlusion computations, less reliable after a certain anticipation duration; this in particular allows handling dynamic collisions and occlusions. This update is specified by the user as a ratio of progress along animation ${\mathbf{q}}^{i}$ . In our tests, the horizon duration is $h = 5$ seconds and the update ratio is 0.1. In turn, the new horizon generally starts at ${t}_{i + 1} = {t}_{i} + {0.1h}$ , while we set ${\mathbf{q}}_{\text{start }}^{i + 1} = {\mathbf{q}}^{i}\left( {t}_{i + 1}\right)$ .

Knowing that an update is required, we iterate on the overall process explained earlier, for the next horizon ${H}^{i + 1}$ . We select a new goal viewpoint (i.e. ${\mathbf{q}}_{\text{goal }}^{i + 1}$ ), and update the transform matrix (i.e. ${M}^{i + 1}$ ) to position the camera animation space ${\mathbb{Q}}^{i + 1}$ . We then need to evaluate all camera animations in ${\mathbb{Q}}^{i + 1}$ . To do so, we compute all costs presented in section 5. However, this is not enough, as we also need to enforce continuity between animation ${\mathbf{q}}^{i}$ and the animation ${\mathbf{q}}^{i + 1}$ that is to be selected. To do so, we rely on an additional criterion designed to favor a smooth transition between consecutive animations:

Figure 8: Comparison of our system with an adaptive scale, or with a naïve scale, applied on the camera animation space.

Animation transitions. This cost penalizes abrupt changes when transitioning between two camera animation curves. Our idea is to penalize a wide angle between the tangent vector to camera animation ${\mathbf{q}}^{i}$ and the tangent vector to animation ${\mathbf{q}}_{j}^{i + 1} \in {\mathbb{Q}}^{i + 1}$ , at connection time ${t}_{i + 1}$ . We write this cost as:
$$
{C}_{i, i + 1}\left( j\right) = \frac{\left( {\dot{\mathbf{q}}}^{i}\left( {t}_{i + 1}\right) ,{\dot{\mathbf{q}}}_{j}^{i + 1}\left( {t}_{i + 1}\right) \right) }{\pi }
$$

We then rewrite the selection of camera animation ${\mathbf{q}}^{i + 1}$ as:
$$
{\mathbf{q}}^{i + 1} = \underset{j}{\arg \min }\left\lbrack {{C}_{j}^{i + 1} + {w}_{i, i + 1}{C}_{i, i + 1}\left( j\right) }\right\rbrack
$$
where ${w}_{i, i + 1}$ is the relative weight of the transition cost with regards to other costs.
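Combining the transition cost with the per-animation cost, the selection step can be sketched as follows (function names are ours):

```python
import math

def transition_cost(tan_prev, tan_next):
    """C_{i,i+1}: angle between the outgoing tangent of the current
    animation and the incoming tangent of a candidate, normalized by pi."""
    dot = sum(a * b for a, b in zip(tan_prev, tan_next))
    n = (math.sqrt(sum(a * a for a in tan_prev))
         * math.sqrt(sum(b * b for b in tan_next)))
    return math.acos(max(-1.0, min(1.0, dot / n))) / math.pi

def select_next(costs, tangents, tan_prev, w_trans=1.0):
    """argmin_j [ C_j^{i+1} + w_{i,i+1} * C_{i,i+1}(j) ]."""
    return min(range(len(costs)),
               key=lambda j: costs[j] + w_trans * transition_cost(tan_prev, tangents[j]))
```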
### 6.2 Adapting to the scene geometry
We would also like to adapt our camera animation space to the scene geometry. To this aim, on the one hand we seek to offer a camera animation space which exhibits as few collisions and occlusions as possible. On the other hand, we also seek a space which is not too restricted, i.e. covering as much as possible the free space between the target behavior and the scene geometry.
To do so, while we evaluate the quality of camera animations for a horizon ${H}^{i}$, we also take the opportunity to analyse how many collisions and occlusions occur. In turn, this allows us to determine whether the free space is well covered (or not enough). We then propose to dynamically rescale the camera animation space to make it grow or shrink in the next time horizon ${H}^{i + 1}$. This rescaling applies when we update the transform matrix ${M}^{i + 1}$, and on the $x$ and $y$ axes only. It is worth noting that the free space might not be symmetrical around the target behavior (as illustrated in Figure 6). Indeed, this free space might for instance be larger (or smaller) on the left than on the right of the target. The same applies to the free space above or below the target. Consequently, our idea is to compute four scale values, one for each of the four directions $\{ - x, + x, - y, + y\}$. For any camera position along a camera animation, we then apply two of them, depending on the signs of the position's $x$ and $y$ coordinates in the non-transformed animation space.
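To illustrate, here is a minimal sketch of applying the four per-half-axis scales to a camera position. The dictionary keys `'-x'`, `'+x'`, `'-y'`, `'+y'` and the choice to leave $z$ untouched inside this helper are our own naming assumptions.

```python
def apply_halfaxis_scale(pos, scales):
    """Scale a camera position on x and y, picking the scale factor for each
    axis from the sign of the coordinate in the non-transformed animation
    space. `scales` maps '-x', '+x', '-y', '+y' to factors (assumed names)."""
    x, y, z = pos
    sx = scales['+x'] if x >= 0 else scales['-x']
    sy = scales['+y'] if y >= 0 else scales['-y']
    return (x * sx, y * sy, z)  # the z axis is rescaled elsewhere
```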
We proceed in the following way. We firstly leverage the occlusion and collision evaluation to store additional information: we count fails and successes along each axis. We consider a ray launched along a camera animation (i.e. from the camera position at a given time step) as a fail if it is marked as occluded or collided, and as a success otherwise. We secondly store this information in eight arrays: for each half-axis (e.g. $+ x$ or $- x$), we count successes in one array, and fails in another array. We further discretize this half-axis using a given resolution $R$, to output two histograms, of fails and successes (as illustrated in Figure 7). Note that $R$ here defines the scale precision on each axis. We lastly use both histograms to compute the new scale to apply. We compute the indices ${i}_{f}$ and ${i}_{s}$ of the medians of both arrays (fails and successes, respectively). By comparing them, we define how much we should rescale animations along this half-axis. If ${i}_{s} < {i}_{f}$, we consider that there are too many fails, and multiply the current scale by ${i}_{f}/R$ to shrink animations. Otherwise, we consider that the free space is not covered enough, and apply a passive inflation to the current scale. The aim of this inflation is to help return to a maximum scale value, when the surrounding geometry allows for large camera animations.
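The histogram-based update above can be sketched as follows. Note the assumptions: the inflation factor (1.05) and the maximum scale (1.0) are illustrative values, not taken from the paper.

```python
def median_index(hist):
    """Index of the median sample in a histogram of counts."""
    total = sum(hist)
    if total == 0:
        return 0
    acc = 0
    for i, count in enumerate(hist):
        acc += count
        if 2 * acc >= total:
            return i
    return len(hist) - 1

def update_scale(fails, successes, scale, R, inflation=1.05, max_scale=1.0):
    """One half-axis update: shrink the scale when fails dominate (i_s < i_f),
    otherwise passively inflate it back toward the maximum scale."""
    i_f, i_s = median_index(fails), median_index(successes)
    if i_s < i_f:
        return scale * (i_f / R)          # too many fails far out: shrink
    return min(scale * inflation, max_scale)  # inflate back toward maximum
```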
## 7 IMPLEMENTATION AND RESULTS
### 7.1 Implementation
We implemented our camera system within the Unity3D 2019 game engine. We compute our visibility and occlusion textures through raytracing shaders provided with Unity's integrated pipeline, and compute our scores for all sampled animations and timesteps through Unity Compute Shaders. All our results (detailed in Section 7.2) have been processed on a laptop computer with an Intel Core i9-9880H CPU @ 2.30GHz and an NVIDIA Quadro RTX 4000.
### 7.2 Results
We split our evaluation into three parts. We firstly validate our adaptive scale mechanism. We secondly evaluate the robustness of our system, by comparing its performances when using different numbers or sets of reference camera animations. We thirdly validate the ability of our system, mixing local and global planning approaches, to outperform a purely local camera planning system. To do so, we compare results obtained with our system and with that of Burg et al. [1], on the same test scenes.
Figure 9: Results for multiple runs, each using a randomly generated camera animation space. This space is sampled with uniform distribution, with 2400 sample camera animations (Hermite curves). Each plot shows the mean value over time (blue), with a 95% confidence interval (red). Left: results for 22 runs using different seeds. Right: results for 10 runs using the same seed.
To validate our adaptive scale, we study its impact on the quality of the animation space. For the other evaluations, we compare camera systems with regard to two main criteria: how well the camera maintains visibility on the target object, and how smooth camera motions are. We compute visibility by launching rays onto the target object and calculating the ratio of rays reaching the target. A ratio of 1 (respectively 0) means that the target is fully visible (respectively fully occluded). When relevant, we additionally provide statistics on the duration of partial occlusions. We then compare the quality of camera motions through their time derivatives (speed, acceleration and jerk), which provide a good indication of motion smoothness.
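For illustration, both visibility measures could be computed as follows from per-frame ray results; the boolean ray outcomes and the sampling step `dt` stand in for the actual raycasts.

```python
def visibility_ratio(ray_hits):
    """Fraction of rays reaching the target: 1.0 = fully visible,
    0.0 = fully occluded. `ray_hits` is one boolean per launched ray."""
    if not ray_hits:
        return 0.0
    return sum(1 for hit in ray_hits if hit) / len(ray_hits)

def occlusion_durations(ratios, dt):
    """Durations (seconds) of maximal runs where the target is partially
    or fully occluded (visibility ratio < 1), from a sampled time series."""
    durations, run = [], 0
    for r in ratios:
        if r < 1.0:
            run += 1
        elif run:
            durations.append(run * dt)
            run = 0
    if run:
        durations.append(run * dt)
    return durations
```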
Our comparisons have been performed within 4 different scenes (illustrated in the accompanying video). We validated our system by using (i) a Toy example scene, where the target travels through a maze containing several tight corridors with sharp turns, an open area inside a building, and a ramp. We then performed the comparisons with the technique of Burg et al. [1], by using two static scenes and a dynamic scene, which the target goes through: (ii) a scene with a set of columns and a gate (Columns+Gate), (iii) a scene with a set of small and large spheres (Spheres) and (iv) a fully dynamic scene with a set of randomly falling and rolling boxes, and a randomly sliding wall (Dynamic). To provide fair comparisons, in the dynamic scene, we pre-process the random motions of the boxes and of the wall. As well, for all tests in a scene, we play a pre-recorded trajectory of the target avatar, but let the camera system run as if the avatar was interactively controlled by a user.
#### 7.2.1 Impact of adaptive scale
We validate our adaptive scale (Section 6.2) by comparing results obtained (i) when we compute and apply the adaptive scale on all 4 half-axes $\left( {-x, + x, - y, + y}\right)$, and (ii) when we simply apply the same scale as for the $z$ axis (which we will call the naive scale technique). We processed our tests by using the toy example scene. For each technique, each time we evaluate a new set of camera animations, we output the new scale values and the ratio of fails on each half-axis. As well, we output and plot the mean cost of the 5 best animations in this set. Results are presented in Figure 8.
Figure 8a shows how much our mechanism tightens the animation space (compared to the naive scaling technique) when the avatar is entering corridors, and how it grows back to the same scale when the avatar reaches less cluttered areas (e.g. in the open interior room, or the outdoor area). As expected, our mechanism allows adapting the scale on half-spaces in a non-symmetrical way. As shown by Figure 8b, with our adaptive mechanism, the scaled animation space also exhibits fewer fails than with the naive scale technique. As well, as shown by Figure 8c, it allows finding animations with lower cost most of the time. One exception is between 40s and 50s, where the camera configuration differs because the scale differs: in the naive case the camera is high above the character, while in the adaptive case the camera is closer to the ground. The costs are thus not relevant in this interval, because the two configurations are too different to be compared.
In the next evaluations, we consider that the adaptive scale mechanism is always activated.
#### 7.2.2 Robustness
We also study the robustness of our system regarding our randomly generated camera animation space.
In a first step, we evaluate how performances vary when we run our real-time system multiple times on the toy example scene. We consider two cases: (i) using the same seed for every run (i.e. the same animation space is used), and (ii) using a new seed for every run (i.e. a new animation space is randomly sampled for each run). For each case, we sample a set of 2400 animations. Results are presented in Figure 9. As it shows, with this many sampled animations, all runs lead to very similar results, both on the visibility enforcement and on the camera motion smoothness. Differences are mainly due to variations in the actual framerate of the game engine, hence in the rate at which the system takes new decisions.
Figure 10: Visibility when varying the number of sampled curves in our camera animation space.
In a second step, we evaluate how the size of the animation space (i.e. the number of sampled animations) impacts performances. We ran our system with 4 different sizes: 2400, 1600, 800 or 100 animations. For each size, we performed 5 runs with random seeds, and combined the results in Figures 10, 11 and 12. They show that lowering the size (at least down to 800 animations) still allows good performances. Our camera system is able to find a series of camera animations maintaining enough visibility on the target object, through smooth camera motions. As we expected, with 100 animations our system's performances are poor: it becomes harder to find animations providing sufficient visibility and ensuring smooth camera motions. Our intuition behind this result is that as soon as the size becomes too small, the distribution of tangents becomes very sparse, hence breaking our assumption of a uniform sampling. If the animation space does not sufficiently cover the free space, this prevents the exploration of a wide range of possible camera animations.
#### 7.2.3 Comparison to Burg 2020
We also compare our system, mixing local and global planning approaches, to a purely local camera planning system. To this aim, we have run our proposed camera system, and the local camera planning system of Burg et al. [1], in 3 different scenes: two static scenes (Columns+Gate and Spheres) and a fully dynamic scene (Dynamic). The Columns+Gate scene is the same as in [1], where the avatar moves between some columns and goes through a doorway. In the Spheres scene, the avatar travels through a scene filled with a large set of spheres, which makes it moderately challenging for the camera systems. In the Dynamic scene, the avatar must go through a flat area, where a set of boxes are randomly flying, falling and rolling all over the place, and a wall is randomly sliding. This makes it challenging for camera systems to anticipate the scene dynamics and find occlusion-free and collision-free camera paths.
Figure 11: Camera speed when varying the number of sampled curves in our camera animation space.
In our camera system, we used 2400 animations, the recomputation rate is set to 0.25s, and the adaptive scaling is on. We present the results of our tests in Figures 13, 14, 15, 16, and 17.
We firstly compare the camera systems on their ability to enforce visibility on the target object (Figure 13). Our tests show that for moderately challenging scenes, both lead to relatively good results: few occlusions occur. However, for a more challenging scene (Dynamic), our system outperforms Burg et al.'s system. Even if occlusions may occur more often, the degree of occlusion is lower (Figure 13b). Moreover, for all 3 scenes, when partial occlusions occur, they are shorter when using our system (Figure 13c). This is explained by the fact that when no local solution exists, our system can still find a locally occluded path respecting the other constraints, and leading to a less occluded area. This demonstrates our system's ability to better anticipate occlusions, especially in dynamic scenes.
We secondly compare the smoothness of camera motions in both camera systems. Figure 14 presents, side-by-side, the distributions of speed, acceleration and jerk when using each system. We also provide the speed, acceleration and jerk along time, in Figures 15, 16, and 17. One observation we make is that Burg et al.'s system leads to lower camera speeds, as it restricts itself to simply following the avatar. In our camera system, the camera is allowed to move faster, to bypass the avatar when visibility or another constraint may be poorly satisfied. Yet, our system provides smoother motions (i.e. less jerk). One explanation is that local systems often need to steer the camera away from local minima (e.g. low visibility areas). A side effect is that this may lead, over successive iterations, to an indecision on which direction the camera should take to reach better visibility. In turn, this leads to frequent changes in camera acceleration (hence higher jerk). Conversely, our system has a more global knowledge of the scene, allowing it to more easily find a better path, which avoids sacrificing the smoothness of camera motions.
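For reference, such smoothness metrics can be derived from sampled camera positions by finite differences. A simple sketch, under the assumption of forward differences and a uniform time step:

```python
def derivatives(positions, dt):
    """Speed, acceleration and jerk magnitudes from sampled 3D camera
    positions, via successive forward finite differences."""
    def diff(seq):
        # component-wise (next - current) / dt
        return [tuple((b - a) / dt for a, b in zip(p, q))
                for p, q in zip(seq, seq[1:])]
    def mags(vecs):
        return [sum(c * c for c in v) ** 0.5 for v in vecs]
    vel = diff(positions)
    acc = diff(vel)
    jerk = diff(acc)
    return mags(vel), mags(acc), mags(jerk)
```

A perfectly uniform straight-line motion yields constant speed and zero acceleration and jerk, which is the baseline against which jerky camera paths stand out.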
Figure 12: Camera jerk when varying the number of sampled curves in our camera animation space.
## 8 DISCUSSION AND CONCLUSION
Our system presents a number of limitations. Despite the ability to evaluate thousands of trajectories, strongly cluttered environments remain challenging. As smoothness is enforced, visibility may be lost in specific cases, and designing a technique that could properly balance these properties in such cases remains to be addressed. Also, while the dynamic scale adaptation does improve results by compressing the trajectories in different half-spaces, low scale values prevent the camera from performing larger motions where necessary. Future work could consist in biasing the sampling in the animation space in order to adapt the space to typical local topologies of the 3D environment. Despite these limitations, the proposed work improves over existing contributions by providing an efficient camera tracking technique which is adapted to dynamic 3D environments and does not require heavy roadmap precomputations.
Figure 13: Comparison between our system and Burg et al. [1], regarding the target object's visibility (a)(b) and, when not fully visible, the duration of partial occlusion (c).
## REFERENCES
[1] L. Burg, C. Lino, and M. Christie. Real-time anticipation of occlusions for automated camera control in toric space. In Computer Graphics Forum, volume 39, pages 523-533. Wiley Online Library, 2020.
[2] M. Christie, J.-M. Normand, and P. Olivier. Occlusion-free camera control for multiple targets. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2012.
[3] M. Christie, P. Olivier, and J.-M. Normand. Camera control in computer graphics. In Computer Graphics Forum, volume 27, pages 2197-2218. Wiley Online Library, 2008.
[4] N. Halper, R. Helbing, and T. Strothotte. A camera engine for computer games: Managing the trade-off between constraint satisfaction and frame coherence. In Computer Graphics Forum, volume 20, pages 174-183. Wiley Online Library, 2001.
[5] A. Jovane, A. Louarn, and M. Christie. Topology-aware camera control for real-time applications. In Motion, Interaction and Games, pages 1-10. 2020.
[6] C. Lino and M. Christie. Intuitive and efficient camera control with the toric space. ACM Transactions on Graphics (TOG), 34(4):1-12, 2015.
Figure 14: Comparison between our system and Burg et al. [1], regarding the camera speed (a), acceleration (b) and jerk (c) distributions.
Figure 15: Speed along time, for our camera system (blue) and the one of Burg et al. [1] (red)
[7] C. Lino, M. Christie, F. Lamarche, G. Schofield, and P. Olivier. A real-time cinematography system for interactive 3D environments. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 139-148. Eurographics Association, 2010.
[8] A. Litteneker and D. Terzopoulos. Virtual cinematography using optimization and temporal smoothing. In Proceedings of the Tenth International Conference on Motion in Games, pages 1-6, 2017.

Figure 16: Acceleration along time, for our camera system (blue) and the one of Burg et al. [1] (red)

Figure 17: Jerk along time, for our camera system (blue) and the one of Burg et al. [1] (red)
[9] T. Nägeli, L. Meier, A. Domahidi, J. Alonso-Mora, and O. Hilliges. Real-time planning for automated multi-view drone cinematography. ACM Transactions on Graphics (TOG), 36(4):1-10, 2017.
[10] D. Nieuwenhuisen and M. H. Overmars. Motion planning for camera movements. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA'04. 2004, volume 4, pages 3870- 3876. IEEE, 2004.
[11] T. Oskam, R. W. Sumner, N. Thuerey, and M. Gross. Visibility transition planning for dynamic camera control. In Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 55-65, 2009.
[12] R. Ranon and T. Urli. Improving the efficiency of viewpoint composition. IEEE Transactions on Visualization and Computer Graphics, 20(5):795-807, 2014.
[13] P. O. Scokaert and D. Q. Mayne. Min-max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic control, 43(8):1136-1142, 1998.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/JpX53OXtp1r/Initial_manuscript_tex/Initial_manuscript.tex
§ REAL-TIME CINEMATIC TRACKING OF TARGETS IN DYNAMIC ENVIRONMENTS
Category: Research
§ ABSTRACT
Tracking a moving target in a 3D dynamic environment in a cinematic way remains a challenging problem, due to the need to simultaneously ensure a low computational cost, a good degree of reactivity to changes, and a high cinematic quality. In this paper, we draw on the idea of Motion-Predictive Control to propose an efficient real-time camera tracking technique which ensures these properties. Our approach relies on the predicted motion of a target to create and evaluate a very large number of camera motions using hardware ray casting. Our evaluation of camera motions includes a range of cinematic properties such as distance to target, visibility, collision, smoothness and jitter. Experiments are conducted to demonstrate the benefits of the approach in relation to prior work.
§ 1 INTRODUCTION
The automated generation of cinematic camera motions in 3D virtual environments is a key problem for a number of computer graphics applications (computer games, automated generation of virtual tours, virtual storytelling). The first and foremost problem is to identify the intrinsic characteristics of a good camera motion. While the film literature provides a thorough and in-depth analysis of what makes a viewpoint of good quality in terms of framing, angle to target, aesthetic composition, depth-of-field, and lighting, the characterisation of camera motions has been far less addressed. This is due to the specifics of real camera rigs (dollies, cranes), which reduce the set of motion possibilities, and to the limited use of long camera sequences in movies (with the exception of steadicam sequence shots). In addition, the characteristics of camera motions in movies are strongly guided by the narrative intentions which need to be conveyed (e.g. rhythm, excitement, or a soothing atmosphere).
In transposing this knowledge to the tracking of targets in virtual environments, one can however derive a number of desirable cinematic characteristics such as visibility (avoiding occlusion of the tracked target, and obviously collisions with the environment), smoothness (avoiding jerkiness in trajectories) and continuity (avoiding large changes in viewing angles and distances to the target). In practice, these characteristics are often contradictory (avoiding a sudden occlusion requires a sudden acceleration, or an abrupt change in angle). Furthermore, the computational cost of evaluating visibility, collision, continuity and smoothness limits the possibility of evaluating many alternative camera motions.
Existing work has either addressed the problem using global motion planning techniques, typically based on precomputed roadmaps [5, 10, 11], or local planning techniques using ray casting [12] and shadow maps for efficient visibility computations [1, 2, 4]. While global motion planning techniques excel at ensuring visibility given their full prior knowledge of the scene, local planning techniques excel at handling strong dynamic changes in the environment. The main bottleneck of both approaches remains the limited capacity to evaluate expensive cinematic properties such as target visibility along a camera motion or in the local neighborhood of a camera.
Our approach builds on the idea of performing a mixed local+global approach by exploiting a finite time horizon large enough to perform global planning, yet efficient enough to react in real-time to sudden changes. This sliding window enables the real-time evaluation of thousands of camera motions by exploiting recent hardware raycasting techniques. As such, our approach draws inspiration from Motion-Predictive Control techniques [13]: optimizing over a finite time horizon, only implementing the current time slot, and then repeating the process on the following time slots.
A strong hypothesis we make is that the target object is controlled by the user through interactive inputs, hence its motions and actions can be predicted within a short time horizon $H$. Our system comprises 2 main stages, illustrated in Figure 1. In the first stage, we predict the motion of the target over a given time horizon ${H}^{i}$ by using the target's current position (at time $t$) and the user inputs. We then select an ideal camera position at time $t + {H}^{i}$ and propose to define a camera animation space as a collection of smooth camera animations which link the current camera position (at time $t$) to the ideal camera location (at time $t + {H}^{i}$). In the second stage, we evaluate the quality of the camera animations in the animation space by relying on hardware raycasting techniques, and select the best camera animation. In a way similar to motion-predictive control [13], we then apply part of the camera animation and restart the process at a low frequency (4 Hz) or when a change in the user inputs is detected. Finally, to adapt the camera animation space to the scene configuration, we dynamically adapt a scaling factor on the animation space. As a whole, this process generates a continuous and smooth camera animation which enables the real-time tracking of a target object's motions in fully dynamic and complex environments.
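The two-stage, receding-horizon process above can be sketched as the following loop; all callbacks (`predict_target`, `build_space`, `evaluate`, `apply_segment`) are placeholders for the stages just described, not actual APIs.

```python
def mpc_camera_loop(predict_target, build_space, evaluate, apply_segment,
                    horizon=2.0, replan_dt=0.25, steps=8):
    """Receding-horizon sketch: predict the target over a horizon, build and
    evaluate a space of candidate camera animations, apply only the first
    slice of the best one, then repeat at a fixed re-planning rate."""
    applied = []
    t = 0.0
    for _ in range(steps):
        behavior = predict_target(t, horizon)           # stage 1: prediction
        space = build_space(behavior)                   # candidate animations
        best = min(space, key=evaluate)                 # stage 2: evaluation
        applied.append(apply_segment(best, replan_dt))  # play current slice
        t += replan_dt                                  # re-plan (~4 Hz)
    return applied
```

In practice the loop would also restart early on a detected change in user inputs; the sketch only shows the fixed-rate path.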
Our contributions are:
* the design of a camera animation space as a dedicated space in which to express a range of camera trajectories;
* an efficient evaluation technique using hardware ray casting;
* a motion predictive control approach that exploits the camera animation space to generate real-time cinematic camera motions.
§ 2 RELATED WORK
We narrow the scope of related work to real-time camera planning techniques. For a broader view of camera control techniques in computer graphics, we refer the reader to [3].
§ GLOBAL CAMERA PATH PLANNING
Global camera path-planning techniques build on well-known results from robotics such as probabilistic roadmaps, regular cell decompositions or Delaunay triangulations. All have in common the prior computation of a roadmap, a graph whose nodes represent regions of the configuration-free space (points, regular cells or other primitives), and whose edges represent collision-free links between the nodes. Nieuwenhuisen and Overmars exploited probabilistic roadmaps (PRM) to automatically perform queries within the graph structure, linking given starting and ending configurations [10]. Heuristics were required to smooth the camera trajectories and avoid sudden changes in position and camera angles. Later, Oskam et al. [11] proposed a visibility-aware roadmap, by using sphere-sampling of the configuration-free space and precomputing the combinatorial sphere-to-sphere visibility using stochastic ray casting.
Lino et al. [7] exploited spatial partitioning techniques as dynamically evolving volumes around targets. The connectivity between volumes made it possible to dynamically create a roadmap, through which camera paths were computed while accounting for visibility and viewpoint semantics along the path.
More recently, Jovane et al. [5] exploited the 3D environments to create topology-aware camera roadmaps that lower the roadmap complexity (compared to probabilistic roadmaps) and enable the exploitation of different cinematic styles.
Yet, in all cases, the cost of precomputing the roadmap, and the difficulty of dynamically updating it to account for changes in the 3D environment, limit the practical applicability of such techniques to strongly dynamic environments, such as those met in computer games or storytelling applications.
§ LOCAL CAMERA PLANNING
The other class of real-time camera planning techniques relies on a local knowledge of the environment. Mostly by sampling and evaluating the local neighborhood around the current camera location, such systems are able to decide where to move at the next iteration, while evaluating classical cinematic properties such as visibility, smoothness and continuity. To address the computational issue of evaluating the visibility of targets, Halper et al. [4] exploited shadow maps to compute potential visible sets, coupled with a hierarchical solver. Normand and Christie exploited slanted rendering frustums to compose spatial and temporal visibility for two targets over a small temporal window (10 frames) [2]. Additional criteria were added in order to select the best move to perform at each frame, and to balance between camera smoothness and camera reactivity. Litteneker et al. [8] proposed a local planning technique based on an active contour algorithm.
Burg et al. [1] performed shadow map projections from the targets to the surface of the Toric manifold (a specific manifold space dedicated to camera control [6]). The visibility information provided by the shadow maps was then exploited to move the camera on the surface of the Toric manifold while ensuring secondary visual properties.
Recently, for the specific case of drone cinematography, Nägeli et al. [9] built a non-linear model predictive contouring controller to jointly optimize the 3D motion paths, the associated velocities, and the control inputs for a drone.
Our approach partly builds on the work of Nägeli et al., borrowing the idea of a receding-horizon process in which motion planning is performed over a large enough time horizon (a few seconds), and the process is repeated at a higher frequency to account for dynamic changes in the environment. Rather than addressing the problem using a non-linear solver, we propose in this paper to exploit the hardware raycasting capacities of recent graphics cards to efficiently detect collisions and occlusions, and to evaluate thousands of camera trajectories for each time slot.
§ 3 OVERVIEW
Our system aims at tracking in real-time a target object traveling through a 3D animated scene, by generating a series of smooth, cinematic-like camera motions.
In the following, we will present the construction of our camera animation space (Section 4), before detailing the evaluation of camera animations in this space (Section 5). Finally, we will show how we dynamically adapt and recompute our graph to fit the scene geometry and improve the results (Section 6).
§ 4 CAMERA ANIMATION SPACE
We propose the design of a Camera Animation Space as a relative local frame, defined by an initial camera configuration ${\mathbf{q}}_{\text{start}}$ at time $t$ and a final camera configuration ${\mathbf{q}}_{\text{goal}}$ at time $t + {H}^{i}$ (see Figure 2). This local space defines all the possible camera animations that link ${\mathbf{q}}_{\text{start}}$ at time $t$ to ${\mathbf{q}}_{\text{goal}}$ at time $t + {H}^{i}$. Our goal is to compute the optimal camera motion within this space, considering a number of desired features on the trajectory (smoothness, collision and occlusion avoidance along the camera animation, ...).
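To make this concrete, here is a minimal sketch of sampling one smooth candidate animation between the start and goal configurations, using a cubic Hermite curve with random tangents; the random tangent distribution is our own assumption about how such a space could be populated.

```python
import random

def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation between p0 and p1 with tangents m0, m1."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return tuple(h00 * a + h10 * c + h01 * b + h11 * d
                 for a, b, c, d in zip(p0, p1, m0, m1))

def sample_animation(q_start, q_goal, rng, n=16):
    """One candidate animation: a curve from q_start to q_goal whose shape
    is controlled by two randomly drawn tangents (illustrative choice)."""
    m0 = tuple(rng.uniform(-1, 1) for _ in range(3))
    m1 = tuple(rng.uniform(-1, 1) for _ in range(3))
    return [hermite(q_start, q_goal, m0, m1, k / (n - 1)) for k in range(n)]
```

Every sampled curve interpolates the two endpoint configurations exactly; only its in-between shape varies with the drawn tangents.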
| Notation | Meaning |
| --- | --- |
| ${H}^{i}$ | Time horizon for iteration $i$ (between times ${t}_{i}$ and ${t}_{i} + h$) |
| ${B}^{i}\left( t\right)$ | Target behavior (predicted position) at time $t \in {H}^{i}$ |
| ${\mathbf{V}}^{i}$ | Set of preferred viewpoints at time ${t}_{i} + h$ |
| ${\mathbb{Q}}^{i}$ | Camera animation space for horizon ${H}^{i}$ |
| ${M}^{i}$ | Transform matrix of the camera animation space, for ${H}^{i}$ |
| ${\mathbf{q}}_{j}^{i}\left( t\right)$ | 3D position in camera animation ${\mathbf{q}}_{j}^{i} \in {\mathbb{Q}}^{i}$, at time $t \in {H}^{i}$ |
| ${\mathbf{q}}_{\text{start}}^{i}$ | Starting camera position: ${\mathbf{q}}_{j}^{i}\left( {t}_{i}\right) = {\mathbf{q}}_{\text{start}}^{i},\forall \left( {i,j}\right)$ |
| ${\mathbf{q}}_{\text{goal}}^{i}$ | Goal camera position: ${\mathbf{q}}_{j}^{i}\left( {{t}_{i} + h}\right) = {\mathbf{q}}_{\text{goal}}^{i} \in {\mathbf{V}}^{i},\forall \left( {i,j}\right)$ |
| ${\dot{\mathbf{q}}}\left( t\right)$ | Tangent vector of a camera track at time $t$ |
| — | The camera view vector at time $t$ |
| $\left( {\mathbf{x},\mathbf{y}}\right)$ | Angle between two vectors $\mathbf{x}$ and $\mathbf{y}$ |

Table 1: Notations used in the paper
|
| 86 |
+
|
| 87 |
+
To this end, we follow a 3-step process: (i) anticipate the target object's behavior (i.e. its next positions) within a given time horizon, (ii) choose a goal camera viewpoint from which to view the target at the end of the time horizon, and (iii) given this goal viewpoint and the current one, build and evaluate the space of possible camera animations between them.

§ 4.1 ANTICIPATING THE TARGET BEHAVIOR

We here make the strong assumption that we can anticipate, with good approximation, the next positions of the tracked target within a time horizon ${H}^{i}$. This is classical in character animation engines, where it serves to select the best animation to play (e.g. motion matching). We consider that ${H}^{i}$ begins at time ${t}_{i}$ and has a constant, user-defined duration of $h$ seconds. Moreover, we consider that the target behavior will be consistent over the whole horizon ${H}^{i}$. In our implementation, we treat the target as a rigid body (e.g. a capsule) with a current speed and acceleration, launch a physical simulation over ${H}^{i}$, and store all simulated positions of the rigid body over time. With this anticipation, we account for the scene geometry, which might influence future user inputs, e.g. to make the target avoid collisions. We refer to the anticipated positions as the target behavior, output in the form of a 3D animation curve ${B}^{i}\left( t\right)$ with $t \in {H}^{i}$ (see Figure 3). Note that any other technique may be used to anticipate the target behavior, provided it outputs a 3D animation curve ${B}^{i}\left( t\right)$ over time; this does not change the overall workflow of our camera system.

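For illustration, the anticipation step can be sketched as follows. This is a minimal stand-in for the rigid-body simulation described above: it assumes constant acceleration and ignores scene collisions (which the actual physics simulation would resolve); the function name and parameters are ours, not part of the system.

```python
def anticipate_behavior(p0, v0, a0, h=5.0, dt=0.1):
    """Sample a predicted target curve B^i(t) over the horizon [0, h].

    p0, v0, a0: initial position, velocity and acceleration (3-tuples).
    Returns a list of (t, position) samples, i.e. a discretized B^i(t).
    """
    samples = []
    steps = int(round(h / dt))
    for k in range(steps + 1):
        t = k * dt
        # constant-acceleration integration; a real engine would also
        # resolve collisions of the capsule against the scene geometry
        pos = tuple(p0[i] + v0[i] * t + 0.5 * a0[i] * t * t for i in range(3))
        samples.append((t, pos))
    return samples
```

Any predictor with this output shape (a time-stamped 3D curve) can be substituted without changing the rest of the pipeline.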
§ 4.2 SELECTING A GOAL VIEWPOINT

We now make the second assumption that the user defines a set of viewpoints to portray the target object. By default, one might use a list of viewpoints stereotypical in movies, such as three-quarter front and back views, side views, or bird's-eye views. These viewpoints are sorted by order of preference (fixed by the user) in a priority queue $\mathbf{V}$. Each preferred viewpoint is defined as a 3D position in spherical coordinates $\left( {d,\phi ,\theta }\right)$, in the local frame of the target's configuration, where $\left( {\phi ,\theta }\right)$ defines the vertical and horizontal viewing angles, and $d$ the viewing distance.

Given this set of viewpoints, each time we update the target behavior, we select a good viewpoint where the camera should be at the end of the time horizon ${H}^{i}$, i.e. at time ${t}_{i} + h$. Considering all viewpoints in $\mathbf{V}$, we pop viewpoints by order of priority. We stop as soon as a viewpoint is promising enough, i.e. one from which, at time ${t}_{i} + h$, the target will not be occluded and the camera will not be in collision with the scene geometry. We refer to this selected viewpoint as the goal viewpoint ${\mathbf{q}}_{\text{goal}}$.

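The selection loop above can be sketched as follows. This is a minimal Python sketch under our own conventions: the `is_occluded` and `is_colliding` callbacks stand in for the ray-traced tests of Section 5, and all identifiers are ours.

```python
import math

def spherical_to_world(target_pos, target_yaw, d, phi, theta):
    """Convert a preferred viewpoint (d, phi, theta), expressed in the
    target's local frame, to a world-space camera position.
    phi: vertical angle, theta: horizontal angle (radians), d: distance."""
    yaw = target_yaw + theta          # horizontal angle relative to facing
    x = target_pos[0] + d * math.cos(phi) * math.cos(yaw)
    y = target_pos[1] + d * math.sin(phi)
    z = target_pos[2] + d * math.cos(phi) * math.sin(yaw)
    return (x, y, z)

def select_goal_viewpoint(preferred, target_end_pos, target_yaw,
                          is_occluded, is_colliding):
    """Pop viewpoints by user priority and keep the first promising one,
    i.e. one from which the target is not occluded and the camera is not
    in collision at time t_i + h. Falls back to the last candidate when
    none passes (one possible policy; the paper does not specify)."""
    goal = None
    for (d, phi, theta) in preferred:      # sorted by order of preference
        goal = spherical_to_world(target_end_pos, target_yaw, d, phi, theta)
        if not is_occluded(goal) and not is_colliding(goal):
            return goal
    return goal
```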
Figure 1: System overview: the orange box represents the CPU part of the system; the green box represents the GPU part of the system

Figure 2: Representation of the animation space and its local transform

Figure 3: Representation of the target's behavior curve at the ${i}^{th}$ iteration

Figure 4: Ray launched from the camera toward the target's sampling area at time $t$

Knowing the current camera viewpoint ${\mathbf{q}}_{\text{start}}$ (at time ${t}_{i}$) and this goal viewpoint ${\mathbf{q}}_{\text{goal}}$ (at time ${t}_{i} + h$) provides a good basis to build camera animations that can follow the target behavior. We now need to characterize the full range of possible camera animations.

§ 4.3 SAMPLING CAMERA ANIMATIONS

Given the target behavior to track, we have computed two key viewpoints, ${\mathbf{q}}_{\text{start}}$ and ${\mathbf{q}}_{\text{goal}}$, where the camera should be at the start and end times of horizon ${H}^{i}$. We now characterize the space of possible camera animations between these two viewpoints. This space is infinite, which makes it difficult to explore and evaluate. Our idea is instead to create a compact representation of this space, by sampling a large set of animation curves with a good coverage of the range of animations. We hereafter denote this stochastic set of camera animations as ${\mathbb{Q}}^{i}$, and a sampled camera animation as ${\mathbf{q}}_{j}^{i}$, where $j$ is the sample index.

Two requirements should be considered for this sampled space: (i) sampled camera animations should be as smooth as possible, i.e. with low jerk, and (ii) the sampled animation space should make it possible to enforce continuity between successive horizons. To do so, we encode each sampled camera animation as a cubic spline curve on all 3 camera position parameters, as such curves offer ${C}^{2}$ continuity between key viewpoints. In practice, we make use of Hermite curves. They provide an easy means to sample the space of possible animations, by simply sampling a set of tangent vectors to the spline curve at the start and end positions. Moreover, ${C}^{1}$ continuity between successive Hermite curve portions is commonly enforced by aligning both positions and tangents at connecting positions; we rely on the same idea.

Figure 5: Example of a part of a visibility-data encoding texture. Black = the target is visible from the camera; Red = the target is occluded or partially occluded from the camera; Blue = the camera is inside the scene geometry

Figure 6: Representation of the positioned animation space, asymmetrically intersected by the scene geometry

In practice, for each camera animation, we complement the starting and goal camera positions ${\mathbf{q}}_{\text{start}}$ and ${\mathbf{q}}_{\text{goal}}$ with two tangents, i.e. the camera velocities ${\dot{\mathbf{q}}}_{\text{start}}$ and ${\dot{\mathbf{q}}}_{\text{goal}}$ (Figure 2). To offer a good coverage of the whole animation space, we use a uniform sampling of these tangents in a sphere of radius $r$ (in our tests, we used $r = 5$). The number of sampled animations is left as a user-defined parameter; note, however, that sufficient coverage might require sampling several hundred animations (an evaluation of results for different values is provided in Section 7.2).

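The sampling step can be sketched as follows, under our own conventions: curves live in the canonical space where ${\mathbf{q}}_{\text{start}} = (0,0,0)$ and ${\mathbf{q}}_{\text{goal}} = (0,0,1)$, and tangents are drawn uniformly in a ball of radius $r$ via rejection sampling. All names are illustrative.

```python
import random

def hermite(p0, p1, m0, m1, s):
    """Point on a cubic Hermite curve at parameter s in [0, 1],
    with endpoints p0, p1 and tangents m0, m1 (per coordinate)."""
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return tuple(h00*p0[i] + h10*m0[i] + h01*p1[i] + h11*m1[i]
                 for i in range(3))

def sample_animation_space(n, r=5.0, seed=0):
    """Sample n candidate camera animations in the canonical space:
    each animation is a Hermite curve whose start and end tangents are
    drawn uniformly in a ball of radius r (rejection sampling)."""
    rng = random.Random(seed)
    def tangent():
        while True:
            v = tuple(rng.uniform(-r, r) for _ in range(3))
            if sum(c*c for c in v) <= r*r:
                return v
    p0, p1 = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
    return [(p0, p1, tangent(), tangent()) for _ in range(n)]
```

Since the graph is precomputed once in this canonical frame, only the per-horizon transform (next paragraph) changes at run time.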
Recomputing such a stochastic graph at every horizon would be costly and/or lack stability over time. Our last proposition is therefore to precompute a graph of uniformly sampled camera animations in an orthonormal coordinate system (as illustrated in Figure 2). In this system, ${\mathbf{q}}_{\text{start}}$ and ${\mathbf{q}}_{\text{goal}}$ have coordinates (0,0,0) and (0,0,1), respectively. Then, for any horizon ${H}^{i}$, we apply a $4 \times 4$ transform matrix ${M}^{i}$ to align the graph onto the computed viewpoints ${\mathbf{q}}_{\text{start}}^{i}$ and ${\mathbf{q}}_{\text{goal}}^{i}$. In ${M}^{i}$, the 3D translation, the 3D rotation and the scaling on the $z$ axis make this axis match the vector $\left( {{\mathbf{q}}_{\text{goal}}^{i} - {\mathbf{q}}_{\text{start}}^{i}}\right)$. Two parameters remain free: the scalings for the other two axes ($x$ and $y$). As a first approximation, we could use the same scaling as for $z$; however, we explain in Section 6 how to choose a better scaling that takes collisions and occlusions into consideration.

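A possible construction of ${M}^{i}$ is sketched below. It is one concrete choice among several: the lateral frame is derived with a simple world-up heuristic (an assumption on our part, not specified in the text), while the translation, rotation and $z$ scaling follow the description above.

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def make_animation_space_transform(q_start, q_goal, scale_x, scale_y):
    """Build a 4x4 matrix M^i (row-major, column-vector convention) mapping
    the canonical animation space, with q_start at (0,0,0) and q_goal at
    (0,0,1), onto the current viewpoints. The z axis is scaled by
    |q_goal - q_start|; scale_x / scale_y are the free lateral scales."""
    d = [q_goal[i] - q_start[i] for i in range(3)]
    n = math.sqrt(sum(c*c for c in d)) or 1.0
    z = [c / n for c in d]
    # pick a lateral frame orthogonal to z using a world-up heuristic
    up = [0.0, 1.0, 0.0] if abs(z[1]) < 0.99 else [1.0, 0.0, 0.0]
    x = cross(up, z)
    xn = math.sqrt(sum(c*c for c in x))
    x = [c / xn for c in x]
    y = cross(z, x)
    # columns: scaled x, y, z axes; last column: translation to q_start
    return [[scale_x*x[0], scale_y*y[0], n*z[0], q_start[0]],
            [scale_x*x[1], scale_y*y[1], n*z[1], q_start[1]],
            [scale_x*x[2], scale_y*y[2], n*z[2], q_start[2]],
            [0.0, 0.0, 0.0, 1.0]]

def apply_transform(M, p):
    """Apply M to a 3D point p (homogeneous coordinate 1)."""
    ph = p + (1.0,)
    return tuple(sum(M[r][c] * ph[c] for c in range(4)) for r in range(3))
```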
§ 5 EVALUATING CAMERA ANIMATIONS

In the first stage, we have computed a set of camera animations ${\mathbb{Q}}^{i}$ that can portray the target object's behavior within time horizon ${H}^{i}$. We now need to select one of these animations as the one to apply to the camera. Our second stage is devoted to evaluating the quality of all animations, and selecting the most promising one, in an efficient way. In the following, we first detail our evaluation criteria, before focusing on how we perform this evaluation.

Figure 7: Projection of the successes and failures of one track on the four axes, with resolution $R = 4$. a) Collision and occlusion detection. b) Enumeration and projection of the success and failure samples on the axes

§ 5.1 CAMERA ANIMATION QUALITY

A proper camera animation to portray the motions of a target object should meet a set of desirable properties, the most important of which are: avoiding collisions with the scene and enforcing visibility of the target object, while offering a smooth series of intermediate viewpoints to the viewer. To evaluate how well these properties are enforced along a camera animation ${\mathbf{q}}_{j}^{i}$, we rely on a set of costs ${C}_{k}\left( t\right) \in \left\lbrack {0,1}\right\rbrack$ :

Occlusions and collisions. To evaluate how much the target object is occluded from a camera position ${q}_{j}^{i}\left( t\right)$, we rely on ray tracing. We firstly approximate the target object's geometry with a simple abstraction (e.g. a sphere). We secondly sample a set of points $s \in \left\lbrack {0,N}\right\rbrack$ on this abstraction, which we position at the object's anticipated position ${B}^{i}\left( t\right)$. We thirdly launch a ray from the camera to each point $s$ (see Figure 4), and note ${R}_{s}\left( t\right)$ the result of this ray launch. At the same time, we use the same ray to evaluate whether the camera is in collision (i.e. inside another object of the scene), by setting its value as:

$$
{R}_{s}\left( t\right) = \left\{ \begin{array}{ll} 0 & \text{ if visible } \\ 1 & \text{ if occluded } \\ 2 & \text{ if collided } \end{array}\right.
$$

We distinguish a collision from a simple occlusion as follows. By looking at the normal of the hit geometry, we know whether the ray has hit a back face or a front face. When the ray hits a back face, ${q}_{j}^{i}\left( t\right)$ must be inside the geometry, hence we consider it a camera collision. Conversely, when the ray hits a front face, ${q}_{j}^{i}\left( t\right)$ must be outside the geometry. If the ray does not reach $s$, we consider $s$ occluded; otherwise we consider it visible.

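The classification rule can be written compactly. This sketch assumes the ray tracer reports the surface normal of the first hit (or nothing when the ray reaches the sample unobstructed); the back-face test is the dot-product sign between the ray direction and that normal.

```python
def classify_ray(ray_dir, hit_normal):
    """Classify one camera-to-sample ray as in Section 5.1.

    hit_normal is None when the ray reached the target sample unobstructed.
    Returns 0 (visible), 1 (occluded) or 2 (camera collision)."""
    if hit_normal is None:
        return 0                       # ray reached the sample: visible
    d = sum(a * b for a, b in zip(ray_dir, hit_normal))
    # a back face (normal pointing along the ray) means the ray started
    # inside the geometry, i.e. the camera itself is in collision
    return 2 if d > 0 else 1
```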
Knowing ${R}_{s}\left( t\right)$ , we define our collision and occlusion costs as:

$$
{C}_{o}\left( t\right) = \frac{1}{N}\mathop{\sum }\limits_{{s = 0}}^{N}\left\{ \begin{array}{ll} 1 & \text{ if }{R}_{s}\left( t\right) = 1 \\ 0 & \text{ otherwise } \end{array}\right.
$$

and

$$
{C}_{c}\left( t\right) = \frac{1}{N}\mathop{\sum }\limits_{{s = 0}}^{N}\left\{ \begin{array}{ll} 1 & \text{ if }{R}_{s}\left( t\right) = 2 \\ 0 & \text{ otherwise } \end{array}\right.
$$

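Both costs are simple averages over the per-sample ray results, which can be sketched as:

```python
def occlusion_collision_costs(ray_results):
    """Compute (C_o(t), C_c(t)) from the per-sample ray results R_s(t),
    where 0 = visible, 1 = occluded, 2 = collided (Section 5.1)."""
    n = len(ray_results)
    c_o = sum(1 for r in ray_results if r == 1) / n   # occluded fraction
    c_c = sum(1 for r in ray_results if r == 2) / n   # collided fraction
    return c_o, c_c
```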
In our tests, we used $N = {20}$ .

Viewpoint variations. Providing a smooth series of intermediate camera viewpoints requires regulating changes between successive viewpoints. We hence evaluate how much the viewpoint changes over time. We split this evaluation into two distinct costs: one on the camera view angle, and one on the distance to the target object. Both are defined for time steps ${\delta t}$ .

Let us first introduce the view vector, from the camera to the target object, on which both costs rely. It is computed as:

$$
{D}_{j}^{i}\left( t\right) = {B}^{i}\left( t\right) - {q}_{j}^{i}\left( t\right)
$$

From this view vector, we define the view angle change as:

$$
{C}_{{\Delta }_{\phi ,\theta }}\left( t\right) = \frac{\left( {D}_{j}^{i}\left( t\right) ,{D}_{j}^{i}\left( t + \delta t\right) \right) }{\pi }
$$

Similarly, we rely on a squared distance variation, defined as:

$$
{\Delta d}\left( t\right) = {\left( \begin{Vmatrix}{D}_{j}^{i}\left( t\right) \end{Vmatrix} - \begin{Vmatrix}{D}_{j}^{i}\left( t + \delta t\right) \end{Vmatrix}\right) }^{2}
$$

We then define a cost on this distance change, which we further normalize as:

$$
{C}_{\Delta d}\left( t\right) = 1 - E\left( {{\Delta d}\left( t\right) ,\lambda }\right)
$$

where $E$ is an exponential decay function, for which we set parameter $\lambda$ to ${10}^{-4}$ .

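For one time step, the two viewpoint-variation costs can be sketched as follows. We assume here that $E(x, \lambda) = e^{-\lambda x}$, which matches the description of an exponential decay but is our reading, not an explicit formula from the text.

```python
import math

def angle_between(u, v):
    """Unsigned angle (x, y) between two vectors, in [0, pi]."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def view_change_costs(b_t, q_t, b_dt, q_dt, lam=1e-4):
    """Per-step costs of Section 5.1: C in [0,1] for the view-angle
    change, and 1 - exp(-lam * delta_d) for the squared distance change.
    b_* are target positions B(t), q_* are camera positions q(t)."""
    d0 = [b - q for b, q in zip(b_t, q_t)]     # view vector D(t)
    d1 = [b - q for b, q in zip(b_dt, q_dt)]   # view vector D(t + dt)
    c_angle = angle_between(d0, d1) / math.pi
    delta_d = (math.sqrt(sum(c * c for c in d0)) -
               math.sqrt(sum(c * c for c in d1))) ** 2
    c_dist = 1.0 - math.exp(-lam * delta_d)
    return c_angle, c_dist
```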
Preferred range of distances. One side effect of the above costs is that at large distances, changes in view angle and distance are less penalized; in turn, this favors large camera animations. Similarly, placing the camera too close to the target object is generally not desired either. We should therefore penalize both behaviors. To do so, we introduce a last cost, aimed at favoring camera animations where the camera remains within a prescribed distance range $\left\lbrack {{d}_{\min },{d}_{\max }}\right\rbrack$ . We formulate this cost as:

$$
{C}_{d}\left( t\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\begin{Vmatrix}{{D}_{j}^{i}\left( t\right) }\end{Vmatrix} \notin \left\lbrack {{d}_{\min },{d}_{\max }}\right\rbrack \\ 0 & \text{ otherwise } \end{array}\right.
$$

§ 5.2 SELECTING A CAMERA ANIMATION

Given the previously introduced costs, we now aim to compute the cost of an entire animation, and to select the most promising one.

In a first step, we define the total cost of a camera animation as a weighted sum of single-criterion costs integrated over time:

$$
C = \mathop{\sum }\limits_{k}{w}_{k} \cdot \left\lbrack {{\int }_{{t}_{i}}^{{t}_{i} + h}{C}_{k}\left( t\right) G\left( {t - {t}_{i},\sigma }\right) {dt}}\right\rbrack
$$

where ${w}_{k} \in \left\lbrack {0,1}\right\rbrack$ is the weight of criterion $k$, and $G$ is a Gaussian decay function whose standard deviation $\sigma$ we set to $h/4$. We also slightly tune the decay to converge toward 0.25 (instead of 0). This way, we give a higher importance to the costs at the beginning of the animation, while still accounting for its end. Indeed, our assumption is that the camera will only play the first part of the animation (10% in our tests), while the remaining part still brings longer-term information on what a good camera path could be. We compute this total cost for every camera animation ${q}_{j}^{i} \in {\mathbb{Q}}^{i}$, and refer to it as ${C}_{j}^{i}$.

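A discrete version of this total cost can be sketched as follows. The Riemann-sum integration and the exact shape of the tuned decay (a Gaussian lifted to a floor of 0.25) are our reading of the description above, not a formula taken from the text.

```python
import math

def total_cost(costs_over_time, weights, t_i, h, floor=0.25):
    """Weighted, time-discounted total cost of a camera animation.

    costs_over_time: dict criterion -> list of (t, cost) samples, sorted
    by t. weights: dict criterion -> w_k in [0, 1]."""
    sigma = h / 4.0

    def g(dt):
        # Gaussian decay tuned to converge toward `floor` instead of 0,
        # so late costs keep some influence on the total
        return floor + (1.0 - floor) * math.exp(-0.5 * (dt / sigma) ** 2)

    total = 0.0
    for k, samples in costs_over_time.items():
        acc = 0.0
        for idx in range(len(samples) - 1):
            t, c = samples[idx]
            dt_step = samples[idx + 1][0] - t    # Riemann-sum integration
            acc += c * g(t - t_i) * dt_step
        total += weights[k] * acc
    return total
```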
In a second step, we choose the most promising camera animation for time horizon ${H}^{i}$, denoted ${\mathbf{q}}^{i}$, as the one with minimum total cost, i.e.:

$$
{\mathbf{q}}^{i} = \underset{j}{\arg \min }{C}_{j}^{i}
$$

§ 5.3 GPU-BASED EVALUATION

We have presented our evaluation metric for camera animations. However, some costs are expensive to compute: in particular, computing occlusions and collisions requires tracing many rays ($N$ rays, for many time steps, for hundreds of camera animations). We now detail how we perform these computations in a very efficient way.

It is worth noting that many computations can be performed in parallel. The evaluations of camera animations are independent from each other. Similarly, the evaluations of a cost at different time steps along a given animation are also independent. Hence, we cast our evaluation of single costs into a massively parallel computation on the GPU.

Firstly, note that we only need to send the animation space (in its orthonormal coordinate system) once to the GPU, when starting the system. Then, when we need to reposition the camera animation space for horizon ${H}^{i}$, we simply update the $4 \times 4$ transform matrix ${M}^{i}$. From this data, one can straightforwardly compute any camera position ${\mathbf{q}}_{j}^{i}\left( t\right)$ for any time $t$.

Secondly, for occlusion and collision computations, we rely on the recent RTX technology, which allows performing real-time ray-tracing queries on the GPU. We run $\frac{{H}^{i}}{\delta t}$ kernels per track, where each kernel launches $N$ rays, one per sample $s$ picked on the target object. The results of these computations are stored in a 2D texture (as shown in Figure 5), where the texture coordinates $u$ and $v$ map to one time step $t$ and one animation of index $j$, respectively. Occlusion and collision costs are stored in two different channels.

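The layout of this texture can be illustrated with a CPU mock: one texel per (animation $j$, time step $t$) pair, whose two channels hold the occlusion and collision costs derived from the per-sample ray classifications. This is an illustration of the data layout only, not the GPU implementation.

```python
def build_cost_texture(n_tracks, n_steps, ray_results):
    """CPU mock of the ray-tracing pass: one 'texel' per (animation j,
    time step t), holding (occlusion, collision) costs in two channels.

    ray_results[j][t]: list of per-sample classifications R_s, where
    0 = visible, 1 = occluded, 2 = collided."""
    tex = [[(0.0, 0.0)] * n_steps for _ in range(n_tracks)]
    for j in range(n_tracks):
        for t in range(n_steps):
            rs = ray_results[j][t]
            n = len(rs)
            tex[j][t] = (sum(r == 1 for r in rs) / n,   # occlusion channel
                         sum(r == 2 for r in rs) / n)   # collision channel
    return tex
```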
Thirdly, we rely on a compute shader to compute all other costs and combine them with the occlusion and collision costs. This shader uses one kernel per camera animation. It stores the total cost of all animations in a GPU buffer, which is finally sent back to the CPU, where we perform the selection step.

§ 6 DYNAMIC TRAJECTORY ADAPTATION

Until now, we have considered a simplified configuration, where we evaluate the animation space and select one camera animation for one given time horizon ${H}^{i}$. We now need to consider two other requirements. First, the camera should be animated to track the target object for an unknown duration, larger than $h$; changes in the target behavior may also occur, due to interactive user inputs. Second, for any horizon ${H}^{i}$, some camera animations could be in collision with the scene, or the target could be occluded, which would prevent finding a proper animation to apply. In other words, the space of potential camera animations should be influenced by the surrounding scene geometry. Hereafter, we explain how we account for these requirements.

§ 6.1 USER INPUTS AND INTERACTIVE UPDATE

We here assume the camera is currently animated along curve ${\mathbf{q}}^{i}$. We then need to compute a new animation, for a new time horizon ${H}^{i + 1}$, in two cases. First, when the target's behavior has changed, which makes the currently played animation invalid for future time steps. Second, when the camera animation has reached a specific duration. Similarly to model-predictive control, we indeed consider the target behavior, as well as the collision and occlusion computations, less reliable after a certain anticipation duration; this in particular allows handling dynamic collisions and occlusions. This update is specified by the user as a ratio of progress along animation ${\mathbf{q}}^{i}$. In our tests, the horizon duration is $h = 5$ seconds and the update ratio is 0.1. In turn, the new horizon generally starts at ${t}_{i + 1} = {t}_{i} + {0.1h}$, while we set ${\mathbf{q}}_{\text{start}}^{i + 1} = {\mathbf{q}}^{i}\left( {t}_{i + 1}\right)$.

Knowing that an update is required, we iterate on the overall process explained earlier, for the next horizon ${H}^{i + 1}$. We select a new goal viewpoint (i.e. ${\mathbf{q}}_{\text{goal}}^{i + 1}$), and update the transform matrix (i.e. ${M}^{i + 1}$) to position the camera animation space ${\mathbb{Q}}^{i + 1}$. We then need to evaluate all camera animations in ${\mathbb{Q}}^{i + 1}$. To do so, we compute all costs presented in Section 5. However, this is not enough, as we also need to enforce continuity between animation ${q}^{i}$ and the animation ${\mathbf{q}}^{i + 1}$ that is to be selected. To do so, we rely on an additional criterion designed to favor a smooth transition between consecutive animations:

Figure 8: Comparison of our system with an adaptive scale, or with a naïve scale, applied on the camera animation space.

Animation transitions. This cost penalizes abrupt changes when transitioning between two camera animation curves. Our idea is to penalize a wide angle between the tangent vector to camera animation ${\mathbf{q}}^{i}$ and the tangent vector to animation ${\mathbf{q}}_{j}^{i + 1} \in {\mathbb{Q}}^{i + 1}$, at connection time ${t}_{i + 1}$. We write this cost as:

$$
{C}_{i,i + 1}\left( j\right) = \frac{\left( {\dot{\mathbf{q}}}^{i}\left( {t}_{i + 1}\right) ,{\dot{\mathbf{q}}}_{j}^{i + 1}\left( {t}_{i + 1}\right) \right) }{\pi }
$$

We then rewrite the selection of camera animation ${q}^{i + 1}$ as:

$$
{\mathbf{q}}^{i + 1} = \underset{j}{\arg \min }\left\lbrack {{C}_{j}^{i + 1} + {w}_{i,i + 1}{C}_{i,i + 1}\left( j\right) }\right\rbrack
$$

where ${w}_{i,i + 1}$ is the relative weight of the transition cost with regard to the other costs.

§ 6.2 ADAPT TO SCENE GEOMETRY

We also want to adapt our camera animation space to the scene geometry. To this aim, on the one hand, we seek a camera animation space that exhibits as few collisions and occlusions as possible. On the other hand, we also seek a space that is not too restricted, i.e. one covering as much as possible of the free space between the target behavior and the scene geometry.

To do so, while evaluating the quality of camera animations for a horizon ${H}^{i}$, we also take the opportunity to analyse how many collisions and occlusions occur. This tells us whether the free space is well covered (or not enough). We then dynamically rescale the camera animation space to make it grow or shrink in the next time horizon ${H}^{i + 1}$. This rescaling applies when we update the transform matrix ${M}^{i + 1}$, and on the $x$ and $y$ axes only. It is worth noting that the free space might not be symmetrical around the target behavior (as illustrated in Figure 6): this free space might, for instance, be larger (or smaller) on the left of the target than on its right, and the same applies to the free space above or below the target. Consequently, we compute four scale values, one for each of the four directions $\{ - x, + x, - y, + y\}$. For any camera position along a camera animation, we then apply two of them, depending on the signs of the position's $x$ and $y$ coordinates in the non-transformed animation space.

We proceed in the following way. We firstly leverage the occlusion and collision evaluation to store additional information: we count failures and successes along each axis. We consider a ray launched along a camera animation (i.e. from the camera position at a given time step) a failure if it is marked as occluded or collided, and a success otherwise. We secondly store this information in eight arrays: for each half-axis (e.g. $+x$ or $-x$), we count successes in one array and failures in another. We further discretize each half-axis at a given resolution $R$, to output two histograms, of failures and successes (as illustrated in Figure 7). Note that $R$ here defines the scale precision on each axis. We lastly use both histograms to compute the new scale to apply. We compute the indices ${i}_{f}$ and ${i}_{s}$ of the medians of both arrays (failures and successes, respectively). By comparing them, we decide how much to rescale animations along this half-axis. If ${i}_{s} < {i}_{f}$, we consider that there are too many failures, and multiply the current scale by ${i}_{f}/R$ to shrink animations. Otherwise, we consider that the free space is not covered enough, and apply a passive inflation to the current scale. The aim of this inflation is to help return to a maximum scale value when the surrounding geometry allows for large camera animations.

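The per-half-axis rescaling rule can be sketched as follows. The median comparison and the ${i}_{f}/R$ shrink factor follow the description above; the inflation factor and maximum scale are our own illustrative choices, as the text does not give their values.

```python
def rescale_half_axis(fail_hist, success_hist, scale, R,
                      inflate=1.05, max_scale=1.0):
    """Adaptive rescaling of one half-axis (+x, -x, +y or -y).

    fail_hist / success_hist: R-bin counts of ray failures/successes,
    indexed by distance from the axis. Shrinks by i_f / R when the median
    success bin is closer than the median failure bin; otherwise applies
    a passive inflation back toward max_scale."""
    def median_bin(hist):
        total = sum(hist)
        if total == 0:
            return len(hist) - 1
        acc = 0
        for i, c in enumerate(hist):
            acc += c
            if 2 * acc >= total:
                return i
        return len(hist) - 1

    i_f, i_s = median_bin(fail_hist), median_bin(success_hist)
    if i_s < i_f:
        return scale * i_f / R            # too many failures: shrink
    return min(scale * inflate, max_scale)  # passive inflation
```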
§ 7 IMPLEMENTATION AND RESULTS

§ 7.1 IMPLEMENTATION

We implemented our camera system within the Unity3D 2019 game engine. We compute our visibility and occlusion textures through ray-tracing shaders provided with Unity's integrated pipeline, and compute our scores for all sampled animations and time steps through Unity Compute Shaders. All our results (detailed in Section 7.2) have been processed on a laptop computer with an Intel Core i9-9880H CPU @ 2.30GHz and an NVIDIA Quadro RTX 4000.

§ 7.2 RESULTS

We split our evaluation into three parts. We firstly validate our adaptive scale mechanism. We secondly evaluate the robustness of our system, by comparing its performance when using different numbers or sets of reference camera animations. We thirdly validate the ability of our system, mixing local and global planning approaches, to outperform a purely local camera planning system. To do so, we compare results obtained with our system and with that of Burg et al. [1], on the same test scenes.

Figure 9: Results for multiple runs, each using a randomly generated camera animation space. This space is sampled with uniform distribution, with 2400 sample camera animations (Hermite curves). Each plot shows the mean value over time (blue), with a 95% confidence interval (red). Left: results for 22 runs using different seeds. Right: results for 10 runs using the same seed.

To validate our adaptive scale, we study its impact on the quality of the animation space. For the other evaluations, we compare camera systems with regard to two main criteria: how well the camera maintains visibility of the target object, and how smooth camera motions are. We compute visibility by launching rays onto the target object and calculating the ratio of rays reaching the target: a ratio of 1 (respectively 0) means that the target is fully visible (respectively fully occluded). When relevant, we additionally provide statistics on the duration of partial occlusions. We then compare the quality of camera motions through their time derivatives (speed, acceleration and jerk), which provide a good indication of motion smoothness.

Our comparisons have been performed within 4 different scenes (illustrated in the accompanying video). We validated our system by using (i) a toy example scene, where the target travels through a maze containing several tight corridors with sharp turns, an open area inside a building, and a ramp. We then performed the comparisons with the technique of Burg et al. [1] by using two static scenes and a dynamic scene which the target goes through: (ii) a scene with a set of columns and a gate (Columns+Gate), (iii) a scene with a set of small and large spheres (Spheres), and (iv) a fully dynamic scene with a set of randomly falling and rolling boxes and a randomly sliding wall (Dynamic). To provide fair comparisons, in the dynamic scene we pre-process the random motions of the boxes and of the wall. As well, for all tests in a scene, we play a pre-recorded trajectory of the target avatar, but let the camera system run as if the avatar were interactively controlled by a user.

§ 7.2.1 IMPACT OF ADAPTIVE SCALE

We validate our adaptive scale (Section 6.2) by comparing results obtained (i) when we compute and apply the adaptive scale on all 4 half-axes $\left( {-x, + x, - y, + y}\right)$, and (ii) when we simply apply the same scale as for the $z$ axis (which we call the naive scale technique). We processed our tests using the toy example scene. For each technique, each time we evaluate a new set of camera animations, we output the new scale values and the ratio of failures on each half-axis. As well, we output and plot the mean cost of the 5 best animations in this set. Results are presented in Figure 8.

Figure 8a shows how our mechanism tightens the animation space (compared to the naive scaling technique) when the avatar enters corridors, and grows it back to the same scale when the avatar reaches less cluttered areas (e.g. the open interior room, or the outdoor area). As expected, our mechanism adapts the scale of the half-spaces in a non-symmetrical way. As shown by Figure 8b, with our adaptive mechanism, the scaled animation space also exhibits fewer failures than with the naive scale technique. As shown by Figure 8c, it also allows finding animations with lower cost most of the time. One exception occurs between 40s and 50s, where the camera configurations differ because the scales differ: in the naive case the camera is high above the character, while in the adaptive case it is closer to the ground, so the scores in this interval are not comparable, the two configurations being too different.

In the next evaluations, we consider that the adaptive scale mechanism is always activated.

§ 7.2.2 ROBUSTNESS

We also study the robustness of our system regarding our randomly generated camera animation space.

In a first step, we evaluate how performances vary if we run our real-time system multiple times, on the toy example scene. We also consider two cases: (i) using the same seed for every run (i.e. the same animation space is used), and (ii) using a new seed for every run (i.e. a new animation space is randomly sampled for each run). For each case, we sample a set of 2400 animations. Results are presented in figure 9. As it shows, with as many sampled animations, all runs lead to very similar results, both on the visibility enforcement, and on the camera motion smoothness. Differences are mainly due to variations in the actual framerate of the game engine, hence the rate at which the system takes new decisions.
|
| 300 |
+
|
| 301 |
+
< g r a p h i c s >
|
| 302 |
+
|
| 303 |
+
Figure 10: Visibility when varying the number of sampled curves in our camera animation space.
In a second step, we evaluate how the size of the animation space (i.e. the number of sampled animations) impacts performance. We ran our system with 4 different sizes: 2400, 1600, 800, or 100 animations. For each size, we performed 5 runs with random seeds, and combined the results in figures 10, 11 and 12. They show that lowering the size (down to 800 animations) still yields good performance: our camera system is able to find a series of camera animations maintaining enough visibility on the target object, through smooth camera motions. As expected, with 100 animations our system performs poorly: it becomes harder to find animations with sufficient visibility while ensuring smooth camera motions. Our intuition behind this result is that as soon as the size becomes too small, the distribution of tangents becomes very sparse, breaking our assumption of a uniform sampling. If the animation space does not sufficiently cover the free space, this prevents the exploration of a wide range of possible camera animations.
§ 7.2.3 COMPARISON TO BURG 2020
We also compare our system, which mixes local and global planning approaches, to a purely local camera planning system. To this aim, we ran our proposed camera system, and the local camera planning system of Burg et al. [1], in 3 different scenes: two static scenes (Columns+Gate and Spheres) and a fully dynamic scene (Dynamic). The Columns+Gate scene is the same as in [1]: the avatar moves between columns and goes through a doorway. In the Spheres scene, the avatar travels through a scene filled with a large set of spheres, which makes it moderately challenging for the camera systems. In the Dynamic scene, the avatar must cross a flat area where a set of boxes are randomly flying, falling, and rolling all over the place, and a wall is randomly sliding. This makes it challenging for camera systems to anticipate the scene dynamics and find occlusion-free and collision-free camera paths.
Figure 11: Camera speed when varying the number of sampled curves in our camera animation space.
In our camera system, we used 2400 animations, the recomputation rate was set to 0.25 s, and the adaptive scaling was on. We present the results of our tests in figures 13, 14, 15, 16, and 17.
We first compare the camera systems on their ability to enforce visibility on the target object (figure 13). Our tests show that for moderately challenging scenes, both lead to relatively good results, with few occlusions. However, for a more challenging scene (Dynamic), our system outperforms Burg et al.'s system. Even if occlusions may occur more often, the degree of occlusion is lower (figure 13b). Moreover, for all 3 scenes, when partial occlusions occur, they are shorter when using our system (figure 13c). This is explained by the fact that when no local solution exists, our system can still find a locally occluded path that respects the other constraints and leads to a less occluded area. This demonstrates our system's ability to better anticipate occlusions, especially in dynamic scenes.
We then compare the smoothness of camera motions in both camera systems. Figure 14 presents, side-by-side, the distributions of speed, acceleration and jerk when using each system. We also provide the speed, acceleration and jerk over time in figures 15, 16, and 17. One observation is that Burg et al.'s system leads to lower camera speeds, as it restricts itself to simply following the avatar. In our camera system, the camera is allowed to move faster, to bypass the avatar when visibility or another constraint would otherwise be poorly satisfied. Yet, our system provides smoother motions (i.e. less jerk). One explanation is that local systems often need to steer the camera out of local minima (e.g. low visibility areas). A side effect is that, over successive iterations, this may lead to indecision about which direction the camera should take to reach better visibility. In turn, this leads to frequent changes in camera acceleration (hence higher jerk). Conversely, our system has more global knowledge of the scene, allowing it to more easily find a better path and avoiding sacrificing the smoothness of camera motions.
Figure 12: Camera jerk when varying the number of sampled curves in our camera animation space.
§ 8 DISCUSSION AND CONCLUSION
Our system presents a number of limitations. Despite its ability to evaluate thousands of trajectories, strongly cluttered environments remain challenging. As smoothness is enforced, visibility may be lost in specific cases, and designing a technique that properly balances the properties in such cases remains to be addressed. Also, while the dynamic scale adaptation does improve results by compressing the trajectories in different half-spaces, low scale values prevent the camera from making larger motions where necessary. Future work could consist in biasing the sampling of the animation space in order to adapt it to the typical local topologies of the 3D environment. Despite these limitations, the proposed work improves over existing contributions by providing an efficient camera tracking technique adapted to dynamic 3D environments that does not require heavy roadmap precomputations.
Figure 13: Comparison between our system and Burg et al. [1], regarding the target object's visibility (a)(b) and, when not fully visible, the duration of partial occlusion (c).
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/NDtf0xHcIem/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,271 @@
# Simulating Mass in Virtual Reality using Physically-Based Hand-Object Interactions with Vibration Feedback

Figure 1: Physics-based interactions with virtual objects using a co-located virtual hand (left) are augmented with vibration feedback proportional to the object's mass and acceleration (right).
## Abstract
Providing a sense of mass for virtual objects using ungrounded haptic interfaces has proven to be a complicated task in virtual reality. This paper proposes using a physically-based virtual hand and a complementary vibrotactile effect on the index fingertip to give a sensation of mass to objects in virtual reality. The vibrotactile feedback is proportional to the net force acting on the virtual object and is modulated based on the object's velocity. To evaluate this method, we set up an experiment in a virtual environment where participants wear a VR headset and attempt to pick up and move different virtual objects using a virtual, physically-based, force-controlled hand, while a voice-coil actuator attached to their index fingertip provides the vibrotactile feedback. Our experiments indicate that the virtual hand and our vibration effect give users the ability to discriminate and perceive the mass of virtual objects.
Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction devices-Haptic devices; Human-centered computing-Human computer interaction (HCI)— Interaction paradigms-Virtual reality
## 1 INTRODUCTION
Virtual Reality (VR) has significantly revolutionized simulated human experiences. VR enables an immersive virtual experience by simulating and triggering most of our senses as if we were present in another environment. Notably, in VR it is possible to see one's own co-located virtual hands, perceive them as one's real hands, and interact with virtual objects [16]. However, virtual objects have no real mass, and the problem is providing the touch and visual cues that we rely on for mass perception. The physical cues include skin stretch and contact pressure at the fingertips (cutaneous feedback) and proprioceptive feedback from multiple muscles and joints (kinesthetic feedback).
Grounded haptic devices can render the necessary forces for kinesthetic and cutaneous haptic feedback. However, their size, weight, and limited workspace restrict free-hand movements, making them less desirable in various VR applications.
Alternatively, ungrounded haptic devices (such as finger-mounted or hand-held devices) can be built more compactly and lighter, making them more convenient to use in a larger workspace. However, sensing the mass of a virtual object in every direction needs more complex ungrounded hardware with higher degrees of freedom, and such devices require multiple actuators and can limit hand and finger movements.
Another approach to overcoming the hardware limitations is to use visio-haptic illusions. These methods aim to trick the brain into perceiving mass by manipulating the objects' visual cues. For example, limiting the virtual object's velocity [1], or scaling its displacement compared to the user's hand [21], have been shown to give a sense of mass to objects. However, these methods are not physically realistic, or they decrease the co-location between the actual and virtual hands.
In this paper, we introduce a novel mass rendering method that combines a visio-haptic technique with a simple finger-mounted vibration actuator. For the visio-haptic part, we replicate the visual cues that humans perceive during real-world hand interactions with physical objects. We use a force-controlled, physically-based virtual hand in VR to interact with virtual objects, which limits how heavy an object the user can pick up and how fast they can accelerate it based on its mass. However, it is difficult to distinguish between light objects using this technique alone, so we complement our visio-haptic method with haptic feedback. The haptic actuator that renders the feedback should be small and compact enough to allow individual fingers to move independently and perform dexterous interactions. We also prefer an ungrounded device, since it allows a larger workspace. One way to reduce the device's size is to use haptic feedback that is directionally invariant to our sense of touch: if the direction of the haptic stimulus were detectable, we would need multiple actuators to render the effect in different directions during a virtual interaction. Therefore, we employ an ungrounded, direction-invariant haptic effect to complement our physically-based virtual hand. We explore using a mechanical vibration feedback effect to achieve ungrounded mass rendering for virtual objects. In our work, while the virtual object is in the user's grasp, a sinusoidal vibration proportional to the object's mass and acceleration is played through an ungrounded voice-coil actuator at the tip of the user's index finger. An overview of the proposed method is shown in Fig. 1.
When moving two objects with different masses, in addition to the physically-based visio-haptic feedback, users feel a proportionally stronger vibration while grasping the heavier object. This vibrotactile feedback gives the user a cue to the net force acting on the virtual object. To make this haptic feedback direction-invariant, we use frequencies above $100\,\mathrm{Hz}$. These frequencies are sensed by Pacini mechanoreceptors, which are not sensitive to the stimuli's direction.
To evaluate the proposed physically-based virtual hand and the vibration feedback, we conducted a user study where participants interact with virtual objects with different masses and perform virtual tasks. Using qualitative and quantitative methods, we show that the physically-based hand gives a sense of mass to virtual objects, and adding the vibration feedback does improve mass perception and discrimination.
The main contribution of this work is the design, development, and evaluation of a novel mass rendering method for virtual objects using physically-based hand-object interactions and vibration feedback.
## 2 Related Work
In this section, we review relevant literature on simulating the mass of virtual objects during a VR experience. Also, we discuss modes of interaction in VR, including physically realistic grasping and interaction.
Grounded haptic devices are highly sought-after in tool-mediated applications where precision and fidelity are essential, such as surgical training [11, 18]. Hand-wearable grounded devices have also been developed. HIRO III [10] is an example of a five-fingered grounded haptic interface, with three DoF for each of its haptic fingers and a 6 DoF base, capable of providing high-precision force feedback to a hand while attached to each of the fingertips. The main challenge with grounded devices is their limited workspace size.
Ungrounded haptic devices are attached to the user's body instead of a fixed point in the room, which allows a larger workspace. These devices are either hand-held or attached to the user's fingers, hands, or body. Minamizawa et al. [19] introduce a fingertip-mounted ungrounded haptic device called the Gravity Grabber that can create a sense of weight when grabbing virtual objects in specific orientations. The Gravity Grabber achieves this using one degree of freedom for shear force feedback and another degree of freedom in the normal direction of the fingertip skin. However, since our skin can detect the direction of skin stretch, this method cannot give a sense of weight to a virtual object in all orientations. Sensing the weight and inertia of a virtual object in all directions requires an ungrounded device with more complex hardware and higher degrees of freedom, such as the works of Chinello et al. [4] and Prattichizzo et al. [20]. However, such devices are mechanically complicated, since they require multiple actuators, and they limit hand and finger movements. In our method, we use a single haptic actuator to render the mass of objects in all directions, since we use sinusoidal vibration feedback.
Hand-held ungrounded devices are desirable for simulating interactions with hand-held tools such as a hammer or a baseball bat. However, they limit the movement of the fingers and the hand. Zenner [26] introduced Drag:on, a custom VR hand controller with two actuated fans, which can dynamically adjust the controller's aerodynamic properties, thereby changing the sensed inertia of a virtual object. Zenner et al. [25] introduce Shifty, a hand-held VR controller with an internal prismatic joint connected to a weight that shifts the center of mass of the device, resulting in different rotational inertia and resistance as the user interacts with various virtual tools. In the work of Lykke et al. [17], users have two hand controllers to pick up round virtual objects (scooping), and they must keep their hands closer together when the objects are heavier. Our method tracks the user's own hand instead of using a VR controller, which increases the sense of ownership and realism of the virtual hand [16] while not limiting the fingers' and hand's mobility.
Humans can use visual cues to determine the weight of a virtual object. Backstrom [1] gives the sensation of mass to virtual objects in VR by limiting the velocity of a virtual object based on how heavy it is. Such constraints on the object's movements are not physically realistic. Dominjon et al. [9] show that manipulating the control-display ratios of virtual objects can change the perceived mass in virtual environments. In other words, if a virtual object's displacement is proportionally increased compared to the user's actual hand, its mass is perceived as lighter than it is and vice versa. Samad et al. [21] utilize the same technique in VR to change the perceived weight of wooden cubes. However, one downside of changing the control-display ratios is that the offset between the actual and the virtual representation of the object increases as the hand gets further away from the initial contact point. Therefore, bi-manual coordination and interaction could become difficult since the virtual hand's relative position is different from the actual hand's, even if it is not moving. Our approach aims to give a sense of mass to objects by using a physically-based virtual hand that enables realistic interactions with virtual objects and preserves the co-location between the virtual and actual palm when the hands are steady or when their acceleration is not changing.
Interaction is an important part of an immersive virtual experience and increases the user's sense of presence [3] [24]. There are various ways to enable interactions between a virtual hand and virtual objects. In gesture- and metaphor-based approaches, the interaction uses specified hand commands. For example, if the virtual hand is in a grasping pose and near a virtual object, that object's orientation follows the virtual hand. Song et al. [23] enable 9 DoF control of a virtual tool using bi-manual gestures. Gesture-based approaches have proven to be robust and effective; however, they are unintuitive and artificial by nature, and therefore not suitable for a physically realistic interaction. Another approach is to use physically-based manipulation techniques. For example, Borst and Indugula [2] propose virtual coupling of the tracked hand to a rigid virtual hand that enables whole-hand grasping. In this method, the palm and finger joints of the tracked hand and the virtual hand are connected to the corresponding parts using linear and torsional virtual spring-dampers. Moreover, since the spring-damper links work by applying a limited and proportional amount of force, this method shares the same physical limitations that a realistic interaction has. We modify this method to preserve the co-location between the virtual and actual palms and evaluate it for mass rendering in VR.
Vibration feedback can be used to simulate different touch stimuli. We use sinusoidal vibrations to render the mass of a virtual object. Asymmetric vibration is another type of vibration feedback that has been used by Choi et al. [5] to simulate weight in VR. These vibrations cause skin stretch, and the user can detect their direction; therefore, multiple actuators are required to simulate weight and inertia in all directions. Moreover, the intensity of these asymmetric vibrations is much stronger (up to $20\,g$, where $g = 9.8\,\mathrm{m\,s^{-2}}$) compared to our vibration feedback (less than $1\,g$). Kildal [13] uses grain mechanical vibrations to create the illusion of compliance for a rigid box. Sinusoidal vibration feedback has been used in other haptic applications, such as simulating a button press on a rigid box [15] and a virtual button in VR [14]. Moreover, Seo et al. [22] simulate a moving cart by adding vibration feedback to a chair and changing the amplitude and frequency of the feedback proportionally to the simulated cart's angular velocity.
Existing mass rendering methods in VR limit hand and finger movements or engage users in unrealistic interactions. Our physically-based interaction is realistic and preserves the co-location of the actual and virtual palms when the hand is under no or constant acceleration, and our vibration feedback works with a single actuator on the fingertip without limiting hand and finger movements.
## 3 FORCE-CONTROLLED VIRTUAL HAND
One of the goals of this paper is to explore the effect of a physically-based interaction on mass perception and discrimination. There is a weight limit on objects in the real world that we can pick up using our hands. Our grip strength and the force that we can apply to a grasped object are bounded. Therefore, there is a limit to how fast we can accelerate an object based on its mass. In VR, we hypothesize that physically-based interaction between the user's virtual hand and object creates a sense of mass for that object. For this purpose, we track the user's hand, couple it with a 3D model of a hand, and use a physically-based simulation for hand-object interactions. We use a vision-based hand tracking system (Leap Motion hand tracker) to allow the user's hand and fingers to move freely, providing a virtual experience analogous to real-world interaction.
For modeling the hand, we consider one rigid palm and five fingers, each of which has three rigid phalanges. Interaction between VR objects and the force-controlled virtual hand is more realistic than interaction between the tracked hand and VR objects: for example, when grasping an object, the tracked hand can go inside the object, while the virtual hand grasps around it. Therefore, we only display the force-controlled virtual hand (VR hand). The VR hand must be co-located and coupled with the tracked hand. To achieve this, rather than a purely geometric approach, we modify the physically-based method described by Borst and Indugula [2]. The physically-based coupling helps us efficiently prevent unrealistic collisions and interactions between the VR hand and objects. In this coupling method, we associate one spring-damper with each rigid component of the fingers. The spring-dampers apply forces to the VR hand's components to match their positions and orientations to the tracked hand's corresponding components. To achieve consistent behavior from the physical simulation, we use a fixed-size VR hand; a fixed hand size does not directly influence efficiency in virtual object manipulation tasks, sense of hand ownership, realism, or immersion in VR [16].
The spring-damper coupling applies both force and torque to the virtual part. The force at time $t$, $\overrightarrow{F}(t)$, is proportional to $\overrightarrow{\Delta}_{\text{Position}}(t)$, the distance between the centers of mass of the two corresponding parts, and the torque at time $t$, $\overrightarrow{\tau}(t)$, is proportional to $\overrightarrow{\Delta}_{\text{Rotation}}(t)$, the difference in their rotations. To prevent the virtual part from overshooting its target position and orientation, the spring-damper also applies a force proportional to $\overrightarrow{V}(t)$, its linear velocity, and a torque proportional to $\overrightarrow{\omega}(t)$, its angular velocity. That gives:
$$
\overrightarrow{F}(t) = k_p^{\prime}\,\overrightarrow{\Delta}_{\text{Position}}(t) - k_d^{\prime}\,\overrightarrow{V}(t), \tag{1}
$$

$$
\overrightarrow{\tau}(t) = k_p^{\prime\prime}\,\overrightarrow{\Delta}_{\text{Rotation}}(t) - k_d^{\prime\prime}\,\overrightarrow{\omega}(t) \tag{2}
$$
where $k_p^{\prime}$, $k_p^{\prime\prime}$, $k_d^{\prime}$ and $k_d^{\prime\prime}$ are the spring-damper coefficients. These parameters are set during preliminary experiments to ensure that the VR hand is responsive, closely and smoothly follows the actual hand, and can pick up a virtual mass of up to $4\,\mathrm{kg}$.
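The per-component coupling of Eqs. 1 and 2 amounts to a proportional-derivative controller. A minimal Python sketch, where the gain values are illustrative placeholders (the paper tunes the coefficients empirically), could look like:

```python
import numpy as np

# Illustrative spring-damper gains (the paper sets these in preliminary
# experiments; these values are placeholders, not the tuned ones).
KP_POS, KD_POS = 500.0, 20.0   # k_p', k_d'   for the position coupling
KP_ROT, KD_ROT = 10.0, 1.0     # k_p'', k_d'' for the rotation coupling

def coupling_force(tracked_pos, virtual_pos, virtual_vel):
    """Eq. 1: force pulling a virtual phalanx toward its tracked counterpart,
    damped by the phalanx's own linear velocity."""
    delta = np.asarray(tracked_pos, dtype=float) - np.asarray(virtual_pos, dtype=float)
    return KP_POS * delta - KD_POS * np.asarray(virtual_vel, dtype=float)

def coupling_torque(delta_rotation, virtual_ang_vel):
    """Eq. 2: aligning torque; delta_rotation is the tracked-vs-virtual
    orientation difference expressed as an axis-angle vector."""
    return KP_ROT * np.asarray(delta_rotation, dtype=float) - KD_ROT * np.asarray(virtual_ang_vel, dtype=float)
```

In a physics engine, these outputs would be applied each simulation step to the corresponding rigid body of the VR hand.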
If we used a similar spring-damper to couple the palms, then when the user holds an object with the VR hand, the distance between the VR hand and the actual hand would increase until the spring-dampers' forces equal the weight of the VR hand and the object it is holding. This causes a discrepancy between the visual and proprioceptive senses. To solve this problem, we introduce an additional term in the spring-damper for the palms:
$$
\overrightarrow{F}_{\text{Palm}}(t) = k_p^{\prime}\,\overrightarrow{\Delta}_{\text{Position}}(t) - k_d^{\prime}\,\overrightarrow{V}(t) + k_i^{\prime}\sum_{j=0}^{t}\overrightarrow{\Delta}_{\text{Position}}(j), \tag{3}
$$

$$
\overrightarrow{\tau}_{\text{Palm}}(t) = k_p^{\prime\prime}\,\overrightarrow{\Delta}_{\text{Rotation}}(t) - k_d^{\prime\prime}\,\overrightarrow{\omega}(t) + k_i^{\prime\prime}\sum_{j=0}^{t}\overrightarrow{\Delta}_{\text{Rotation}}(j) \tag{4}
$$
where $k_i^{\prime}$ and $k_i^{\prime\prime}$ are spring-damper coefficients. The added summation term applies force and torque proportional to the accumulation of $\overrightarrow{\Delta}_{\text{Position}}(t)$ and $\overrightarrow{\Delta}_{\text{Rotation}}(t)$ over time. Therefore, when the user holds an object, $\overrightarrow{F}_{\text{Palm}}(t)$ and $\overrightarrow{\tau}_{\text{Palm}}(t)$ increase until the virtual palm's orientation and position match the tracked palm in the steady state. $k_i^{\prime}$ and $k_i^{\prime\prime}$ are set during preliminary experiments so that the positions and orientations of the coupled palms quickly match when the hand is not accelerating. Also, $k_p^{\prime}$, $k_p^{\prime\prime}$, $k_d^{\prime}$ and $k_d^{\prime\prime}$ are set independently for the palm, since it has different physical properties than the phalanges.
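The position part of the palm coupling (Eq. 3) can be sketched as a controller with an accumulated error term; the discrete sum makes the force grow until a held object's weight is compensated and co-location is restored. The gains below are illustrative placeholders:

```python
import numpy as np

class PalmCoupling:
    """Sketch of Eq. 3: palm spring-damper with an accumulated error term.
    Gains kp, kd, ki are illustrative, not the paper's tuned values."""

    def __init__(self, kp=800.0, kd=40.0, ki=5.0):
        self.kp, self.kd, self.ki = kp, kd, ki
        self.error_sum = np.zeros(3)  # running sum of Delta_Position over steps

    def force(self, tracked_pos, virtual_pos, virtual_vel):
        delta = np.asarray(tracked_pos, dtype=float) - np.asarray(virtual_pos, dtype=float)
        self.error_sum = self.error_sum + delta
        return (self.kp * delta
                - self.kd * np.asarray(virtual_vel, dtype=float)
                + self.ki * self.error_sum)
```

Under a persistent offset (e.g. gravity pulling a held object down), the accumulated term keeps increasing the upward force each step, which is exactly the steady-state behavior the text describes.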

Figure 2: A weak and a strong virtual grip and the corresponding actual hands.
Using a force-controlled virtual hand should give a sense of mass and allow mass discrimination between virtual objects. However, we suspect that this claim is stronger in some scenarios and weaker in others. While grasping and moving a light object, the spring-damper forces counteract the forces of gravity and inertia on the object. Therefore, using our virtual hand, if a user grasps an object with a low virtual mass, they can easily pick it up and quickly move it around the workspace with high acceleration without it slipping out of their grip. For a heavier object, the user can still pick it up, but they have to increase their effort, such as using more fingers for grasping or closing their grip further so the spring-dampers apply more force to the object (Fig. 2). It is also not possible to accelerate it as fast as lighter objects, since the inertial forces are higher and can overcome the spring-dampers in the virtual hand and open the virtual grasp. Depending on the spring-dampers' coefficients, beyond a certain mass it becomes very difficult or eventually impossible for the user to move or pick up the object. We hypothesize that the limit on how fast the user can accelerate the virtual object in hand, and on how challenging it is to pick up, gives the user a sense of the virtual object's mass and enables them to discriminate two objects based on their mass. However, using this technique, it is hard to perceive the difference in mass between two light objects ($< 1\,\mathrm{kg}$), since it is almost effortless to pick both of them up off the ground and move them quickly without dropping them. To overcome this problem, we introduce a vibration feedback effect to complement our VR hand.
## 4 VIBROTACTILE FEEDBACK
In day-to-day physical interactions with real-world objects, we can feel the object's mass and compare it to other heavier or lighter objects through our sense of touch. Virtual experiences that do not provide haptic feedback lack realism compared to real-world experiences. One of the modalities of haptic feedback is vibrotactile feedback in the form of mechanical waves or vibrations.
Our goal is to complement the VR hand in giving the user a perception of an object's mass by communicating the net force they apply to the object. To achieve this without limiting hand and finger movements, we use a single actuator to render our haptic feedback. We use sinusoidal vibration feedback with a frequency range between $100\,\mathrm{Hz}$ and $150\,\mathrm{Hz}$, making it perceivable only by the Pacini mechanoreceptors in the fingertip skin. The Pacini mechanoreceptors cannot detect the direction of the mechanical waves; therefore, a single actuator is sufficient to render our haptic feedback in all directions.
We strap a VCA (voice-coil actuator) to the fingertip of the index finger. We chose the index finger because it has a critical role in picking up objects with a pinch grasp. Other fingers, such as the thumb and the middle finger, can also play an important role in grasping; however, attaching voice-coil actuators to multiple fingers limits the relative movement of the fingertips and manual dexterity.
While a user grasps an object, we render the vibration feedback $O(t)$ with frequency $O(t)_F$. The amplitude of $O(t)$ is proportional to the object's mass $M$ and acceleration $A(t)$. This gives:
$$
O(t) = \alpha\, M A(t) \sin\left(2\pi\, O(t)_F\, t\right), \tag{5}
$$
where $\alpha$ is a scaling constant controlling the range of the vibration energy perceived by the user. The vibration feedback should be just strong enough that users can perceive the vibration when slowly moving the lightest weight in the scene. The value of $\alpha$ also depends on the hardware components of the haptic chain, such as the signal amplifier and the haptic actuator. For our setup, we set $\alpha$ such that, if the user accelerates a $1\,\mathrm{kg}$ object at $1\,g$, the measured vibration at the fingertip is on average $0.32\,g$, which allows users to perceive the vibration feedback when slowly moving the lightest weight ($0.25\,\mathrm{kg}$) in our experiments. The frequency of the output signal, $O(t)_F$, dynamically changes from $100\,\mathrm{Hz}$ to $150\,\mathrm{Hz}$ based on the velocity of the virtual object $V(t)$, which gives:
$$
O_F(t) = \min\left(150,\; 100\,\frac{\left|V(t)\right| + 2}{2}\right), \tag{6}
$$

where at speeds near zero, the signal's frequency is $100\,\mathrm{Hz}$, and as the speed increases to about one $\mathrm{m/s}$, it rises to $150\,\mathrm{Hz}$. To ensure a smooth vibration signal, we apply a second-order Butterworth low-pass filter to $V(t)$ and $A(t)$. The filter runs at a sample rate of $1000\,\mathrm{Hz}$ with a corner frequency of $20\,\mathrm{Hz}$ ($-3\,\mathrm{dB}$ gain at $20\,\mathrm{Hz}$).

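As a concrete sketch of how Eqs. 5 and 6 fit together with the filtering step, the snippet below synthesizes the vibration signal from sampled velocity and acceleration. All constants and function names are illustrative, not the authors' implementation, and the phase is accumulated sample by sample so the sinusoid stays continuous while its frequency changes.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 1000      # haptic sample rate in Hz, matching the paper's filter rate
ALPHA = 0.1    # scaling constant alpha; an illustrative value, not the authors'

# Second-order Butterworth low-pass with a 20 Hz corner at 1000 Hz.
B, A = butter(N=2, Wn=20, fs=FS, btype="low")

def vibration_signal(mass, velocity, acceleration):
    """Render O(t) for one grasp episode.

    mass         -- object mass M in kg
    velocity     -- sampled object speeds |V(t)| in m/s
    acceleration -- sampled accelerations |A(t)| in m/s^2
    """
    v = lfilter(B, A, np.abs(velocity))        # smoothed V(t), input to Eq. 6
    acc = lfilter(B, A, np.abs(acceleration))  # smoothed A(t), input to Eq. 5

    # Eq. 6: 100 Hz near rest, ramping to a 150 Hz ceiling at about 1 m/s.
    freq = np.minimum(150.0, 100.0 * (v + 2.0) / 2.0)

    # Accumulate phase so the sinusoid is continuous under a varying frequency.
    phase = 2.0 * np.pi * np.cumsum(freq) / FS

    # Eq. 5: amplitude proportional to the net force M * A(t).
    return ALPHA * mass * acc * np.sin(phase)
```

Because the amplitude is linear in $M$, doubling the mass doubles the signal for the same motion, which is the property the later sorting tasks rely on.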
We set the signal's amplitude proportional to $MA(t)$, which, according to Newton's second law of motion, represents the net force acting on the virtual object. In our method, we ignore balanced or counteracted forces acting on an object, since the counteracted forces from grasping can be similar between a light and a heavy object: we can grip a light object just as hard as a heavier one.

During a virtual experience, the voice-coil actuator is always strapped to the user's index fingertip. However, the vibration feedback is rendered only when the user's virtual hand grasps a virtual object, not during free-hand motions in the scene. To detect whether the user is grasping a virtual object, we check whether the virtual object is off the ground and touching the virtual hand's palm and the distal joint of the thumb, index, or middle finger. If grasping is detected, the vibration feedback is rendered through the voice-coil actuator.

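The grasp test described above can be sketched as a simple predicate; the contact labels here are hypothetical names for the simulated hand parts, not identifiers from the authors' code.

```python
# Hand parts whose contact (together with the palm) counts as a grasp.
DISTAL_JOINTS = {"thumb_distal", "index_distal", "middle_distal"}

def is_grasping(contacts, object_off_ground):
    """Return True when the object is off the ground and touches both the
    palm and the distal joint of the thumb, index, or middle finger.

    contacts -- set of hand-part names currently in contact with the object
    """
    if not object_off_ground:
        return False
    if "palm" not in contacts:
        return False
    return bool(contacts & DISTAL_JOINTS)
```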
Whenever the system detects that the user is no longer grasping a virtual object, the vibration feedback stops. However, in a physical simulation, even while the user is grasping the object, the hand parts may momentarily lose contact with the virtual object for a few simulation cycles, which can cause on/off pulses in the vibration feedback. To avoid these impulse noises in our signal, we stop the vibration feedback only after no grasping has been detected for ten milliseconds.

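The ten-millisecond rule can be implemented as a small hold-off timer. A minimal sketch, with hypothetical names, assuming the check runs once per simulation step:

```python
class GraspDebouncer:
    """Keep the vibration on through contact dropouts shorter than hold_ms."""

    def __init__(self, hold_ms=10):
        self.hold_ms = hold_ms
        self.last_grasp_ms = None  # timestamp of the most recent detected grasp

    def vibration_on(self, grasp_detected, now_ms):
        if grasp_detected:
            self.last_grasp_ms = now_ms
        if self.last_grasp_ms is None:
            return False  # nothing has been grasped yet
        # Stay on while the dropout is still within the hold window.
        return now_ms - self.last_grasp_ms <= self.hold_ms
```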

Figure 3: The output voltage of the vibration feedback for two virtual objects with mass values $0.5\,\mathrm{kg}$, $O_1(t)$, and $1\,\mathrm{kg}$, $O_2(t)$, during an arbitrary shaking movement with acceleration $A(t)$ and velocity $V(t)$.

When the user picks up two virtual objects with different mass values and moves them around the scene with the same motion, the vibration effect is stronger for the heavier object than for the lighter one, proportional to their mass difference. In other words, the user feels more energetic mechanical vibrations on their skin when interacting with a heavier object. We suspect users perceive these vibrations as a resistance force to acceleration (similar to the force of inertia), which leads them to perceive the mass of virtual objects.

A limitation of the force-controlled hand is that if we take two light virtual objects, one twice as heavy as the other, it is difficult to perceive the mass difference, since both masses are well within the threshold of what the virtual hand can grasp and move around the VR scene. However, with the presented vibration feedback, the vibration at the user's skin for the heavier object has twice the amplitude (Fig. 3). As a result, we expect the user to perceive the mass difference between the objects based on the vibration feedback.

## 5 EVALUATION

We evaluate our VR mass rendering techniques and verify our claims using both qualitative and quantitative measurements. We conducted a user study in which participants interacted with virtual objects using the force-controlled co-located virtual hand and performed several object manipulation and comparison tasks. Moreover, we study the effect of the proposed vibration feedback on participants' ability to perceive virtual objects' masses and compare them by heaviness. More specifically, we assess these two hypotheses in our evaluations:

- Grasping and manipulating virtual objects using a co-located physically-based hand model in virtual reality gives a sense of mass perception and allows some degree of mass discrimination between virtual objects.

- The proposed vibration feedback can improve the sense of mass perception and enhance mass discrimination precision during virtual interactions between a physically-based virtual hand and virtual objects.


Figure 4: The voice-coil actuator is strapped to the index fingertip of the user's dominant hand.

To examine the validity of the first hypothesis, participants perform virtual tasks involving interactions with objects of different mass values using the VR hand. However, evaluating the results of the VR hand interactions alone is not enough to validate our first hypothesis. The virtual environment runs in a physics engine, and users might get other clues to the difference in mass between objects that do not come from the VR hand interactions. These clues include how the objects interact with each other, how they bounce when dropped on the virtual ground, and the speed at which they fall in the presence of air friction. To control the experiment for these additional cues, we ask participants to interact with each object individually and not push or touch an object using another. Additionally, we add a control interaction mode to our platform, called the spherical cursor. In this mode, instead of a co-located hand, users only see a spherical cursor co-located with the center of their palm. If the spherical cursor is within an object and the user puts their hand in a grasp pose, that object follows the cursor around the virtual scene until the user opens their hand. During grasping with the spherical cursor, we move the object by applying a force to it in the cursor's direction. This force is proportional to the object's mass; as a result, objects with different masses follow the cursor with the same speed and acceleration. Therefore, comparing the quantitative and qualitative results from user interactions using the force-controlled hand versus the spherical cursor as a baseline allows us to validate the first hypothesis.

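The mass-proportional force of this control mode can be sketched as a PD controller whose commanded acceleration is independent of mass; the gains and names below are illustrative assumptions, not the study's values.

```python
import numpy as np

K_P = 50.0   # proportional gain toward the cursor (illustrative)
K_D = 10.0   # damping gain (illustrative)

def cursor_force(mass, obj_pos, obj_vel, cursor_pos):
    """Force that drags a grasped object toward the spherical cursor.

    Scaling the commanded acceleration by the mass makes F / m, and hence
    the object's motion, identical for light and heavy objects.
    """
    accel_cmd = K_P * (cursor_pos - obj_pos) - K_D * obj_vel
    return mass * accel_cmd
```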
To test the second hypothesis, participants interact with virtual objects using the force-controlled hand both with and without the vibration feedback, which allows us to compare the results and analyze the effectiveness of the vibrotactile feedback in mass perception and discrimination.

### 5.1 Setup

In this subsection, we describe the study setup's hardware and software components and the range of mass values we use for our virtual objects. We use the MMXC-HF VCA by Tactile Labs, a relatively compact tactile actuator ($36\,\mathrm{mm} \times 9.5\,\mathrm{mm} \times 9.5\,\mathrm{mm}$), and the Tactile Labs QuadAmp multi-channel signal amplifier. A pair of thin wires connects the VCA to the signal amplifier placed on a nearby table. The cables from the actuator point outwards from the user's finger, limiting the chance of the cables touching the user's hands during virtual interactions. Using a 3D-printed mount, we attach the voice-coil actuator to the user's index fingertip (Fig. 4). We use the PC-powered Oculus Rift as our VR interface, which allows for external PC-based graphical computation. For tracking the user's hands, we attach a Leap Motion controller to the front side of the Oculus Rift VR headset.

In our system, we use Bullet [8] as our physics engine. One desirable feature of the Bullet library is that it permits controlling the virtual hand by applying virtual force and torque from an external source. This feature enables us to implement the virtual coupling between our virtual hand and the tracked hand.

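A minimal sketch of such a virtual coupling, assuming a spring-damper between the tracked pose and the simulated hand; the gains are invented here, and in Bullet the result would be applied each step through a call such as `applyCentralForce`.

```python
import numpy as np

K_SPRING = 800.0  # coupling stiffness (illustrative, not the paper's coefficient)
K_DAMP = 40.0     # coupling damping (illustrative)

def coupling_force(tracked_pos, hand_pos, hand_vel):
    """Spring-damper force pulling the simulated hand toward the tracked hand.

    The same pattern, applied to the orientation error, yields the
    coupling torque.
    """
    return K_SPRING * (tracked_pos - hand_pos) - K_DAMP * hand_vel
```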

Figure 5: A participant interacting with a virtual object while wearing the VR headset with the Leap Motion hand tracker, the VCA, and noise-canceling headphones.

To render the virtual scene to the VR headset and work with the Bullet physics simulation, we use the Chai3D library. Chai3D [6] is a platform-agnostic haptics, visualization, and interactive real-time simulation library. Moreover, it supports visualization using the Oculus Rift headset and has built-in Bullet physics integration, making it ideal for immersive and physically realistic haptic experiences.

In our study, we use cubes as our virtual objects' shape since they are easier to grasp. During our experiments, there may be multiple virtual cubes in the scene with different mass values. When setting the mass range for our experiments, we must consider the physics engine that we use. The Bullet physics engine recommends keeping the mass of objects around $1\,\mathrm{kg}$ and avoiding very large or small values [7]. During our preliminary experiments, we set the virtual coupling coefficients so that users could pick up virtual cubes with masses up to $4\,\mathrm{kg}$; past that point, picking up the virtual cubes becomes too difficult. Since we expect users to be able to interact with and pick up any virtual cube in the scene, we chose $2.5\,\mathrm{kg}$ as the upper mass limit in our user studies for the heaviest objects and $0.25\,\mathrm{kg}$ as the lower mass limit for the lightest objects.

### 5.2 Participants

Ten participants (5 female, 5 male) took part in this study. All participants were right-handed. Three participants had never used VR headsets before; one participant used them a few times per week, and the rest at most a few times per year. Seven of them had interacted with virtual objects during their VR experiences, and three had used haptic devices in VR games and applications. This study was approved by the University of Calgary Conjoint Faculties Research Ethics Board (REB18-0708). Participants received \$20 as compensation for taking part in this user study.

### 5.3 Study

We begin the study by spending a few minutes (under eight) familiarizing the participants with the VR headset, the Leap Motion hand tracker, and the virtual study environment. After placing the haptic actuator on their dominant hand's fingertip, they practice picking up and moving a virtual cube (1.25 kg) using the virtual co-located hand. We ask participants to always use their index finger in grasping, since the haptic actuator is attached to it. They are also encouraged to engage more fingers or tighten their grip to increase the grasping strength and to move the training object around the scene both slowly and quickly. For consistency, we ask the participants to use only their dominant hand to interact with the virtual elements in the scene once the tasks start. During the virtual tasks, participants wear active noise-canceling headphones playing white noise to block any audible signal from the haptic actuator (Fig. 5).


Figure 6: Two virtual cubes with random weights are placed in front of the participant to compare. The co-located spherical cursor mode is active, and the "Vibration Off" label indicates to the participant that they should not expect any vibration from the voice-coil actuator.


Figure 7: Three virtual cubes with random weights are placed in front of the participant to sort in ascending order from left to right. The "Vibration On" label indicates to the participant that they should expect vibration from the voice-coil actuator when picking up objects.

In the first task, we present participants with six pairs of cubes and ask them to interact with, grasp, and move the objects, and to think aloud about the experience. Furthermore, we ask them to compare the two cubes based on their mass and say whether they feel the cubes have the same mass or whether one is slightly or considerably (or to whatever degree they perceive it) heavier than the other. Participants interact with the virtual objects using the three interaction modes in the following order: spherical cursor, virtual hand without vibration feedback, and virtual hand with vibration feedback. As an example, Fig. 6 shows this task's setup while the interaction mode is set to the spherical cursor. For each interaction mode, participants compare two pairs of cubes. One pair has the largest mass difference given our mass range ($0.25$ and $2.5\,\mathrm{kg}$), and the other pair has a smaller mass difference ($0.25$ and $0.5\,\mathrm{kg}$). The system randomly decides whether the pair with the smaller or the larger mass difference is presented first and randomly places the two cubes on the table for each set to avoid learning from previous rounds.

In the next part, we ask participants to sort virtual cubes based on their mass. In sorting, a higher number of objects means the participant spends more time picking up and moving objects around the scene, which results in a fuller user experience in comparing weights. However, a higher number of objects also increases the average time to complete the task, limiting the number of sorting rounds users can perform during a study session. Our preliminary experiments showed that three cubes offer a reasonable balance between sorting time and user interaction with objects.

We quantized our mass range ($0.25\,\mathrm{kg}$ to $2.5\,\mathrm{kg}$) into two weight-sets of size three. Having more than one weight-set allows a more in-depth analysis of the interaction modes across our mass range. Weber's law states that the difference in magnitude needed to discriminate between a base stimulus and other stimuli increases proportionally to the intensity of the base stimulus [12]. We can easily differentiate a $0.5\,\mathrm{kg}$ mass from a $1\,\mathrm{kg}$ mass, but it is harder to distinguish a $10\,\mathrm{kg}$ mass from a $10.5\,\mathrm{kg}$ one, even though both pairs have the same weight difference. Therefore, we chose our mass values with equal ratios between them using a geometric series. That gives a light weight-set ($0.25\,\mathrm{kg}$, $0.44\,\mathrm{kg}$, $0.79\,\mathrm{kg}$) and a heavy weight-set ($0.79\,\mathrm{kg}$, $1.4\,\mathrm{kg}$, $2.5\,\mathrm{kg}$).

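The two weight-sets follow directly from quantizing the range with a constant ratio. A short sketch (the function name is ours) that reproduces the values above:

```python
def weight_sets(m_min=0.25, m_max=2.5, n=5):
    """Quantize [m_min, m_max] into n masses with equal ratios between
    neighbors (a geometric series), then split them into the overlapping
    light and heavy triples used in the study."""
    ratio = (m_max / m_min) ** (1.0 / (n - 1))  # here 10 ** 0.25
    masses = [round(m_min * ratio ** k, 2) for k in range(n)]
    return masses[:3], masses[2:]

light, heavy = weight_sets()
```

This yields a light set of 0.25, 0.44, and 0.79 kg and a heavy set of 0.79, 1.41, and 2.5 kg; the paper reports the middle heavy mass rounded to 1.4 kg.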
Participants sort random permutations of the light and the heavy weight-set using the three interaction modes (spherical cursor, virtual hand without vibration feedback, and virtual hand with vibration feedback), giving six sorting modes in total. As an example, Fig. 7 shows this task's setup while the interaction mode is set to the virtual hand with vibration feedback. In all sorting modes, three virtual cubes of similar appearance and size are placed on a virtual surface, and participants have to arrange them from left to right in ascending order of perceived mass. Participants perform six rounds of sorting for each mode. During each round, the sorting modes are ordered randomly to remove the learning effect between modes. Before the sorting task begins, we rotate between the modes to familiarize the participant with the scene. Furthermore, we ask participants to grasp each object at least once before finalizing their decision. We also recommend keeping each sort under a minute; however, this is not a hard limit.

When the sorting task finishes, participants fill out a questionnaire regarding their experience during the two virtual tasks. After participants fill out the questionnaire, we ask them to elaborate on their answers during a semi-structured interview. Our post-session questionnaire is as follows (each question is repeated for each of the interaction modes):

- While interacting with objects, I could perceive their mass. 1 to 5 (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree)

- I could feel one cube was heavier than the other. 1 to 5 (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree)

- How was your confidence level in sorting objects? 1 to 5 (Not Confident at All ... Very Confident)

- How realistic were the interactions with objects? 1 to 5 (Very Unrealistic, Unrealistic, Neutral, Realistic, Very Realistic)

- Would you recommend experiencing the "" in VR games during interactions with virtual objects? 1 to 5 (Do Not Recommend at All ... Highly Recommend)

### 5.4 Results

We show the sorting results in the form of confusion matrices in Fig. 8. Using the nonparametric Kruskal-Wallis test, we analyze the statistical significance of the difference between the placement distributions of light, medium, and heavy objects for each of the sorting modes. For the spherical cursor (control mode), we observe statistically insignificant p-values of 0.463 for the heavy weight-set and 0.800 for the light weight-set, showing that users could not discriminate between weights in this mode. For the virtual hand with no vibration feedback, we see statistically insignificant results for the light weight-set (p-value 0.928); however, for the heavy weight-set, we see a significant effect of the virtual hand on sorting (p-value $< 0.001$). In the case of sorting using the virtual hand with vibration feedback, we see a significant effect on sorting both for the light (p-value $< 0.001$) and the heavy (p-value $< 0.001$) weight-set. Regarding the first hypothesis, we see a significant improvement for the heavy weight-set compared to the control mode (spherical cursor); however, the same cannot be said for the light weight-set. Regarding the second hypothesis, we see a statistically significant improvement for the light weight-set with the vibration feedback compared to only using the virtual hand. However, for the heavy weight-set, we see significant effects from the virtual hand both with and without the vibration feedback. Therefore, to check whether the observed improvements in sorting precision for the light, medium, and heavy objects are significant, we perform a row-by-row comparison between the two confusion matrices using the Wilcoxon rank-sum test. Comparing the number of correct sorts for the heavy weight (54 correct sorts versus 33) gives a statistically significant p-value of $< 0.001$; for the medium weight (44 correct sorts versus 25), the p-value is $< 0.001$; and for the light weight (48 correct sorts versus 34), the p-value is $< 0.01$, which shows that for the heavy weight-set the vibration feedback improvement is statistically significant as well.

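Both tests are available in SciPy; the placement vectors below are fabricated for illustration only (1 = leftmost/lightest slot, 3 = rightmost/heaviest), not the study's data.

```python
from scipy.stats import kruskal, ranksums

# Hypothetical per-round slot placements for the three cubes in one sorting mode.
light_slots  = [1, 1, 2, 1, 1, 3, 1, 2, 1, 1]
medium_slots = [2, 2, 1, 3, 2, 2, 2, 1, 2, 2]
heavy_slots  = [3, 3, 3, 2, 3, 1, 3, 3, 3, 3]

# Kruskal-Wallis: do the placement distributions of the three cubes differ?
h_stat, p_kruskal = kruskal(light_slots, medium_slots, heavy_slots)

# Wilcoxon rank-sum: row-by-row comparison between two placement samples.
stat, p_pair = ranksums(light_slots, heavy_slots)
```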

Figure 8: Sorting results of the six different sorting modes in the form of confusion matrices. The top three matrices show the sorting results for the light weight-set (0.25 kg, 0.44 kg, 0.79 kg), and the bottom three show the sorting results for the heavy weight-set (0.79 kg, 1.4 kg, 2.5 kg). From left to right, the matrices represent the three interaction modes (spherical cursor, virtual hand with no vibration, virtual hand with vibration). The matrices' diagonals show the number of times the objects were sorted correctly.


Figure 9: Users compare the sense of mass perception and discrimination between the three interaction modes in the post-session questionnaire. The bars represent the mean answer, and the black lines show the standard deviation.

The results of the questionnaire in Fig. 9 show that participants reported an improvement in mass perception and discrimination when the vibration feedback was enabled compared to only using the virtual hand. P6 (Participant #6) mentioned, "With the hand no vibration, it was harder to tell the difference in mass, but I think you could still, it was realistic enough that it was engaging, but the vibration one I'm not if it's like a mental thing, it just helps a lot more with the differentiating between the different masses and the movements". We also see neutral results for the spherical cursor. Generally, participants mentioned they could not differentiate between the objects using the spherical cursor. P2 mentioned, "It was harder for me to use the cursor to compare the weights, most of the time I thought they were like identical". For the virtual hand without the vibration feedback, participants on average expressed neutral opinions regarding its ability to give them a sense of mass perception and discrimination; however, the results from the sorting task show they performed better than with the control. Also, some participants mentioned different cues that enabled them to differentiate between weights. P5 mentioned, "I'm picking it up, how long would it slide, ok hold it, I shake it around it slides faster ... if I hold it, it slips faster then it's heavier", and P6 said, "(with the virtual hand) if I grab it loose the heavy one just drops as opposed to the light one stays in even if I'm shaking it", and "looking at the movement, if I'm moving my hand it's a bit slower it just feels heavier versus if it's a quick it just feels lighter".


Figure 10: Users compare the sorting confidence, sense of realism, and gaming experience between the three interaction modes in the post-session questionnaire. The bars represent the mean answer, and the black lines show the standard deviation.

Fig. 10 shows that participants expressed having more confidence in sorting when the vibration feedback was enabled. However, without the vibration feedback, they expressed neutral confidence. Furthermore, participants generally stated that the vibration feedback added to the interaction's realism and that the virtual hand's interactions were realistic. P4 said "For the vibration also, I felt like it helped me, felt like it's more real, I'm touching things, not just I'm seeing that I'm touching things". Furthermore, participants expressed interest in experiencing the vibration effect in virtual reality games.

Finally, we asked the participants how interacting with virtual objects felt when they vibrated. P2 said, "if felt like it has resistancy to move, based on that I felt like it's heavier, might be heavier", and P7 mentioned, "When I picked a cube with vibration, I could feel that something is trying to, I don't know, annoy me bother me, might be something like the gravity taking it back to the ground, it feels that I should put more energy to pick it up", and further elaborated, "the one that without vibration I just pick it with two fingers I played with that, but the one with vibration when I tried to pick it with two fingers, suddenly I tried to keep it with all my fingers because I thought that it might slides and drops."

Overall, our findings indicate that the presence of the force-controlled virtual hand, both with and without the vibration effect, gives a sense of weight discrimination and perception. However, the virtual hand without vibration feedback is only effective for heavier objects closer to the hand strength threshold. Furthermore, the virtual hand with the vibration effect improves the sense of weight perception and discrimination for both lighter and heavier objects without negatively affecting the realism of the experience. Therefore, our results validate our hypotheses.

## 6 CONCLUSION

Rendering the mass of objects in virtual reality without limiting hand movements is a challenging task. In this paper, we propose using a force-controlled hand in VR to give a sense of mass perception and discrimination by enabling physically realistic hand-object interactions. We also propose a complementary vibration effect, proportional to the object's mass and acceleration, to improve the sense of mass perception and discrimination. We conducted a user study and performed qualitative and quantitative analyses that support our hypotheses. The physically-based virtual hand can give a sense of mass perception and discrimination for heavier objects closer to the upper limit of its grasping strength. Furthermore, the vibration feedback greatly enhances mass perception and discrimination for a wider mass range in our study while improving the interaction's realism.

## 7 FUTURE WORK

One potential future direction for this research is to analyze the mass discrimination ability of the virtual hand and the vibration effect for a broader mass range and different mass ratios between objects. Moreover, we are interested in analyzing the vibration effect's behavioral influence on the user's movements during virtual interactions.

## REFERENCES

[1] E. Bäckström. Do you even lift? An exploratory study of heaviness perception in virtual reality. Master's thesis, 2018. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230681.

[2] C. W. Borst and A. P. Indugula. A spring model for whole-hand virtual grasping. Presence: Teleoperators and Virtual Environments, 15(1):47-61, 2006. doi: 10.1162/pres.2006.15.1.47

[3] D. A. Bowman and L. F. Hodges. Evaluation of techniques for grabbing and manipulating remote objects in immersive virtual environments. Proceedings of the Symposium on Interactive 3D Graphics, pp. 35-38, 1997. doi: 10.1145/253284.253301

[4] F. Chinello, C. Pacchierotti, M. Malvezzi, and D. Prattichizzo. A three revolute-revolute-spherical wearable fingertip cutaneous device for stiffness rendering. IEEE Transactions on Haptics, 11(1):39-50, 2018. doi: 10.1109/TOH.2017.2755015

[5] I. Choi, H. Culbertson, M. R. Miller, A. Olwal, and S. Follmer. Grabity: A wearable haptic interface for simulating weight and grasping in virtual reality. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST), pp. 119-130, 2017. doi: 10.1145/3126594.3126599

[6] F. Conti, F. Barbagli, R. Balaniuk, M. Halg, C. Lu, D. Morris, L. Sentis, J. Warren, O. Khatib, and K. Salisbury. The CHAI libraries. In Proceedings of Eurohaptics 2003, pp. 496-500. Dublin, Ireland, 2003.

[7] E. Coumans. Bullet 2.83 Physics SDK Manual, 2015.

[8] E. Coumans. Bullet physics simulation. In ACM SIGGRAPH 2015 Courses, p. 1. 2015.

[9] L. Dominjon, A. Lécuyer, J. M. Burkhardt, P. Richard, and S. Richir. Influence of control/display ratio on the perception of mass of manipulated objects in virtual environments. Proceedings - IEEE Virtual Reality, 2005:19-26, 2005.

[10] T. Endo, H. Kawasaki, T. Mouri, Y. Ishigure, H. Shimomura, M. Matsumura, and K. Koketsu. Five-fingered haptic interface robot: HIRO III. IEEE Transactions on Haptics, 4(1):14-27, 2011. doi: 10.1109/TOH.2010.62

[11] F. G. Hamza-Lup, C. M. Bogdan, D. M. Popovici, and O. D. Costea. A survey of visuo-haptic simulation in surgical training. eLmL - International Conference on Mobile, Hybrid, and On-line Learning, pp. 57-62, 2011.

[12] E. Kandel, S. Mack, T. Jessell, J. Schwartz, S. Siegelbaum, and A. Hudspeth. Principles of Neural Science, Fifth Edition, chap. 21, p. 451. McGraw-Hill Education, 2013.

[13] J. Kildal. Kooboh: Variable tangible properties in a handheld haptic-illusion box. EuroHaptics 2012, Part II, LNCS, 7283:191-194, 2012.

[14] H. Kim, R. C. Park, H. B. Yi, and W. Lee. HapCube: A tactile actuator providing tangential and normal pseudo-force feedback on a fingertip. ACM SIGGRAPH 2018 Emerging Technologies, SIGGRAPH 2018, pp. 1-13, 2018. doi: 10.1145/3214907.3214922

[15] S. Kim and G. Lee. Haptic feedback design for a virtual button along force-displacement curves. UIST 2013 - Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, pp. 91-96, 2013. doi: 10.1145/2501988.2502041

[16] L. Lin and S. Jörg. The effect of realism on the virtual hand illusion. Proceedings - IEEE Virtual Reality, 2016:217-218, 2016. doi: 10.1109/VR.2016.7504731

[17] J. R. Lykke, A. B. Olsen, P. Berman, J. A. Bærentzen, and J. R. Frisvad. Accounting for object weight in interaction design for virtual reality. 2019.

[18] R. McCloy and R. Stone. Science, medicine, and the future: Virtual reality in surgery. BMJ, 323(7318):912-915, 2001. doi: 10.1136/bmj.323.7318.912

[19] K. Minamizawa, S. Fukamachi, H. Kajimoto, N. Kawakami, and S. Tachi. Gravity grabber: Wearable haptic display to present virtual mass sensation. ACM SIGGRAPH 2007: Emerging Technologies, SIGGRAPH '07, pp. 3-6, 2007. doi: 10.1145/1278280.1278289

[20] D. Prattichizzo, F. Chinello, C. Pacchierotti, and M. Malvezzi. Towards wearability in fingertip haptics: A 3-DoF wearable device for cutaneous force feedback. IEEE Transactions on Haptics, 6(4):506-516, 2013. doi: 10.1109/TOH.2013.53

[21] M. Samad, E. Gatti, A. Hermes, H. Benko, and C. Parise. Pseudo-haptic weight: Changing the perceived weight of virtual objects by manipulating control-display ratio. Conference on Human Factors in Computing Systems - Proceedings, pp. 1-13, 2019. doi: 10.1145/3290605.3300550

[22] J. Seo, S. Mun, J. Lee, and S. Choi. Substituting motion effects with vibrotactile effects for 4D experiences. In Conference on Human Factors in Computing Systems - Proceedings, vol. 2018-April. Association for Computing Machinery, 2018. doi: 10.1145/3173574.3174002

[23] P. Song, W. B. Goh, W. Hutama, C.-W. Fu, and X. Liu. A handle bar metaphor for virtual object manipulation with mid-air interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1297-1306, 2012.

[24] P. van der Straaten. Interaction affecting the sense of presence in virtual reality. 2000.

[25] A. Zenner and A. Krüger. Shifty: A weight-shifting dynamic passive haptic proxy to enhance object perception in virtual reality. IEEE Transactions on Visualization and Computer Graphics, 23(4):1312-1321, 2017. doi: 10.1109/TVCG.2017.2656978

[26] A. Zenner and A. Krüger. Drag:on - A virtual reality controller providing haptic feedback based on drag and weight shift. Conference on Human Factors in Computing Systems - Proceedings, 2019. doi: 10.1145/3290605.3300441

papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/NDtf0xHcIem/Initial_manuscript_tex/Initial_manuscript.tex
§ SIMULATING MASS IN VIRTUAL REALITY USING PHYSICALLY-BASED HAND-OBJECT INTERACTIONS WITH VIBRATION FEEDBACK
Figure 1: Physics-based interactions with virtual objects using a co-located virtual hand (left) are augmented with vibration feedback proportional to the object's mass and acceleration (right).
§ ABSTRACT
Providing a sense of mass for virtual objects using ungrounded haptic interfaces has proven to be a complicated task in virtual reality. This paper proposes using a physically-based virtual hand together with a complementary vibrotactile effect on the index fingertip to give virtual objects a sensation of mass. The vibrotactile feedback is proportional to the net force acting on the virtual object, and its frequency is modulated by the object's velocity. To evaluate this method, we set up an experiment in a virtual environment where participants wear a VR headset and attempt to pick up and move different virtual objects using a physically-based, force-controlled virtual hand while a voice-coil actuator attached to their index fingertip provides the vibrotactile feedback. Our experiments indicate that the virtual hand and our vibration effect enable users to discriminate and perceive the mass of virtual objects.
Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction devices-Haptic devices; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality
§ 1 INTRODUCTION
Virtual Reality (VR) has significantly changed how we simulate human experiences. VR enables an immersive experience by simulating and triggering most of our senses as if we were present in another environment. Notably, in VR it is possible to see one's own co-located virtual hands, perceive them as one's real hands, and interact with virtual objects [16]. However, virtual objects have no real mass, so the challenge is to provide the touch and visual cues that we rely on for mass perception. The physical cues include skin stretch and contact pressure at the fingertips (cutaneous feedback) and proprioceptive feedback from multiple muscles and joints (kinesthetic feedback).
Grounded haptic devices can render the necessary forces for kinesthetic and cutaneous haptic feedback. However, their size, weight, and limited workspace restrict free-hand movements, making them less desirable in various VR applications.
Alternatively, ungrounded haptic devices (such as finger-mounted or hand-held devices) can be built smaller and lighter, making them more convenient to use in a larger workspace. However, sensing the mass of a virtual object in every direction requires more complex ungrounded hardware with higher degrees of freedom, and such devices need multiple actuators and can limit hand and finger movements.
Another approach to overcoming the hardware limitations is to use visio-haptic illusions. These methods aim to trick the brain into perceiving mass by manipulating the objects' visual cues. For example, limiting the virtual object's velocity [1] or scaling its displacement relative to the user's hand [21] has been shown to give a sense of mass to objects. However, these methods are either not physically realistic or decrease the co-location between the actual and virtual hands.
In this paper, we introduce a novel mass rendering method that combines a visio-haptic technique with a simple finger-mounted vibration actuator. For the visio-haptic part, we replicate the visual cues that humans perceive during real-world hand interactions with physical objects. We use a force-controlled, physically-based virtual hand in VR to interact with virtual objects, which limits how heavy an object the user can pick up and how fast they can accelerate it based on its mass. However, it is difficult to distinguish between light objects using this technique alone, so we complement it with haptic feedback. The haptic actuator that renders the feedback should be small and compact enough to let individual fingers move independently and perform dexterous interactions. We also prefer an ungrounded device since it allows a larger workspace. One way to reduce the device's size is to use haptic feedback that is directionally invariant to our sense of touch: if the direction of the haptic stimulus were detectable by the sense of touch, we would need multiple actuators to render the effect in different directions during a virtual interaction. Therefore, we employ an ungrounded, direction-invariant haptic effect to complement our physically-based virtual hand. Specifically, we explore a mechanical vibration effect to achieve ungrounded mass rendering for virtual objects. While the virtual object is in the user's grasp, a sinusoidal vibration proportional to the object's mass and acceleration is played through an ungrounded voice-coil actuator at the tip of the user's index finger. An overview of the proposed method is shown in Fig. 1.
When moving two objects with different masses, in addition to the physically-based visio-haptic feedback, users feel a proportionally stronger vibration while grasping the heavier object. This vibrotactile feedback gives the user a cue to the net force acting on the virtual object. To make the feedback direction-invariant, we use frequencies above 100 Hz. These frequencies are sensed by the Pacini mechanoreceptors, which are not sensitive to the direction of the stimulus.
To evaluate the proposed physically-based virtual hand and the vibration feedback, we conducted a user study in which participants interact with virtual objects of different masses and perform virtual tasks. Using qualitative and quantitative methods, we show that the physically-based hand gives a sense of mass to virtual objects and that adding the vibration feedback improves mass perception and discrimination.
The main contribution of this work is the design, development, and evaluation of a novel mass rendering method for virtual objects using physically-based hand-object interactions and vibration feedback.
§ 2 RELATED WORK
In this section, we review relevant literature on simulating the mass of virtual objects during a VR experience. Also, we discuss modes of interaction in VR, including physically realistic grasping and interaction.
Grounded haptic devices are highly sought after in tool-mediated applications where precision and fidelity are essential, such as surgical training [11, 18]. Hand-wearable grounded devices have also been developed. HIRO III [10] is an example of a five-fingered grounded haptic interface, with three DoF for each of its haptic fingers and a 6-DoF base, capable of providing high-precision force feedback while attached to each of the fingertips. The main challenge with grounded devices is their limited workspace.
Ungrounded haptic devices are attached to the user's body instead of a fixed point in the room, which allows a larger workspace. These devices are either hand-held or attached to the user's fingers, hands, or body. Minamizawa et al. [19] introduce a fingertip-mounted ungrounded haptic device called the Gravity Grabber that can create a sense of weight when grabbing virtual objects in specific orientations. The Gravity Grabber achieves this using one degree of freedom for shear force feedback and another in the direction normal to the fingertip skin. However, since our skin can detect the direction of skin stretch, this method cannot give a sense of weight to a virtual object in all orientations. Sensing the weight and inertia of a virtual object in all directions requires an ungrounded device with more complex hardware and higher degrees of freedom, such as the devices of Chinello et al. [4] and Prattichizzo et al. [20]. However, such devices are mechanically complicated, since they require multiple actuators and limit hand and finger movements. In our method, a single haptic actuator renders the mass of objects in all directions because we use sinusoidal vibration feedback.
Hand-held ungrounded devices are desirable for simulating interactions with hand-held tools such as a hammer or a baseball bat. However, they limit the movement of the fingers and the hand. Zenner and Krüger [26] introduced Drag:on, a custom VR hand controller with two actuated fans that can dynamically adjust the controller's aerodynamic properties, thereby changing the sensed inertia of a virtual object. Zenner and Krüger [25] also introduce Shifty, a hand-held VR controller with an internal prismatic joint connected to a weight that shifts the device's center of mass, resulting in different rotational inertia and resistance as the user interacts with various virtual tools. In the work of Lykke et al. [17], users pick up round virtual objects with two hand controllers (scooping) and must keep their hands closer together when the objects are heavier. Our method tracks the user's own hand instead of using a VR controller, which increases the sense of ownership and realism of the virtual hand [16] while not limiting finger and hand mobility.
Humans can use visual cues to determine the weight of a virtual object. Backstrom [1] gives the sensation of mass to virtual objects in VR by limiting a virtual object's velocity based on how heavy it is. Such constraints on the object's movements are not physically realistic. Dominjon et al. [9] show that manipulating the control-display ratio of virtual objects can change the perceived mass in virtual environments. In other words, if a virtual object's displacement is proportionally increased compared to the user's actual hand, the object is perceived as lighter than it is, and vice versa. Samad et al. [21] utilize the same technique in VR to change the perceived weight of wooden cubes. However, one downside of changing the control-display ratio is that the offset between the actual hand and the virtual representation increases as the hand gets further away from the initial contact point. Therefore, bi-manual coordination and interaction can become difficult, since the virtual hand's relative position differs from the actual hand's even when it is not moving. Our approach instead gives a sense of mass by using a physically-based virtual hand that enables realistic interactions with virtual objects and preserves the co-location between the virtual and actual palm when the hands are steady or their acceleration is constant.
Interaction is an important part of an immersive virtual experience and increases the user's sense of presence [3, 24]. There are various ways to enable interactions between a virtual hand and virtual objects. In gesture- and metaphor-based approaches, the interaction uses specified hand commands; for example, if the virtual hand is in a grasping pose and near a virtual object, that object's orientation follows the virtual hand. Song et al. [23] enable 9-DoF control of a virtual tool using bi-manual gestures. Gesture-based approaches have proven robust and effective; however, they are unintuitive and artificial by nature and therefore unsuitable for physically realistic interaction. Another approach is to use physically-based manipulation techniques. For example, Borst and Indugula [2] propose virtual coupling of the tracked hand to a rigid virtual hand that enables whole-hand grasping. In this method, the palm and finger joints of the virtual hand are connected to the corresponding parts of the tracked hand using linear and torsional virtual spring-dampers. Moreover, since the spring-damper links apply only a limited, proportional amount of force, this method shares the physical limitations of a realistic interaction. We modify this method to preserve the co-location between the virtual and actual palms and evaluate it for mass rendering in VR.
Vibration feedback can be used to simulate different touch stimuli. We use sinusoidal vibrations to render the mass of a virtual object. Asymmetric vibration is another type of vibration feedback, used by Choi et al. [5] to simulate weight in VR. These vibrations cause skin stretch, and the user can detect their direction; therefore, multiple actuators are required to simulate weight and inertia in all directions. Moreover, the intensity of these asymmetric vibrations is much stronger (up to $20\,g$, where $g = 9.8\ \mathrm{m/s^2}$) compared to our vibration feedback (less than $1\,g$). Kildal [13] uses grain-like mechanical vibrations to create the illusion of compliance for a rigid box. Sinusoidal vibration feedback has been used in other haptic applications, such as simulating a button press on a rigid box [15] and a virtual button in VR [14]. Moreover, Seo et al. [22] simulate a moving cart by adding vibration feedback to a chair and changing the vibration's amplitude and frequency in proportion to the simulated cart's angular velocity.
Existing mass rendering methods in VR either limit hand and finger movements or engage users in unrealistic interactions. Our physically-based interaction is realistic and preserves the co-location of the actual and virtual palms when the hand is under zero or constant acceleration, and our vibration feedback works with a single actuator on the fingertip without limiting hand and finger movements.
§ 3 FORCE-CONTROLLED VIRTUAL HAND
One of the goals of this paper is to explore the effect of a physically-based interaction on mass perception and discrimination. In the real world, there is a limit to the weight we can pick up with our hands: our grip strength and the force we can apply to a grasped object are bounded, so there is a limit to how fast we can accelerate an object based on its mass. In VR, we hypothesize that a physically-based interaction between the user's virtual hand and an object creates a sense of mass for that object. For this purpose, we track the user's hand, couple it with a 3D hand model, and use a physically-based simulation for hand-object interactions. We use a vision-based hand tracking system (the Leap Motion hand tracker) so that the user's hand and fingers can move freely, providing a virtual experience analogous to real-world interaction.
For modeling the hand, we consider one rigid palm and five fingers, each with three rigid phalanges. Interaction between VR objects and the force-controlled virtual hand is more realistic than interaction between the tracked hand and VR objects; for example, when grasping an object, the tracked hand can penetrate the object, but the virtual hand closes around it. Therefore, we only display the force-controlled virtual hand (VR hand). The VR hand must be co-located and coupled with the tracked hand. To achieve this, rather than a purely geometric approach, we modify the physically-based method described by Borst and Indugula [2]. The physically-based coupling efficiently prevents unrealistic collisions and interactions between the VR hand and objects. In this coupling method, we associate one spring-damper with each rigid component of the fingers. The spring-dampers apply force to the VR hand's components to match their positions and orientations to the tracked hand's corresponding components. To achieve consistent behavior from the physical simulation, we use a fixed-size VR hand; a fixed hand size does not directly influence efficiency in virtual object manipulation tasks, sense of hand ownership, realism, or immersion in VR [16].
The spring-damper coupling applies both force and torque to the virtual part. The force at time $t$, $\vec{F}(t)$, is proportional to $\vec{\Delta}_{\mathrm{Position}}(t)$, the distance between the centers of mass of the two corresponding parts, and the torque at time $t$, $\vec{\tau}(t)$, is proportional to $\vec{\Delta}_{\mathrm{Rotation}}(t)$, the difference in their rotations. To prevent the virtual part from overshooting its target position and orientation, the spring-damper also applies a force proportional to $\vec{V}(t)$, its linear velocity, and a torque proportional to $\vec{\omega}(t)$, its angular velocity. This gives:
$$
\vec{F}(t) = k_p' \, \vec{\Delta}_{\mathrm{Position}}(t) - k_d' \, \vec{V}(t), \tag{1}
$$

$$
\vec{\tau}(t) = k_p'' \, \vec{\Delta}_{\mathrm{Rotation}}(t) - k_d'' \, \vec{\omega}(t) \tag{2}
$$
where $k_p'$, $k_p''$, $k_d'$, and $k_d''$ are the spring-damper coefficients. These parameters are set during preliminary experiments to ensure that the VR hand is responsive, closely and smoothly follows the actual hand, and can pick up virtual masses of up to 4 kg.
If we used a similar spring-damper to couple the palms, then when the user holds an object with the VR hand, the distance between the VR hand and the actual hand would increase until the spring-dampers' forces equal the weight of the VR hand and the object it is holding. This causes a discrepancy between the visual and proprioceptive senses. To solve this problem, we introduce an additional term in the spring-damper for the palms:
$$
\vec{F}_{\mathrm{Palm}}(t) = k_p' \, \vec{\Delta}_{\mathrm{Position}}(t) - k_d' \, \vec{V}(t) + k_i' \sum_{j=0}^{t} \vec{\Delta}_{\mathrm{Position}}(j), \tag{3}
$$

$$
\vec{\tau}_{\mathrm{Palm}}(t) = k_p'' \, \vec{\Delta}_{\mathrm{Rotation}}(t) - k_d'' \, \vec{\omega}(t) + k_i'' \sum_{j=0}^{t} \vec{\Delta}_{\mathrm{Rotation}}(j) \tag{4}
$$
where $k_i'$ and $k_i''$ are additional coupling coefficients. The added summation term applies force and torque proportional to the accumulation of $\vec{\Delta}_{\mathrm{Position}}(t)$ and $\vec{\Delta}_{\mathrm{Rotation}}(t)$ over time. Therefore, when the user holds an object, $\vec{F}_{\mathrm{Palm}}(t)$ and $\vec{\tau}_{\mathrm{Palm}}(t)$ increase until the virtual palm's position and orientation match the tracked palm in the steady state. $k_i'$ and $k_i''$ are set during preliminary experiments so that the positions and orientations of the coupled palms quickly match when the hand is not accelerating. Also, $k_p'$, $k_p''$, $k_d'$, and $k_d''$ are set independently for the palm, since it has different physical properties than the phalanges.
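As an illustration, the linear part of the coupling can be sketched as a per-part PD controller with an extra integral term for the palm; the function name and the gain values below are placeholders, not the coefficients used in the study (the torque side of Eqs. 2 and 4 is the exact rotational analog):

```python
import numpy as np

def coupling_force(kp, kd, d_pos, vel, ki=0.0, error_history=()):
    """Spring-damper force on one virtual hand part (Eqs. 1 and 3).

    kp, kd, ki    -- illustrative stand-ins for k'_p, k'_d, k'_i
    d_pos         -- position error between tracked and virtual part
    vel           -- linear velocity of the virtual part
    error_history -- accumulated position errors; only the palm uses
                     the integral term (pass ki=0 for the phalanges)
    """
    force = kp * np.asarray(d_pos, dtype=float) - kd * np.asarray(vel, dtype=float)
    if ki:
        # palm only: integral term removes the steady-state offset
        force = force + ki * np.sum(error_history, axis=0)
    return force
```

Phalanges use the first two terms only; under a constant load, their steady-state error stays nonzero, which is exactly the discrepancy the palm's integral term is introduced to eliminate.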
Figure 2: A weak and a strong virtual grip and the corresponding actual hands.
Using a force-controlled virtual hand should give a sense of mass and allow mass discrimination between virtual objects. However, we suspect this claim is stronger in some scenarios than in others. While grasping and moving a light object, the spring-damper forces counteract gravity and inertia on the object. Therefore, if a user grasps an object with a low virtual mass, they can easily pick it up and quickly move it around the workspace with high acceleration without it slipping out of their grip. For a heavier object, the user can still pick it up, but they must increase their effort, for example by using more fingers for grasping or closing their grip further so the spring-dampers apply more force on the object (Fig. 2). Also, they cannot accelerate it as fast as lighter objects, since the inertial forces are higher and can overcome the spring-dampers in the virtual hand and open the virtual grasp. Depending on the spring-dampers' coefficients, beyond a certain mass it becomes very difficult or eventually impossible for the user to move or pick up the object. We hypothesize that the limits on how fast the user can accelerate the virtual object in hand and on how hard it is to pick up give the user a sense of the virtual object's mass and enable them to discriminate two objects based on their masses. However, with this technique alone it is hard to perceive the mass difference between two light objects ($< 1$ kg), since it is almost effortless to pick both up off the ground and move them quickly without dropping them. To overcome this problem, we introduce a vibration feedback effect to complement our VR hand.
§ 4 VIBROTACTILE FEEDBACK
In day-to-day physical interactions with real-world objects, we can feel the object's mass and compare it to other heavier or lighter objects through our sense of touch. Virtual experiences that do not provide haptic feedback lack realism compared to real-world experiences. One of the modalities of haptic feedback is vibrotactile feedback in the form of mechanical waves or vibrations.
Our goal is to complement the VR hand in giving the user a perception of an object's mass by communicating the net force they apply to the object. To achieve this without limiting hand and finger movements, we use a single actuator to render our haptic feedback. We use sinusoidal vibration feedback with a frequency range between 100 Hz and 150 Hz, making it perceivable only by the Pacini mechanoreceptors in the fingertip skin. The Pacini mechanoreceptors cannot detect the direction of the mechanical waves; therefore, a single actuator is sufficient to render our haptic feedback in all directions.
We strap a voice-coil actuator (VCA) to the fingertip of the index finger. We chose the index finger because it plays a critical role in picking up objects with a pinch grasp. Other fingers, such as the thumb and the middle finger, can play an important role in grasping as well; however, attaching voice-coil actuators to multiple fingers limits the relative movement of the fingertips and manual dexterity.
While a user grasps an object, we render the vibration feedback $O(t)$ with frequency $O_F(t)$. The amplitude of $O(t)$ is proportional to the object's mass $M$ and acceleration $A(t)$. This gives:
$$
O(t) = \alpha \, M A(t) \sin\!\left(2 \pi t \, O_F(t)\right), \tag{5}
$$
where $\alpha$ is a scaling constant that controls the range of vibration energy perceived by the user. The vibration feedback should be just strong enough that users can perceive the vibration when slowly moving the lightest weight in the scene. The value of $\alpha$ also depends on the hardware components of the haptic chain, such as the signal amplifier and the haptic actuator. For our setup, we set $\alpha$ so that, if the user accelerates a 1 kg object at $1\,g$, the measured vibration at the fingertip is on average $0.32\,g$, which allows users to perceive the vibration feedback when slowly moving the lightest weight (0.25 kg) in our experiments. The frequency of the output signal, $O_F(t)$, changes dynamically from 100 Hz to 150 Hz based on the velocity of the virtual object, $V(t)$:
$$
O_F(t) = \min\!\left(150, \; 100 \, \frac{\left| V(t) \right| + 2}{2}\right), \tag{6}
$$
where at speeds near zero the signal's frequency is 100 Hz, and as the speed increases to about 1 m/s it rises to 150 Hz. To ensure a smooth vibration signal, we apply a second-order Butterworth low-pass filter to $V(t)$ and $A(t)$. The filter has a sample rate of 1000 Hz and a corner frequency of 20 Hz ($-3$ dB amplification at 20 Hz).
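A minimal sketch of the amplitude and frequency rules in Eqs. 5 and 6; the scaling constant `ALPHA` and the function names are our own placeholders (in practice $\alpha$ is calibrated to the amplifier and actuator, and the inputs are the Butterworth-filtered object states):

```python
import math

ALPHA = 0.05  # placeholder scaling constant (alpha in Eq. 5)

def vibration_frequency(speed):
    """Eq. 6: 100 Hz near rest, rising to a 150 Hz cap around 1 m/s."""
    return min(150.0, 100.0 * (abs(speed) + 2.0) / 2.0)

def vibration_sample(mass, accel, speed, t):
    """One sample of the feedback signal O(t) from Eq. 5, given the
    low-pass-filtered acceleration and speed of the grasped object."""
    return ALPHA * mass * accel * math.sin(
        2.0 * math.pi * t * vibration_frequency(speed))
```

For the same motion, doubling the object's mass doubles the sample amplitude, which is the basis of the discrimination effect discussed below.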
We set the signal's amplitude proportional to $M A(t)$, which, according to Newton's second law of motion, represents the net force acting on the virtual object. We ignore balanced or counteracted forces acting on the object, since the counteracting forces from grasping can be similar for a light and a heavy object; for example, we can grip a light object just as hard as a heavier one.
During a virtual experience, the voice-coil actuator is always strapped to the user's index fingertip. However, the vibration feedback is rendered only while the user's virtual hand grasps a virtual object, not during free-hand motions in the scene. To detect whether the user is grasping a virtual object, we check whether the object is off the ground and touching the virtual hand's palm and the distal joint of the thumb, index, or middle finger. If grasping is detected, the vibration feedback is rendered through the voice-coil actuator.
Whenever the system detects that the user is no longer grasping a virtual object, the vibration feedback stops. However, in a physical simulation, even while the user is grasping the object, the hand parts may momentarily lose contact with it for a few simulation cycles, which would cause on/off pulses in our vibration feedback. To avoid this impulse noise in the signal, we stop the vibration feedback only after no grasping has been detected for ten milliseconds.
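This release debounce can be sketched as a small state holder; the 10 ms hold time follows the text, while the class and method names are our own:

```python
class GraspDebouncer:
    """Keeps vibration on until grasp contact has been absent for
    hold_ms, filtering momentary contact losses in the simulation."""

    def __init__(self, hold_ms=10.0):
        self.hold_ms = hold_ms
        self.last_grasp_ms = None  # time of the most recent contact

    def vibration_on(self, grasping, now_ms):
        if grasping:
            self.last_grasp_ms = now_ms
            return True
        if self.last_grasp_ms is None:
            return False  # never grasped yet
        # stay on only while the contact gap is within the hold window
        return (now_ms - self.last_grasp_ms) <= self.hold_ms
```

Called once per haptic update, this suppresses single-cycle contact dropouts while still cutting the signal promptly on a real release.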
Figure 3: The output voltage of the vibration feedback for two virtual objects with masses 0.5 kg, $O_1(t)$, and 1 kg, $O_2(t)$, during an arbitrary shaking movement with acceleration $A(t)$ and velocity $V(t)$.
When the user picks up two virtual objects with different mass values and moves them around the scene with the same motion, the vibration effect is stronger for the heavier object in proportion to the mass difference. In other words, the user feels more energetic mechanical vibrations on their skin when interacting with a heavier object. We suspect users perceive these vibrations as a resistance to acceleration (similar to the force of inertia), which leads them to perceive the mass of virtual objects.
The limitation of the force-controlled hand is that for two light virtual objects, one twice as heavy as the other, it is difficult to perceive the mass difference, since both masses are well within the threshold of what the virtual hand can grasp and move around the VR scene. With the presented vibration feedback, however, the vibration at the user's skin for the heavier object has twice the amplitude (Fig. 3). As a result, we expect the user to perceive the mass difference between the objects based on the vibration feedback.
§ 5 EVALUATION
We evaluate our VR mass rendering techniques and verify our claims using both qualitative and quantitative measurements. We conducted a user study in which participants interact with virtual objects using the force-controlled, co-located virtual hand and perform several object manipulation and comparison tasks. Moreover, we study the effect of the proposed vibration feedback on participants' ability to perceive virtual objects' masses and compare them by heaviness. More specifically, we assess these two hypotheses in our evaluations:
* Grasping and manipulating virtual objects using a co-located physically-based hand model in virtual reality gives a sense of mass perception and allows some degree of mass discrimination between virtual objects.
* The proposed vibration feedback can improve the sense of mass perception and enhance mass discrimination precision during virtual interactions between a physically-based virtual hand and virtual objects.
Figure 4: The voice-coil actuator is strapped to the index fingertip of the user's dominant hand.
To examine the validity of the first hypothesis, participants perform virtual tasks involving interactions with objects of different mass values using the VR hand. However, evaluating the results of the VR hand interactions alone is not enough to validate our first hypothesis. The virtual environment runs in a physics engine, and users might get cues to the mass difference between objects that do not come from the VR hand interactions. These cues include how the objects interact with each other, how they bounce when dropped on the virtual ground, and the speed at which they fall in the presence of air friction. To control the experiment for these additional cues, we ask participants to interact with each object individually and not to push or touch one object with another. Additionally, we add a control interaction mode to our platform, called the spherical cursor. In this mode, instead of a co-located hand, users only see a spherical cursor co-located with the center of their palm. If the spherical cursor is within an object and the user puts their hand in a grasp pose, that object follows the cursor around the virtual scene until the user opens their hand. During grasping with the spherical cursor, we move the object by applying a force in the cursor's direction; this force is proportional to the object's mass, so objects with different masses follow the cursor with the same speed and acceleration. Comparing the quantitative and qualitative results of user interactions with the force-controlled hand against the spherical cursor as a baseline therefore allows us to validate the first hypothesis.
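The mass-proportional cursor force can be sketched as follows; scaling the applied force by the object's mass cancels mass out of the resulting acceleration, which is what makes this mode a mass-neutral baseline. The gain constant and function name are illustrative placeholders:

```python
import numpy as np

CURSOR_GAIN = 50.0  # placeholder acceleration gain (1/s^2)

def cursor_force(mass, obj_pos, cursor_pos):
    """Force pulling a grasped object toward the spherical cursor.
    Scaling by mass makes the resulting acceleration mass-independent."""
    direction = np.asarray(cursor_pos, dtype=float) - np.asarray(obj_pos, dtype=float)
    return mass * CURSOR_GAIN * direction

# acceleration = force / mass is identical for a light and a heavy object
```

A practical implementation would also add a damping term on the object's velocity to avoid oscillation, but that does not change the mass-independence of the motion.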
To test the second hypothesis, participants interact with virtual objects using the force-controlled hand both with and without the vibration feedback, which allows us to compare the results and analyze the effectiveness of the vibrotactile feedback in mass perception and discrimination.
§ 5.1 SETUP

In this subsection, we describe the study setup's hardware and software components and the range of mass values we use for our virtual objects. We use the MMXC-HF VCA by Tactile Labs, a relatively compact tactile actuator (36 mm × 9.5 mm × 9.5 mm), and the Tactile Labs QuadAmp multi-channel signal amplifier. A pair of thin wires connects the VCA to the signal amplifier placed on a nearby table. The cables from the actuator point outwards from the user's finger, limiting the chance of the cables touching the user's hands during virtual interactions. Using a 3D-printed mount, we attach the voice coil actuator to the user's index fingertip (Fig. 4). We use the PC-powered Oculus Rift as our VR interface, which allows for external PC-based graphical computation. For tracking the user's hands, we attach a Leap Motion controller to the front of the Oculus Rift VR headset.

In our system, we use the Bullet physics simulation [8] as our physics engine. One desirable feature of the Bullet library is that it permits controlling the virtual hand by applying virtual force and torque from an external source. This feature enables us to implement the virtual coupling between our virtual hand and the tracked hand.
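
A virtual coupling of this kind is commonly implemented as a spring-damper that pulls the simulated hand toward the tracked pose. The 1D sketch below is illustrative only: the gains, mass, and time step are assumed values, and the actual system applies full 6-DoF forces and torques through Bullet:

```python
K_P = 800.0  # assumed spring stiffness (N/m)
K_D = 40.0   # assumed damping coefficient (N*s/m)

def coupling_force(tracked_pos, hand_pos, hand_vel):
    """Spring-damper force pulling the simulated hand toward the tracked hand."""
    return K_P * (tracked_pos - hand_pos) - K_D * hand_vel

def step(hand_pos, hand_vel, tracked_pos, mass=0.4, dt=0.001):
    """One explicit-Euler step of the coupled hand (1D stand-in for Bullet)."""
    accel = coupling_force(tracked_pos, hand_pos, hand_vel) / mass
    hand_vel += accel * dt
    hand_pos += hand_vel * dt
    return hand_pos, hand_vel

# The simulated hand converges to the tracked position:
pos, vel = 0.0, 0.0
for _ in range(5000):
    pos, vel = step(pos, vel, tracked_pos=0.1)
print(round(pos, 3))  # 0.1
```

Because the coupling transmits only finite forces, a heavy object resists the hand's motion more than a light one, which is the physical basis for mass perception in this design.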
Figure 5: A participant interacting with a virtual object while wearing the VR headset with the Leap Motion hand tracker, the VCA, and noise-canceling headphones.
To render the virtual scene to the VR headset and work with the Bullet physics simulation, we use the Chai3D library. Chai3D [6] is a platform-agnostic haptics, visualization, and interactive real-time simulation library. Moreover, it supports visualizing using the Oculus Rift headset and has built-in Bullet physics integration, making it ideal for immersive and physically realistic haptic experiences.

In our study, we use cubes as our virtual objects' shape since they are easy to grasp. During our experiments, there may be multiple virtual cubes in the scene with different masses. For setting the mass range in our experiments, we must consider the physics engine that we use. The Bullet physics engine recommends keeping the mass of objects around 1 kg and avoiding very large or small values [7]. Therefore, during our preliminary experiments, we set the virtual coupling coefficients so that users could pick up virtual cubes with masses up to 4 kg; past that point, it becomes too difficult to pick up the virtual cubes. Since we expect users to be able to interact with and pick up any virtual cube in the scene, we chose 2.5 kg as the upper mass limit for the heaviest objects in our user studies and 0.25 kg as the lower mass limit for the lightest objects.
§ 5.2 PARTICIPANTS

Ten participants (5 female, 5 male) took part in this study. All participants were right-handed. Three participants had never used VR headsets before; one participant used them a few times per week, and the rest at most a few times per year. Seven of them had interacted with virtual objects during their VR experiences, and three had used haptic devices in VR games and applications. This study was approved by the University of Calgary Conjoint Faculties Research Ethics Board (REB18-0708). Participants received $20 compensation for taking part in this user study.
§ 5.3 STUDY

We begin the study by spending a few minutes (<8) familiarizing the participants with the VR headset, the Leap Motion hand tracker, and the virtual study environment. After we place the haptic actuator on their dominant hand's fingertip, they practice picking up and moving a virtual cube (1.25 kg) using the virtual co-located hand. We ask participants to always use their index finger in grasping since the haptic actuator is attached to it. They are also encouraged to engage more fingers or tighten their grip to increase the grasping strength, and to move the training object around the scene both slowly and quickly. For consistency, we ask the participants to use only their dominant hand to interact with the virtual elements in the scene once the tasks start. During the virtual tasks, participants wear active noise-canceling headphones playing white noise to block any audible signal from the haptic actuator (Fig. 5).
Figure 6: Two virtual cubes with random weights are placed in front of the participant to compare. The co-located spherical cursor mode is active, and the "Vibration Off" label indicates to the participant that they should not expect any vibration from the voice-coil actuator.
Figure 7: Three virtual cubes with random weights are placed in front of the participant to sort in ascending order from left to right. The "Vibration On" label indicates to the participant that they should expect vibration from the voice-coil actuator when picking up objects.

In the first task, we present participants with six pairs of cubes and ask them to interact with, grasp, and move the objects, and to think aloud about the experience. Furthermore, we ask them to compare the two cubes based on their mass and say whether they feel the cubes have the same mass or whether one is slightly or considerably (or to whatever degree they perceive it) heavier than the other. Participants interact with virtual objects using the three interaction modes in the following order: spherical cursor, virtual hand without the vibration feedback, and virtual hand with the vibration feedback. As an example, Fig. 6 shows this task's setup while the interaction mode is set to the spherical cursor. For each interaction mode, participants compare two pairs of cubes. One pair has the largest mass difference given our mass range (0.25 and 2.5 kg), and the other pair has a smaller mass difference (0.25 and 0.5 kg). The system randomly decides whether the smaller or larger mass-difference pair is presented to the user first and randomly places the two cubes on the table for each set to avoid learning from the previous rounds.
In the next part, we ask participants to sort virtual cubes based on their mass. In sorting, a higher number of objects to sort means the participant spends more time picking up and moving objects around the scene, which results in a fuller user experience in comparing weights. However, a higher number of objects to sort increases the average time to complete the task, limiting the number of sorting rounds users can perform during a study session. Our preliminary experiments concluded that three cubes could offer a reasonable balance between sorting time and user interaction with objects.

We quantized our mass range (0.25 kg to 2.5 kg) into two weight-sets of size three. Having more than one weight-set allows a more in-depth analysis of the interaction modes across our mass range. Weber's law states that the difference in magnitude needed to discriminate between a base stimulus and other stimuli increases proportionally to the intensity of the base stimulus [12]. We can easily differentiate a 0.5 kg mass from a 1 kg mass, but it is harder to distinguish a 10 kg mass from a 10.5 kg mass even though both pairs have the same weight difference. Therefore, we chose our mass values with equal ratios between them using a geometric series. That gives a light weight-set (0.25 kg, 0.44 kg, 0.79 kg) and a heavy weight-set (0.79 kg, 1.4 kg, 2.5 kg).
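
The five masses underlying the two sets can be reproduced by spacing the range geometrically. This is a sketch of the construction described above; rounding to two decimals yields 1.41 where the text reports 1.4:

```python
lo, hi, n = 0.25, 2.5, 5
ratio = (hi / lo) ** (1 / (n - 1))   # common ratio, 10**(1/4) ~= 1.778
masses = [round(lo * ratio**i, 2) for i in range(n)]
light, heavy = masses[:3], masses[2:]  # the sets overlap at the middle mass, 0.79 kg
print(masses)  # [0.25, 0.44, 0.79, 1.41, 2.5]
```

Equal ratios, rather than equal differences, keep each adjacent pair equally discriminable under Weber's law.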

Participants sort random permutations of the light and the heavy weight-sets using the three interaction modes (spherical cursor, virtual hand without vibration feedback, and virtual hand with vibration feedback). Therefore, we have six sorting modes. As an example, Fig. 7 shows this task's setup while the interaction mode is set to the virtual hand with vibration feedback. In all sorting modes, three virtual cubes with similar appearance and size are placed on a virtual surface, and participants have to place them from left to right in ascending order based on the perceived mass. Participants perform six rounds of sorting for each mode. During each round, the sorting modes are ordered randomly to remove the learning effect between modes. Before the sorting task begins, we rotate between the modes to familiarize the participant with the scene. Furthermore, we ask participants to grasp each object at least once before finalizing their decision. We also recommend keeping each sort under a minute; however, this is not a hard limit.
When the sorting task finishes, participants fill out a questionnaire regarding their experience during the two virtual tasks. After participants fill out the questionnaire, we ask them to elaborate on their answers during a semi-structured interview. Our post-session questionnaire is as follows: (each question is repeated for each of the interaction modes)
* While interacting with objects, I could perceive their mass. 1 to 5 (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree)

* I could feel one cube was heavier than the other. 1 to 5 (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree)

* How was your confidence level in sorting objects? 1 to 5 (Not Confident at All, ..., Very Confident)
* How realistic were the interactions with objects? 1 to 5 (Very Unrealistic, Unrealistic, Neutral, Realistic, Very Realistic)

* Would you recommend experiencing the "[interaction mode]" in VR games during interactions with virtual objects? 1 to 5 (Do Not Recommend at All, ..., Neutral, ..., Highly Recommend)
§ 5.4 RESULTS

We show the sorting results in the form of confusion matrices in Fig. 8. Using the nonparametric Kruskal-Wallis test, we analyze the statistical significance of the difference between the placement distributions of light, medium, and heavy objects for each of the sorting modes. For the spherical cursor (control mode), we observe statistically insignificant p-values of 0.463 for the heavy weight-set and 0.800 for the light weight-set, showing that users could not discriminate between weights in this mode. For the virtual hand with no vibration feedback, we see statistically insignificant results for the light weight-set (p-value 0.928). However, for the heavy set, we see a significant effect of the virtual hand on sorting (p-value < 0.001). In the case of sorting using the virtual hand with vibration feedback, we see a significant effect on sorting for both the light (p-value < 0.001) and heavy (p-value < 0.001) weight-sets. Regarding the first hypothesis, we see a significant improvement for the heavy weight-set compared to the control mode (spherical cursor); however, the same cannot be said for the light weight-set. Regarding the second hypothesis, we see a statistically significant improvement for the light weight-set with the vibration feedback compared to only using the virtual hand. However, for the heavy set, we see significant effects from the virtual hand both with and without the vibration feedback. Therefore, to check whether the observed improvements in sorting precision for the light, medium, and heavy objects are significant, we perform a row-by-row comparison between the two confusion matrices using the Wilcoxon rank-sum test. Comparing the number of correct sorts for the heavy weight (54 correct sorts versus 33) gives a statistically significant p-value of < 0.001; for the medium weight (44 correct sorts versus 25), the p-value is < 0.001; and for the light weight (48 correct sorts versus 34), the p-value is < 0.01. This shows that for the heavy weight-set, the improvement from the vibration feedback is statistically significant as well.
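
A Wilcoxon rank-sum comparison of this kind can be sketched as follows. The normal approximation below is a generic stand-in rather than the exact statistics package used in the study, the tie correction is omitted for brevity, and the sample data are illustrative:

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation."""
    combined = sorted((v, idx) for idx, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):                      # assign average ranks to ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1                # 1-based average rank
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                           # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
    return z, p

# Illustrative placements (1 = left slot, 3 = right slot) for the heaviest
# object across two hypothetical modes:
with_vib = [3, 3, 3, 2, 3, 3, 3, 3, 3, 3]
without_vib = [1, 2, 2, 3, 1, 2, 2, 1, 3, 2]
z, p = rank_sum_test(with_vib, without_vib)
print(p < 0.05)  # True
```

The rank-sum test is appropriate here because slot placements are ordinal, not interval-scaled.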
Figure 8: Sorting results of the six different sorting modes in the form of confusion matrices. The top three matrices show the sorting results for the light weight-set (0.25 kg, 0.44 kg, 0.79 kg), and the bottom three show the sorting results for the heavy weight-set (0.79 kg, 1.4 kg, 2.5 kg). From left to right, the matrices represent the three interaction modes (spherical cursor, virtual hand with no vibration, virtual hand with vibration). The matrices' diagonals show the number of times the objects were sorted correctly.
Figure 9: Users compare the sense of mass perception and discrimination between the three interaction modes in the post-session questionnaire. The bars represent the mean answer, and the black lines show the standard deviation.

The results of the questionnaire in Fig. 9 show that participants reported an improvement in mass perception and discrimination when the vibration feedback was enabled compared to only using the virtual hand. P6 (Participant #6) mentioned, "With the hand no vibration, it was harder to tell the difference in mass, but I think you could still, it was realistic enough that it was engaging, but the vibration one I'm not if it's like a mental thing, it just helps a lot more with the differentiating between the different masses and the movements". We also see neutral results for the spherical cursor. Generally, participants mentioned they could not differentiate between the objects using the spherical cursor. P2 mentioned, "It was harder for me to use the cursor to compare the weights, most of the time I thought they were like identical". For the virtual hand without the vibration feedback, participants on average expressed neutral opinions regarding its ability to give them a sense of mass perception and discrimination; however, the results from the sorting task show they performed better than the control. Also, some participants mentioned different encounters that enabled them to differentiate between weights. P5 mentioned, "I'm picking it up, how long would it slide, ok hold it, I shake it around it slides faster ... if I hold it, it slips faster then it's heavier", and P6 said, "(with the virtual hand) if I grab it loose the heavy one just drops as opposed to the light one stays in even if I'm shaking it", and "looking at the movement, if I'm moving my hand it's a bit slower it just feels heavier versus if it's a quick it just feels lighter".
Figure 10: Users compare the sorting confidence, sense of realism, and gaming experience between the three interaction modes in the post-session questionnaire. The bars represent the mean answer, and the black lines show the standard deviation.
Fig. 10 shows that participants expressed having more confidence in sorting when the vibration feedback was enabled. However, without the vibration feedback, they expressed neutral confidence. Furthermore, participants generally stated that the vibration feedback added to the interaction's realism and that the virtual hand's interactions were realistic. P4 said "For the vibration also, I felt like it helped me, felt like it's more real, I'm touching things, not just I'm seeing that I'm touching things". Furthermore, participants expressed interest in experiencing the vibration effect in virtual reality games.

Finally, we asked the participants how interacting with virtual objects felt when they vibrated. P2 said: "if felt like it has resistancy to move, based on that I felt like it's heavier, might be heavier", and P7 mentioned, "When I picked a cube with vibration, I could feel that something is trying to, I don't know, annoy me bother me, might be something like the gravity taking it back to the ground, it feels that I should put more energy to pick it up", and further elaborated, "the one that without vibration I just pick it with two fingers I played with that, but the one with vibration when I tried to pick it with two fingers, suddenly I tried to keep it with all my fingers because I thought that it might slides and drops."
Overall our findings indicate that the presence of the force-controlled virtual hand both with and without the vibration effect gives a sense of weight discrimination and perception. However, the virtual hand without vibration feedback is only effective for heavier objects closer to the hand strength threshold. Furthermore, the virtual hand with the vibration effect improves the weight perception and discrimination sense for both lighter and heavier objects without having a negative effect on the realism of the experience. Therefore, our results validate our hypotheses.
§ 6 CONCLUSION
Rendering the mass of objects in virtual reality without limiting the hand movements is a challenging task. In this paper, we propose using a force-controlled hand in VR to give a sense of mass perception and discrimination by enabling physically realistic hand-object interactions. We also propose a complementary vibration effect proportional to the object's mass and acceleration to improve the sense of mass perception and discrimination. We conducted a user study and performed qualitative and quantitative analysis, which indicates that our hypotheses are valid. The physically-based virtual hand can give a sense of mass perception and discrimination for heavier objects closer to the upper limit of its grasping strength. Furthermore, the vibration feedback greatly enhances the mass perception and discrimination for a wider mass range in our study while improving the interaction's realism.

§ 7 FUTURE WORK
One potential future direction for this research is to analyze the mass discrimination ability for the virtual hand and the vibration effect for a broader mass range and different mass ratios between the objects. Moreover, we are interested in analyzing the vibration effect's behavioral effects on the user's movements during virtual interactions.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/O63xxj_lZqH/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Generating Rough Stereoscopic 3D Line Drawings from 3D Images
Category: Research
## Abstract
We present a method to produce stylized drawings from S3D images. Taking advantage of the information provided by the disparity map, we extract object contours and determine their visibility. The discovered contours are stylized and warped to produce an S3D line drawing. Since the produced line drawing can be ambiguous in shape, we add stylized shading to provide monocular depth cues. We investigate using both consistently rendered shading, and inconsistently rendered shading to determine the importance of lines and shading to depth perception.

Index Terms: Computing methodologies - Non-photorealistic rendering
## 1 Introduction

Stereoscopic 3D is used in a variety of art forms, such as photography and film, to create the effect of depth. The perceived depth can provide a greater sense of reality, create an immersive or engaging experience, and serve as an artistic medium to induce emotional responses in the viewer. S3D creates a sense of depth by presenting a slightly different image to each eye. The left and right images exhibit horizontal separation between objects, which is interpreted as depth by the brain. Producing S3D content is challenging, and emphasis must be placed on consistency, ensuring that the object(s)/scene visible in both views matches exactly to produce a comfortable viewing experience and a correct depiction of depth [12, 18].

Line, or pen-and-ink, drawings are one of the oldest S3D art forms, dating back to Sir Charles Wheatstone's original drawings in the 1830s [26]. This format persists today in comics and diagrams. Although S3D line drawings can be produced from 3D meshes using automated algorithms, producing S3D line drawings from S3D photos has not received significant attention.

One possibility for producing line drawings from S3D photos is to use an S3D stylization algorithm such as the layer-based method presented by Northam et al. [17, 18]. This approach divides the S3D image and disparity¹ maps into layers by disparity, such that each layer only contains pixels from a single disparity, then applies stylization to these layers. While their approach can be used with a variety of artistic styles and filters, contours and line drawing were not considered.

If we try to produce a line drawing using this method, the results are displeasing: object contours are conflated with other edges, such as lighting and texture boundaries, so the final result contains lines that do not convey shape. We could instead use the additional information provided by a disparity map to isolate object contours. However, the layer-based approach cannot be used to extract these contours from the disparity map, because each layer contains pixels with the same disparity value. Figure 1 illustrates this issue. Note how the edges found for the disparity layer do not correspond to the object contours, which can be found in the disparity map; instead, they correspond to the edges of the region with the given disparity.
Figure 1: Line drawing from disparity layers. Note how each disparity layer only contains pixels of one value. Hence, extracting lines from such layers produces the edges of the layer pixels instead of the desired object contours.

In this paper, we present a method to produce stylized stereoscopic 3D line drawings from 3D photos using stereoscopic warping instead of layers. While constructing this method, we observed that some drawings of simple objects were ambiguous and did not uniquely identify the 3D shape. For example, the contour of a sphere is a circle and could represent either a flat circle or a sphere. Shading, a monocular depth cue, can help resolve these ambiguities. Hence, we also investigate the effects of adding stylized shading to our produced drawings.
The line extraction and rendering algorithm is presented in Section 3, and our shading method is discussed in Section 4. Our results are presented in Section 5. Finally, we present an evaluation of our results to verify their 3D comfort and depth quality in Section 6.
## 2 Background

A line drawing is a simplistic representation of an object or scene. It is composed entirely of lines, which may be stylized, and contains no shading or colour. Despite their simplicity, such drawings can accurately convey the subject they depict. Hertzmann indicates line drawings "work" because they "approximate realistic renderings" [9].
Where do artists draw lines? A line drawing study by Cole et al. examined where artists draw lines for a variety of objects [5]. They observed that contours, creases and folds - which describe the shape of the object - were drawn, but lines depicting shadows or highlights were not. This was also observed by Hertzmann et al. while rendering line drawings for smooth meshes [10].

In a stereoscopic 3D line drawing, contours, creases and folds give the primary sense of an object's 3D shape and depth. Without other S3D cues from which to infer depth, it is important that these lines are as consistent as possible between the left and right views. Inconsistencies can cause viewing discomfort from binocular rivalry² and double vision, detracting from the viewer's perception of object depth. Previous studies have shown that these S3D lines alone sufficiently convey object shape/depth for many images [1, 12, 14].
---

² Binocular rivalry is a phenomenon where the brain rapidly switches between the left and right eyes because the images differ.

¹ Disparity is inversely proportional to depth and conveys the horizontal separation between a point in the left and right views.
---
A number of algorithms have been proposed to produce stereoscopic 3D line drawings from meshes. Most notably, Kim et al. presented a method that produces 3D line drawings by generating contours for left and right eyes separately [12]. Contours are then pruned for view consistency by checking the visibility of points along the curve formed by creating an epipolar plane between a pair of points on the left and right contours. Kim et al. also describe a method for consistent stylization of lines by linking control points between matching contours and applying stylization to the linked pairs. However, this method can only be used with full 3D models.
Another paper by Kim et al. describes a method for producing stylized stereoscopic 3D line drawings from S3D photographs [13]. Their paper applies Canny edge detection [4] to the edge tangent field [11] of the left stereo image and warps the discovered edges to the right image using the disparity map. However, the rendered lines are from all edges that can be found in the actual image, not only object contours but also texture or lighting contours. By contrast, a hand-drawn stereoscopic 3D line drawing would be likely to include only object contours and creases. As their method is based on edge detection purely in the colour domain, they cannot differentiate between geometric discontinuities and colour discontinuities. Disparity maps, which indicate the horizontal separation between pixels of the left and right image, isolate geometric information from colour information. Therefore, applying edge detection to the disparity map could uniquely produce object contours. Our method will harness the information in the disparity map to construct S3D line drawings.
### 2.1 Perception and Monoscopic Depth Cues

Our perception of depth arises from both monoscopic (2D) and 3D depth cues. Monoscopic depth cues include shading, relative size, occlusion, and motion [1, 3]. Shading an S3D line drawing can improve depth perception, but the amount of improvement is limited for images with rich detail [14]. However, for S3D line drawings of simple objects with few internal lines, shading provides the necessary information about object shape. For example, imagine a circular contour: is this a line drawing of a circle or a sphere?

Stereo-consistent shading is complicated by the fact that shading can be view-dependent. Apart from purely Lambertian surfaces, shading features such as specular highlights may be visible in only one eye due to the position of the eyes with respect to the light source and object [2]. This phenomenon can also be observed in S3D photographs, as demonstrated in Figure 2, where both specular highlights and reflections differ between the left and right views (circled in cyan for visibility). While these specular highlights are natural to the human visual system, they can be problematic for computer vision algorithms commonly used with stereo [2]. Additionally, it is believed in film that specular highlights can cause binocular rivalry if they are rendered inconsistently between views; therefore, they are often removed or redrawn to be consistent [15]. We provide users the option of adding shading to our S3D line drawings to improve the perception of shape. However, our shading will remain true-to-nature; that is, we will not remove or adjust the natural lighting of the S3D images to ensure consistency.
Figure 2: Inconsistent specular highlights and reflections.
### 2.2 Stylization
Stylization can be applied to both the S3D lines and shaded regions. Surprisingly, it has been shown that simple line styles such as overdrawn lines, varying thickness, and jitter do not negatively impact depth perception and comfort in S3D images if rendered consistently [14].
The naive approach to create consistent stylized lines is to render them in the left view, then use the disparity map and warp (horizontally shift) them to the right. However, the rendered, stylized object contours may have pixels that bleed over onto other surfaces. Therefore, warping individual pixels would not produce smooth lines. Alternatively, the control points of curves or the endpoints of line segments could be warped. But if any of these points from the stylized lines lie on other objects, the lines rendered in the right view may be discontinuous or distorted. Instead, we will match the original control points to their underlying disparities prior to stylization or rendering, similar to the approach used by Kim et al. [12]. Although there are many ways to stylize lines, we focus on overdrawn and jittered styles.
In addition to stylized contours, we will also stylize shaded regions. Stereoscopic 3D image stylization has been well studied, although existing methods focus on stylizing the whole image consistently instead of a small region. Stavrakis et al. applied stylization to the left image and used the disparity map to warp it to the right, then did the reverse for occluded regions $\left\lbrack {{23},{24}}\right\rbrack$ . As discussed previously, Northam et al. applied stylization using disparity layers $\left\lbrack {{17},{18}}\right\rbrack$ . However, since lighting and specular highlights are view-dependent, these methods would enforce consistency where none exists. Hence, we will apply stylization algorithms to shaded regions in a view-dependent manner to preserve these inconsistencies. By preserving inconsistencies, we contradict Richardt et al. [20] and Northam et al. $\left\lbrack {{17},{18}}\right\rbrack$ , which focus on establishing or maintaining consistency at all cost. However, we believe that because shading is a monocular depth cue, binocular rivalry and randomness will have minimal effects on viewer perception.
## 3 Producing a Stereoscopic 3D Line Drawing
Our method is divided into several stages and requires left and right images and disparity maps as input. First, the object-depiction contours are extracted. Next, the contours are split into curves by view visibility: left-only, right-only, and shared. Curves are stylized, then warped from left-to-right. Finally, shading may be added to improve depth perception.
### 3.1 Extracting Contours
The shape of an object is described by its silhouette (contours) and interior creases and folds $\left\lbrack {8,{10}}\right\rbrack$ . Both contours and interior lines are needed to give a clear sense of shape [5]. While these lines may be found in the S3D image, they are more easily isolated using information in the disparity map.
There are two possible approaches to finding contours (edges). The first is to apply edge detection methods to the raw or preprocessed disparity map. The second is to perform a 3D reconstruction of the scene using the provided disparity maps and apply a silhouette-finding algorithm, such as that of Hertzmann and Zorin [10], to identify the edges.

We use the first method, applying edge detection to the raw or preprocessed disparity map, rather than discovering silhouettes from a 3D reconstruction, for three reasons. First, while many of the edges can be found from the reconstruction using Hertzmann and Zorin's approach [10], more subtle edges occurring where two objects intersect at the same depth are not always identified, as shown in Figure 3. Second, after identifying the silhouette edges from the 3D reconstruction, the visibility of those edges must then be computed for each eye, as in Kim et al. [12]. However, visibility is already given in the disparity map, so recomputing this information is wasteful. Finally, we do not assume that the baseline and focal length of the image are known or can be estimated well enough to produce a believable 3D reconstruction.

Figure 3: Hertzmann and Zorin's method vs our method.
#### 3.1.1 Finding Edges in a Disparity Map
There are two types of edges in a disparity map. The first type occurs at a depth discontinuity, where one object occludes another, creating a jump in neighbouring disparity values. The second type occurs where two surfaces meet at the same depth, or as creases and folds on an object's surface. The first type of edge can be found using a Laplacian or Canny edge detector, as shown in Figure 4.
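As a concrete illustration of the first edge type, here is a minimal sketch (Python with NumPy, standing in for OpenCV's Laplacian/Canny detectors; the threshold value is illustrative) that flags depth discontinuities in a toy disparity map:

```python
import numpy as np

def laplacian_edges(disparity, thresh=5.0):
    """Flag type-one edges: depth discontinuities where the 4-neighbour
    Laplacian of the disparity map exceeds a threshold."""
    d = disparity.astype(float)
    lap = np.zeros_like(d)
    lap[1:-1, 1:-1] = (d[:-2, 1:-1] + d[2:, 1:-1] +
                       d[1:-1, :-2] + d[1:-1, 2:] - 4 * d[1:-1, 1:-1])
    return np.abs(lap) > thresh

# Toy disparity map: a near square (disparity 30) occluding a far plane (10).
disp = np.full((8, 8), 10, dtype=np.uint8)
disp[2:6, 2:6] = 30
edges = laplacian_edges(disp)  # True only around the square's boundary
```

Interior pixels of both the square and the background have a zero Laplacian, so only the occlusion boundary is marked; type two edges (surfaces meeting at the same depth) produce no such jump, which is why they need the separate treatment described below.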
Figure 4: The strong edges of a disparity map may be found by a Canny or Laplacian edge detector, but they are not ideal.

The second type of edge is more elusive. Adjusting the parameters of a Laplacian or Canny detector can find these edges, but not uniquely, as shown in Figure 5.
Figure 5: The second type of edge is hard to detect uniquely using Canny or Laplace.
Our method successfully and uniquely identifies these edges, as shown in Figure 6.

Figure 6: The second type of edge is easy to detect uniquely using our method.
To make these low-contrast edges more visible, we can convert disparity to depth and apply a bilateral filter to smooth the plateaus in the result, as suggested by [16]. However, while this improves the visibility of type one edges, it does not improve visibility for all type two edges, as shown in Figure 7.
(a) A stereoscopic 3D scene with creases in the background. (b) Background creases are not found using [16]'s method. (c) Background creases are found using our method.

Figure 7: Our method improves visibility for type two edges.
We note that finding type two edges, or finding edges in a low-contrast or noisy region, is known to be a difficult problem [19]. Savant indicates that Laplacian, Canny, and other detectors are not able to uniquely identify edges in low-contrast areas [21]. And while second-order derivative methods can identify some edges that are zero-crossings, not all type two edges are zero-crossings.
Hence, we propose the following method to identify edges in disparity maps. First, we use a Canny edge detector, as suggested by Gelautz and Markovic, to identify type one edges [6].
Next, we improve the visibility of type two edges. Hertzmann suggested rendering a scene where different coloured directional lights are cast along positive and negative axes onto a 3D model to produce a brightly-coloured normal map from which edges can be found [10]. To apply this technique to our disparity maps, each pixel needs a surface normal. We assign surface normals by applying a simple surface triangulation to each map: each pixel position (x, y) is a vertex with depth $z$ equal to the disparity at that position. Triangular "faces" are formed by a point $p$ in the disparity map and two of its immediate neighbours, $q$ and $r$. A normal can then be calculated for each of these faces, and a vertex normal is computed as the average of the eight adjacent triangular face normals.
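The normal assignment above can be approximated compactly. The sketch below is a simplification of the eight-face averaging: it takes the cross product of the two axis-aligned tangent vectors at each pixel, then "lights" the normals with a single directional light.

```python
import numpy as np

def disparity_normals(disp):
    """Per-pixel surface normals, treating pixel (x, y) as a vertex at
    depth z = disparity(x, y).  Simplification: cross product of the
    tangents (1, 0, dz/dx) and (0, 1, dz/dy) instead of averaging the
    eight adjacent triangular face normals."""
    d = disp.astype(float)
    dzdy, dzdx = np.gradient(d)            # gradients along rows, then columns
    n = np.dstack([-dzdx, -dzdy, np.ones_like(d)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def directional_light(normals, light=(0.0, 0.0, 1.0)):
    """Brightness of each pixel under one directional light: max(0, n . l)."""
    l = np.asarray(light) / np.linalg.norm(light)
    return np.clip(normals @ l, 0.0, 1.0)

flat = np.full((5, 5), 7.0)                 # fronto-parallel plane
ramp = np.tile(np.arange(5.0), (5, 1))      # disparity increasing left to right
shade_flat = directional_light(disparity_normals(flat))
shade_ramp = directional_light(disparity_normals(ramp))
```

A head-on light shades the flat plane at full brightness while the tilted ramp comes out darker; this brightness contrast between differently oriented surfaces is what makes creases visible in the lit map.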
The normals are then dotted with a directional light vector to enhance visualization, as illustrated in Figure 8. However, when directional lights are cast onto the lit normal map, we do not observe a smoothly shaded cat as expected. Instead, the normal map appears stepped, with rings of front-facing planes depicted in dark red. This stepped appearance is a consequence of the limited dynamic range of most disparity maps: a perfectly smooth surface cannot be depicted in this discrete space, so many pixels are assigned the same integer disparity instead of their actual values. These artificial edges make discovering actual edges, such as the interface between the wall and floor, difficult.

In order to remove the stepped or plateaued appearance, floating point disparities are needed, along with a smoothing operator to reduce the discretized appearance. Ideally, converting the disparity map from 8-bit to floating point and applying a simple out-of-the-box smoothing operation would smooth these plateaus out. However, directly applying a bilateral filter will preserve or enhance these edges and a Gaussian or box filter would soften all edges indiscriminately, effectively blending objects together. Instead, to smooth these plateaus and generate a smooth lit disparity with preserved edges, we:
1. Compute the strong edges using a Canny filter, dilating the result to produce an edge mask where edges have a diameter of 10 pixels.
2. Calculate the surface normals via triangulation as previously discussed. Use larger triangles in regions away from edges and smaller triangles along edges. This hybrid approach gives us clear, prominent lines corresponding to key edges, and additional smoothing elsewhere. Do this for both the discrete (un-smoothed) and floating point (smoothed) disparity maps.
3. Cast directional lights to colourize and produce the smoothed and un-smoothed maps. This yields Figure 8(a) and Figure 8(b), respectively.
4. Compute the complexity of the discrete (un-smoothed) disparity map, $\alpha$, as the number of observed disparities. Apply a bilateral filter to the smoothed map $\frac{\alpha }{10}$ times${}^{3}$.

5. To correct the blown-out edges caused by the previous step, extract the strong mask edges from the unsmoothed map (that is, the pixels of the unsmoothed map coinciding with the pixels of the mask generated in step 1) and superimpose them on the smoothed map. Apply a bilateral filter to the smoothed map $\frac{\alpha }{10}$ times to help blend the edges in.
6. Overlay the original, un-dilated version of the mask on top of the smoothed map, as seen in Figure 8(c), to aid in the identification of strong edges.
We can now apply the Canny edge operator to the smoothed and coloured map in an automated fashion.
In general, we seek to compensate for less detailed masks with more permissive Canny thresholds that yield a more detailed final edge set. Conversely, more detailed masks imply that a stricter threshold should be used, to prevent an overly noisy final edge set. Let the number of mask pixels be $\beta$ . Let $\beta$ divided by the total number of pixels be $x$ . We recognize that the level of detail in the mask, $x$ , is inversely proportional to the number of pixels desired in the final edge set.
We also want to take into account the aforementioned disparity map complexity, $\alpha$ . This is another inversely proportional relationship, between disparity map complexity and the target number of pixels in the final edge set. The more complex the disparity map is, the greater the number of easily identifiable edges, the less permissive the Canny threshold is required to be. Conversely, the less detailed the disparity map, the more difficulty Canny will have extracting edges from it, and the more permissive we should make the Canny threshold.
We will use these two complexity measures - and the direct, inversely proportional relationships that we observed - to select Canny thresholds automatically. In so doing, we are letting the image - not the user - do the talking.
Let $\phi = \frac{1}{\alpha x}$ be our initial target for the number of pixels in the final edge set.

We want the final edge set to contain at least as many pixels as the mask; only then can more edges be found to supplement the mask edges. Therefore, we modify our target to $\left( {1 + \sigma }\right) \beta$ where $\sigma = \min \left( {3,\phi }\right)$ . Notice that we set a cap of 3 on $\phi$ . This is because, experimentally, we have found that larger values introduce a lot of noise.
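These definitions can be written down directly. The sketch below follows the symbols in the text ($\alpha$, $\beta$, $x$, $\phi$, $\sigma$); the example numbers are illustrative:

```python
def edge_target(alpha, beta, total_pixels):
    """Target number of pixels in the final edge set:
    x = beta / total_pixels, phi = 1 / (alpha * x),
    sigma = min(3, phi), target = (1 + sigma) * beta."""
    x = beta / total_pixels
    phi = 1.0 / (alpha * x)
    sigma = min(3.0, phi)
    return (1.0 + sigma) * beta

# alpha = 50 distinct disparities, 2000 mask pixels in a 100k-pixel image:
uncapped = edge_target(50, 2000, 100000)   # phi = 1, target = 2 * beta
# A very simple disparity map (alpha = 2) drives phi to 50, but the cap
# of 3 on sigma limits the target to 4 * beta:
capped = edge_target(2, 1000, 100000)
```

Both complexity measures enter the target inversely, matching the relationships described above: detailed masks and complex disparity maps shrink the target, simple ones grow it up to the experimentally chosen cap.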
What we have established is a target number of final edge pixels, not a Canny threshold parameter. But each Canny threshold produces a certain number of edge pixels: the higher the threshold, the fewer pixels in the final result; the lower the threshold, the more pixels. The minimum threshold parameter is $\min = 0$ and the maximum is $\max = {255}$. Using these boundaries, we can conduct a binary search for the ideal parameter. We start with a threshold midway between min and max, then count the number of pixels in the resulting edge set. If the result is below target, we need to be more permissive, so we lower our max and try a lower threshold in the next iteration. If the result exceeds the target, we need to be less permissive, so we increase our min. We stop once $\max = \min$, or the edge pixel count equals the target.
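The search can be sketched as follows, with `count_edges(t)` standing in for running the Canny detector at threshold `t` and counting edge pixels (assumed non-increasing in `t`):

```python
def find_threshold(count_edges, target, lo=0, hi=255):
    """Binary search for the Canny threshold whose edge-pixel count
    hits (or brackets) the target."""
    while lo < hi:
        mid = (lo + hi) // 2
        n = count_edges(mid)
        if n == target:
            return mid
        if n < target:
            hi = mid        # too few edges: be more permissive (lower threshold)
        else:
            lo = mid + 1    # too many edges: be less permissive (raise threshold)
    return lo

# Stand-in detector: edge count falls linearly as the threshold rises.
best = find_threshold(lambda t: max(0, 5000 - 20 * t), target=3000)
```

Because the count is monotone in the threshold, the search needs at most eight evaluations of the detector over the 0-255 range, which keeps the automated parameter selection cheap.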
Once edges are found, we use the findContours function in OpenCV to extract curve points from the raster image. The extracted curve points are processed to remove curve duplication. The curves are also split whenever adjacent point disparities differ by more than a small threshold, under the assumption that the adjacent points belong to separate surfaces. We note that some of the original detail may be lost in this process.
---
${}^{3}\frac{\alpha }{10}$ and other parameter values were selected by applying the method to our test set of 12 images and choosing the values that produced the best results overall across all images.

---
Figure 8: Directionally lit disparity maps.
#### 3.1.2 Splitting Lines for Consistency
Edges extracted from the disparity maps are rendered as smooth splines for the corresponding view. However, view-dependent rendering introduces inconsistencies, which arise when edges are found in one disparity map but not the other due to the slight variation in viewpoint.

To prevent these inconsistencies, which cause discomfort, we arbitrarily select the left view's edges as the "true" edges. Then, any line in the left view that is visible in the right is warped to the right view using the disparity map. Lines visible only in the left are rendered only in the left; likewise, lines visible only in the right are rendered only in the right. View visibility is determined by the disparity map: a pixel $p\left( {x, y}\right)$ is visible in both left and right views if $L\left( {x, y}\right) = d$ and the corresponding pixel $R\left( {x - d, y}\right) = d$, where $L$ and $R$ are the left and right disparity maps respectively, and $d$ is the disparity of pixel $p$.
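The visibility test can be sketched directly from this definition (row/column indexing; `L` and `R` hold one disparity per pixel; the occlusion in the example is contrived):

```python
import numpy as np

def visibility(L, R, x, y):
    """Classify pixel (x, y) of the left view: visible in both views when
    L[y, x] = d and the corresponding right pixel R[y, x - d] = d."""
    d = L[y, x]
    if 0 <= x - d < R.shape[1] and R[y, x - d] == d:
        return "both"
    return "left-only"

L = np.full((4, 8), 2, dtype=int)   # uniform disparity of 2
R = np.full((4, 8), 2, dtype=int)
R[1, 3] = 0                         # this right-view pixel disagrees: occlusion
```

With this setup, `visibility(L, R, x=5, y=0)` reports a pixel visible in both views, while the disparity mismatch in row 1 makes `visibility(L, R, x=5, y=1)` left-only, so that pixel's line would be rendered in the left view alone.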
Contours are warped by their control points and then rendered in a view-dependent manner. While this method of rendering potentially introduces inconsistencies, warping the rendered lines would introduce noise as the lines may lie on surfaces with different disparities.
Finally, we note that long edges extracted from the disparity map may span multiple objects and both occluded and visible regions. Warping the entire stroke can result in partially occluded edges being visible in the wrong view. To prevent this, we split strokes whenever the visibilities of adjacent control points change, i.e. from visible to occluded. We also split strokes when the disparities of adjacent control points differ by more than some threshold ${\tau }_{d}{}^{4}$ . Strokes that cannot be warped, because they are only visible in one view, are rendered in only one view.
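The splitting rule can be sketched as follows; control points carry their disparity, and the `visible` flags come from the per-view visibility check described earlier (the coordinates are illustrative):

```python
def split_stroke(points, visible, tau_d=5):
    """Split a stroke (control points as (x, y, disparity) tuples) whenever
    adjacent points change visibility or their disparities differ by more
    than tau_d.  Returns lists of point indices, one per sub-stroke."""
    pieces, cur = [], [0]
    for i in range(1, len(points)):
        jump = abs(points[i][2] - points[i - 1][2]) > tau_d
        if visible[i] != visible[i - 1] or jump:
            pieces.append(cur)
            cur = []
        cur.append(i)
    pieces.append(cur)
    return pieces

pts = [(0, 0, 10), (1, 0, 11), (2, 0, 12), (3, 0, 40), (4, 0, 41)]
by_depth = split_stroke(pts, [True] * 5)                  # disparity jump 12 -> 40
by_vis = split_stroke([(i, 0, 10) for i in range(5)],
                      [True, True, False, False, False])  # visibility flips
```

Each resulting sub-stroke lies on one surface with one visibility state, so it can be warped (or rendered in a single view) without dragging occluded pieces along.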
#### 3.1.3 Consistent Control Point Stylization
Monoscopic and S3D line drawings are often stylized and represent objects using rough, overdrawn and jittery lines. To increase visual interest, we provide the option of stylizing S3D lines with an overdrawn or jittered style.
Kim et al. discussed a method for stylizing stereoscopic 3D lines [12]. Their method performs stylization after lines have been discovered for both left and right images. Specifically, it links line segments in the left view to the matching segments in the right and consistently renders texture to these linked and parameterized curves.
Our approach is similar, but stylizes lines prior to warping by replicating and transforming control points. To produce overdrawn lines, curves are duplicated a fixed number of times. The duplicates can then be scaled (about their centroids or the centre of the image) by a small random factor, so that the overdrawn lines are visibly distinct. A jittered or rough appearance is created by adding small random translation vectors to each control point of a line. Note that, prior to altering the control points of a line, it is important to store their original, pre-transformed disparities so that the points can be correctly warped after stylization.
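A minimal sketch of both styles follows; the parameter values are illustrative, and the fixed seed only makes the randomness repeatable:

```python
import random

def overdraw(points, copies=3, scale_jitter=0.05, seed=0):
    """Overdrawn style: duplicate the control polygon `copies` times and
    scale each copy about its centroid by a small random factor."""
    rng = random.Random(seed)
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    out = []
    for _ in range(copies):
        s = 1.0 + rng.uniform(-scale_jitter, scale_jitter)
        out.append([(cx + s * (x - cx), cy + s * (y - cy)) for x, y in points])
    return out

def jitter(points, amount=1.0, seed=0):
    """Jittered style: add a small random translation to each control point."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-amount, amount),
             y + rng.uniform(-amount, amount)) for x, y in points]

base = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
copies = overdraw(base)   # three near-identical, slightly scaled outlines
rough = jitter(base)      # each point displaced by at most `amount`
```

Because the transforms act on control points rather than rendered pixels, the stored per-point disparities can still be used to warp every stylized copy to the other view consistently.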
## 4 Shading
S3D line drawings depict the shapes of objects. However, these lines do not convey information such as surface texture or roundness - but shading and highlights do. Object shading and shadows are monocular depth cues [7]. Shading, particularly involving specular highlights, is view dependent [25]. Adding monocular depth cues to S3D line drawings can improve understanding of surface shape, and enhance depth perception. Also, because lighting is view dependent, the left and right views will be stylized independently to preserve their separate lighting characteristics.
To produce the stylized shading, the left and right input images are converted to grayscale and stylized using a variety of algorithms. While any stylization algorithm or filter could be used, we chose those that do not explicitly render contours, as our method will produce those separately. Finally, the stylized shading and S3D lines are combined to produce the final image.
## 5 Results
We tested our method on several S3D images, some of which are shown in Figure 9. Seventy-five percent of the images used as input to our method have high-quality or near-perfect disparity maps.
Figure 10 illustrates some of the S3D line drawings produced by our method. Note that, since lines are generated from the disparity map, contours and interior lines are the only lines visible.

Stylizing the S3D lines yielded the images depicted in Figure 11. Note that even with jittered and overdrawn lines, the left and right views remain consistent.
We used three types of stylization for shading: toon-like shading produced by quantizing the RGB image, impressionist, and halftoning with large particles. None of these stylizations explicitly render contours, so there is no overlap between the shading and the line drawing. We combined the stylized S3D line drawings with stylized shading to produce our final images, some of which are shown in Figure 12. To ensure the visibility of the S3D lines, we reduced the darkness of shaded regions by 50%.
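The compositing step might be sketched as follows; note that the "darker of the two layers" rule is our illustrative assumption for how a line layer and a shading layer could be combined, with only the 50% darkness reduction taken from the text:

```python
import numpy as np

def compose(lines, shading):
    """Combine a line layer with stylized shading (uint8 greyscale,
    0 = black).  Halve the darkness of shaded regions (push grey values
    halfway toward white), then keep the darker of the two layers so
    lines stay visible."""
    light_shading = 255 - (255 - shading.astype(int)) // 2
    return np.minimum(lines, light_shading).astype(np.uint8)

lines = np.full((4, 4), 255, dtype=np.uint8)
lines[1, :] = 0                           # a black contour line
shading = np.full((4, 4), 55, dtype=np.uint8)
out = compose(lines, shading)             # line survives at full black
```

The halving maps a mid-dark shade of 55 up to 155, so even heavy shading cannot swallow the black contour lines.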
---
${}^{4}$ We used ${\tau }_{d} = 5$ , as we observed that curve points are close together, with no large jumps in depth.
---
Figure 9: Sample inputs to S3D line drawing algorithm.
Figure 10: S3D line drawings.
Figure 11: S3D line drawings (red/cyan anaglyph).
Figure 12: Final stylized S3D line drawings (red/cyan anaglyph).
We also applied our method to S3D photos with computed disparity maps. These maps contain noise, disparity mismatches, and obscured object contours, which pose a challenge for many S3D algorithms. Despite these errors, our method is still able to produce line drawings, as demonstrated in Figure 13, which shows results for disparity maps of varying quality. Note that some lines appear to be missing, typically because they are not visible in the disparity map.

## 6 Evaluation
To evaluate our results for quality of depth reproduction and viewing comfort, we conducted a short study. For health and safety reasons due to COVID-19, the study was conducted remotely. We asked participants to view a set of 24 images from our dataset at home using anaglyph glasses, a 3D TV, or a VR headset, or by free-viewing. For each image, participants rated viewing comfort and apparent depth on a Likert scale from 1 to 5, and also rated how aesthetically pleasing they found the image. Images were randomized, and participants were not told in advance what they would be viewing.

Overall, participants found that our consistent line drawings are more comfortable, reproduce a greater sense of depth, and are more aesthetically appealing than the raw, inconsistently-rendered line drawings. Table 1 indicates the average difference between each of our results and the inconsistently-rendered line drawings, expressed as a percentage increase from the raw, unstylized lines to our method. For example, the first cell shows that the average score was 26% better for our consistently-rendered, unstylized lines than for raw, unstylized lines rendered inconsistently. Note that adding stylization to our lines improved comfort, depth reproduction, and the overall aesthetic. We expected participants to find the stylized lines more aesthetically pleasing, but we did not anticipate that they would find them more comfortable or conducive to a greater sense of depth. However, the stylized lines are more prominent than the unstylized lines and may provide participants with more visual information to fuse, resulting in greater viewing comfort and depth. Also note that adding shading, a monocular depth cue, significantly improved all metrics, regardless of how that shading was rendered. Even the halftone/newsprint shader applied inconsistently, which renders large circles into the scene, was more comfortable, produced more depth, and was more aesthetically pleasing than plain lines. This is interesting, because these images were 55% less consistent than our plain line drawings. We computed consistency by comparing the colour values of pixels that should match according to the disparity map.
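That consistency measure can be sketched as follows (greyscale images; we count exact colour matches, since the matching tolerance is not specified in the text):

```python
import numpy as np

def consistency(left, right, L):
    """Fraction of left-view pixels whose colour equals that of the
    corresponding right-view pixel under the left disparity map L."""
    h, w = L.shape
    matched = total = 0
    for y in range(h):
        for x in range(w):
            xr = x - L[y, x]               # corresponding right-view column
            if 0 <= xr < w:
                total += 1
                matched += int(left[y, x] == right[y, xr])
    return matched / total if total else 0.0

left = np.arange(16).reshape(4, 4)
right = left.copy()
L0 = np.zeros((4, 4), dtype=int)           # zero disparity everywhere
score_perfect = consistency(left, right, L0)
right2 = right.copy()
right2[0, 0] += 1                          # one inconsistent pixel
score_one_off = consistency(left, right2, L0)
```

A score of 1.0 means every matchable pixel agrees between the views; inconsistent shading lowers the score in proportion to the disagreeing area.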
Figure 13: Line drawings from photos with disparity maps of varying quality.
<table><tr><td/><td>comfort</td><td>depth</td><td>appearance</td></tr><tr><td>our unstylized lines</td><td>26%</td><td>14%</td><td>20%</td></tr><tr><td>our stylized lines</td><td>32%</td><td>16%</td><td>26%</td></tr><tr><td>our unstylized lines with consistent shading</td><td>46%</td><td>46%</td><td>71%</td></tr><tr><td>our unstylized lines with inconsistent shading</td><td>27%</td><td>42%</td><td>41%</td></tr><tr><td>our stylized lines with consistent shading</td><td>48%</td><td>45%</td><td>59%</td></tr><tr><td>our stylized lines with inconsistent shading</td><td>52%</td><td>41%</td><td>66%</td></tr></table>
Table 1: The difference in comfort, depth reproduction, and aesthetic appearance between raw, unstylized and inconsistently rendered lines and our method. Note that the averaged participant scores for view-dependent, unstylized lines are presented in Table 2.
<table><tr><td/><td>comfort</td><td>depth</td><td>appearance</td></tr><tr><td>view dependent, unstylized lines</td><td>2.5</td><td>2.6</td><td>2.1</td></tr><tr><td>our stylized lines with consistent shading</td><td>3.7</td><td>3.8</td><td>3.4</td></tr></table>
Table 2: Averaged participant scores for the view-dependent, unstylized, and inconsistently rendered lines that our various methods were compared to. Averaged participant scores for our stylized lines with consistent shading are included for reference.

Ideally, our participants would be a random sample of individuals with varying backgrounds and exposure to S3D. However, as we were required to run this study remotely, we relied on finding individuals that owned their own S3D viewing equipment or were able to free-view. Hence, our participant pool was drawn from individuals that could be considered S3D enthusiasts. Consequently, participants were critical, and quick to identify and articulate flaws in images, such as window violations and ghosting. However, we appreciated their honest and experienced assessments as they provided a clearer and more concise evaluation of our results.
We also note that the study conditions were not ideal. First, we relied on participants to self-report their ability to perceive depth. Second, due to the rarity and variety of S3D viewing equipment available, it is unlikely that any two participants used the exact same viewing technology. We categorized viewing mechanisms into three groups: anaglyph, 3DTV/3DS/VR, and free-viewing. Of the 16 participants, 50% used anaglyph glasses, which are prone to crosstalk and ghosting that may cause discomfort. A smaller number, 31.2%, used some other 3D viewing apparatus, such as a 3DTV; this technology may exhibit some crosstalk or ghosting, but significantly less than anaglyph glasses, typically making it more comfortable to use. Finally, about 18.8% of participants free-viewed the images. The study conditions may thereby have contributed disproportionately to viewing discomfort.

## 7 Conclusion and Limitations
Our algorithm successfully produces stylized stereoscopic 3D line drawings from photographs. These line drawings reproduce 3D shape, especially when combined with monoscopic shading. Furthermore, for fine-grained stylizations, inconsistent shading did not have a negative impact on the perception of depth or comfort. However, large-grained stylizations, such as halftoning, were not as comfortable as their consistently-shaded variations, as expected.

A major limitation of our method is that the quality of its results depends largely on the quality of the disparity maps provided. Noisy, non-smooth disparity maps, as well as those with obfuscated or obscured object contours, will likely produce noisy line drawings in which object contours are not clearly visible. This, in turn, may produce line drawings with no identifiable subject. Overcoming this limitation is the subject of future work.

## References
[1] M. S. Banks, J. C. A. Read, R. S. Allison, and S. J. Watt. Stereoscopy and the human visual system. SMPTE Motion Imaging Journal, 121(4):24-43, 5 2012.
[2] D. N. Bhat and S. K. Nayar. Stereo and specular reflection. Int. J. Comput. Vision, 26(2):91-106, 2 1998.
[3] K. R. Brooks. Depth perception and the history of three-dimensional art: Who produced the first stereoscopic images? i-Perception, 8(1):2041669516680114, 2017.
[4] J. Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6):679-698, 11 1986.
[5] F. Cole, A. Golovinskiy, A. Limpaecher, H. S. Barros, A. Finkelstein, T. Funkhouser, and S. Rusinkiewicz. Where do people draw lines? In ACM SIGGRAPH 2008 Papers, SIGGRAPH '08, pp. 88:1-88:11. ACM, New York, NY, USA, 2008.

[6] M. Gelautz and D. Markovic. Recognition of object contours from stereo images: an edge combination approach. In 3D Data Processing, Visualization and Transmission, 2004.

[7] E. Goldstein. Sensation and Perception. Cengage Learning, 2009.
[8] A. Hertzmann. Introduction to 3D non-photorealistic rendering: Silhouettes and outlines. SIGGRAPH Course Notes, 01 1999.
[9] A. Hertzmann. Why do line drawings work? A realism hypothesis. Perception, 49(4):439-451, 2020. PMID: 32126897. doi: 10.1177/0301006620908207

[10] A. Hertzmann and D. Zorin. Illustrating smooth surfaces. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH, pp. 517- 526. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 2000.
[11] H. Kang, S. Lee, and C. K. Chui. Coherent line drawing. In Proceedings of the 5th International Symposium on Non-photorealistic Animation and Rendering, NPAR '07, pp. 43- 50. ACM, New York, NY, USA, 2007.
[12] Y. Kim, Y. Lee, H. Kang, and S. Lee. Stereoscopic 3D line drawing. ACM Transactions on Graphics, 32(4):57:1-57:13, 7 2013.
[13] Y.-S. Kim, J.-Y. Kwon, and I.-K. Lee. Stereoscopic line drawing using depth maps. In ACM SIGGRAPH 2012 Posters, SIGGRAPH '12, pp. 113:1-113:1. ACM, New York, NY, USA, 2012.
[14] Y. Lee, Y. Kim, H. Kang, and S. Lee. Binocular depth perception of stereoscopic 3D line drawings. In Proceedings of the ACM Symposium on Applied Perception, SAP '13, pp. 31-34. ACM, New York, NY, USA, 2013.
[15] B. Mendiburu. 3D Movie Making: Stereoscopic Digital Cinema from Script to Screen. Focal Press, 5 2009.

[16] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In 2011 10th IEEE International Symposium on Mixed and Augmented Reality, pp. 127-136, 2011. doi: 10.1109/ISMAR.2011.6092378

[17] L. Northam, P. Asente, and C. S. Kaplan. Consistent stylization and painterly rendering of stereoscopic 3D images. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, NPAR '12, pp. 47-56. Eurographics Association, Goslar, Germany, 2012.

[18] L. Northam, P. Asente, and C. S. Kaplan. Stereoscopic 3D image stylization. Computers and Graphics, 37:389-402, 08 2013.
[19] N. Ofir, M. Galun, S. Alpert, A. Brandt, B. Nadler, and R. Basri. On detection of faint edges in noisy images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):894-908, 2020. doi: 10.1109/TPAMI.2019.2892134
[20] C. Richardt, L. Šwirski, I. P. Davies, and N. A. Dodgson. Predicting stereoscopic viewing comfort using a coherence-based computational model. In Proceedings of the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging, pp. 97-104. ACM, New York, NY, USA, 2011.
[21] S. Savant. A review on edge detection techniques for image segmentation. International Journal of Computer Science and Information Technologies, 4:5898-5900, 08 2014.
[22] D. Scharstein and C. Pal. Learning conditional random fields for stereo. In Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, pp. 1-8, 6 2007.
[23] E. Stavrakis and M. Gelautz. Image-based stereoscopic painterly rendering. In Proceedings of the Fifteenth Eurographics Conference on Rendering Techniques, EGSR'04, pp. 53-60. Eurographics Association, Aire-la-Ville, Switzerland, 2004.

[24] E. Stavrakis and M. Gelautz. Stereoscopic painting with varying levels of detail. In Proceedings of SPIE - Stereoscopic Displays and Virtual Reality Systems XII, pp. 55-64, 2005.

[25] R. Toth, J. Hasselgren, and T. Akenine-Möller. Perception of highlight disparity at a distance in consumer head-mounted displays. In Proceedings of the 7th Conference on High-Performance Graphics, HPG '15, pp. 61-66. ACM, New York, NY, USA, 2015.
[26] C. Wheatstone. Contributions to the Physiology of Vision: Part the First: On Some Remarkable and Hitherto Unobserved Phenomena of Binocular Vision. 1838.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/O63xxj_lZqH/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,263 @@
§ GENERATING ROUGH STEREOSCOPIC 3D LINE DRAWINGS FROM 3D IMAGES
Category: Research
§ ABSTRACT
We present a method to produce stylized drawings from S3D images. Taking advantage of the information provided by the disparity map, we extract object contours and determine their visibility. The discovered contours are stylized and warped to produce an S3D line drawing. Since the produced line drawing can be ambiguous in shape, we add stylized shading to provide monocular depth cues. We investigate both consistently and inconsistently rendered shading to determine the relative importance of lines and shading to depth perception.
Index Terms: Computing methodologies - Non-photorealistic rendering
§ 1 INTRODUCTION
Stereoscopic 3D (S3D) is used in a variety of art forms, such as photography and film, to create the effect of depth. The perceived depth can provide a greater sense of reality, create an immersive or engaging experience, and serve as an artistic medium to induce emotional responses in the viewer. S3D creates a sense of depth by presenting a slightly different image to each eye. The left and right images exhibit horizontal separation between objects, which the brain interprets as depth. Producing S3D content is challenging, and emphasis must be placed on consistency: ensuring that the objects and scene visible in both views match exactly to produce a comfortable viewing experience and a correct depiction of depth [12, 18].
Line, or pen-and-ink, drawings are one of the oldest S3D art forms, dating back to Sir Charles Wheatstone's original drawings in the 1830s [26]. The format persists today in comics and diagrams. Although S3D line drawings can be produced from 3D meshes using automated algorithms, producing them from S3D photos has not received significant attention.
One possibility for producing line drawings from S3D photos is to use an S3D stylization algorithm such as the layer-based method presented by Northam et al. [17, 18]. This approach divides the S3D image and disparity${}^{1}$ maps into layers by disparity, such that each layer only contains pixels from a single disparity, then applies stylization to these layers. While their approach can be used with a variety of artistic styles and filters, contours and line drawing were not considered.
If we try to produce a line drawing using this method, the results are displeasing, because object contours are conflated with other edges, such as lighting and texture boundaries. The final result thus contains lines that do not convey shape. We could instead use the additional information provided by a disparity map to isolate object contours. However, the layer-based approach cannot extract these contours from the disparity map, because each layer contains only pixels with the same disparity value. Figure 1 illustrates this issue. Note how the edges found for the disparity layer do not correspond to object contours, which can be found in the disparity map; instead, they correspond to the edges of the region with the given disparity.
Figure 1: Line drawing from disparity layers. Note how each disparity layer only contains pixels of one value. Hence, extracting lines from such layers produces the edges of the layer pixels instead of the desired object contours.
In this paper, we present a method to produce stylized stereoscopic 3D line drawings from 3D photos using stereoscopic warping instead of layers. While constructing this method, we observed that some drawings of simple objects were ambiguous and did not uniquely identify the 3D shape. For example, the contour of a sphere is a circle and could represent either a flat circle or a sphere. Shading, a monocular depth cue, can help resolve these ambiguities. Hence, we also investigate the effects of adding stylized shading to our produced drawings.
The line extraction and rendering algorithm is presented in Section 3, and our shading method is discussed in Section 4. Our results are presented in Section 5. Finally, we present an evaluation of our results to verify their 3D comfort and depth quality in Section 6.
§ 2 BACKGROUND
A line drawing is a simplistic representation of an object or scene. It consists entirely of lines, which may be stylized, and contains no shading or colour. Despite their simplicity, such drawings can accurately convey the subject they depict. Hertzmann indicates line drawings "work" because they "approximate realistic renderings" [9].
Where do artists draw lines? A line drawing study by Cole et al. examined where artists draw lines for a variety of objects [5]. They observed that contours, creases and folds - which describe the shape of the object - were drawn, but lines depicting shadows or highlights were not. This was also observed by Hertzmann et al. while rendering line drawings for smooth meshes [10].
In a stereoscopic 3D line drawing, contours, creases and folds give the primary sense of an object's 3D shape and depth. Without other S3D cues from which to infer depth, it is important that these lines are as consistent as possible between left and right views. Inconsistencies can cause viewing discomfort from binocular rivalry${}^{2}$ and double vision, detracting from the viewer's perception of object depth. Previous studies have shown that these S3D lines alone sufficiently convey object shape and depth for many images [1, 12, 14].
${}^{2}$ Binocular rivalry is a phenomenon in which the brain rapidly switches between the left and right eyes because the images differ.
${}^{1}$ Disparity is inversely proportional to depth and conveys the horizontal separation between a point in the left and right views.
A number of algorithms have been proposed to produce stereoscopic 3D line drawings from meshes. Most notably, Kim et al. presented a method that produces 3D line drawings by generating contours for left and right eyes separately [12]. Contours are then pruned for view consistency by checking the visibility of points along the curve formed by creating an epipolar plane between a pair of points on the left and right contours. Kim et al. also describe a method for consistent stylization of lines by linking control points between matching contours and applying stylization to the linked pairs. However, this method can only be used with full 3D models.
Another paper by Kim et al. describes a method for producing stylized stereoscopic 3D line drawings from S3D photographs [13]. Their method applies Canny edge detection [4] to the edge tangent field [11] of the left stereo image and warps the discovered edges to the right image using the disparity map. However, the rendered lines come from all edges found in the image: not only object contours, but also texture and lighting boundaries. By contrast, a hand-drawn stereoscopic 3D line drawing would likely include only object contours and creases. Because their method is based on edge detection purely in the colour domain, it cannot differentiate between geometric discontinuities and colour discontinuities. Disparity maps, which indicate the horizontal separation between pixels of the left and right images, isolate geometric information from colour information. Therefore, applying edge detection to the disparity map can uniquely produce object contours. Our method harnesses the information in the disparity map to construct S3D line drawings.
§ 2.1 PERCEPTION AND MONOSCOPIC DEPTH CUES
Our perception of depth arises from both monoscopic (2D) and 3D depth cues. Monoscopic depth cues include shading, relative size, occlusion, and motion [1, 3]. Shading an S3D line drawing can improve depth perception, but the amount of improvement is limited for images with rich detail [14]. However, for S3D line drawings of simple objects with few internal lines, shading provides the necessary information about object shape. For example, imagine a circular contour: is this a line drawing of a circle or a sphere?
Stereo-consistent shading is complicated by the fact that shading can be view-dependent. Apart from purely Lambertian surfaces, shading features such as specular highlights may be visible in only one eye due to the position of the eyes with respect to the light source and object [2]. This phenomenon can also be observed in S3D photographs, as demonstrated in Figure 2: both specular highlights and reflections differ between left and right views (circled in cyan for visibility). While these inconsistent highlights are natural to the human visual system, they can be problematic for computer vision algorithms commonly used with stereo [2]. Additionally, in film production it is believed that specular highlights can cause binocular rivalry if they are rendered inconsistently between views, so they are often removed or redrawn to be consistent [15]. We give users the option of adding shading to our S3D line drawings to improve the perception of shape. However, our shading will remain true to nature; that is, we will not remove or adjust the natural lighting of the S3D images to enforce consistency.
Figure 2: Inconsistent specular highlights and reflections.
§ 2.2 STYLIZATION
Stylization can be applied to both the S3D lines and shaded regions. Surprisingly, it has been shown that simple line styles such as overdrawn lines, varying thickness, and jitter do not negatively impact depth perception and comfort in S3D images if rendered consistently [14].
The naive approach to creating consistent stylized lines is to render them in the left view, then warp (horizontally shift) them to the right using the disparity map. However, the rendered, stylized object contours may have pixels that bleed over onto other surfaces, so warping individual pixels would not produce smooth lines. Alternatively, the control points of curves or the endpoints of line segments could be warped. But if any of these points from the stylized lines lie on other objects, the lines rendered in the right view may be discontinuous or distorted. Instead, we match the original control points to their underlying disparities prior to stylization or rendering, similar to the approach used by Kim et al. [12]. Although there are many ways to stylize lines, we focus on overdrawn and jittered styles.
In addition to stylized contours, we will also stylize shaded regions. Stereoscopic 3D image stylization has been well studied, although existing methods focus on stylizing the whole image consistently rather than a small region. Stavrakis et al. applied stylization to the left image and used the disparity map to warp it to the right, then did the reverse for occluded regions [23, 24]. As discussed previously, Northam et al. applied stylization using disparity layers [17, 18]. However, since lighting and specular highlights are view-dependent, these methods would enforce consistency where none exists. Hence, we apply stylization algorithms to shaded regions in a view-dependent manner to preserve these inconsistencies. In doing so, we contradict Richardt et al. [20] and Northam et al. [17, 18], who focus on establishing or maintaining consistency at all costs. However, we believe that because shading is a monocular depth cue, binocular rivalry and randomness will have minimal effects on viewer perception.
§ 3 PRODUCING A STEREOSCOPIC 3D LINE DRAWING
Our method is divided into several stages and requires left and right images and disparity maps as input. First, the object-depiction contours are extracted. Next, the contours are split into curves by view visibility: left-only, right-only, and shared. Curves are stylized, then warped from left-to-right. Finally, shading may be added to improve depth perception.
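The left-to-right warping stage can be illustrated concretely: each control point keeps the disparity sampled beneath it in the left disparity map, and its x coordinate is shifted by that disparity. This is a minimal NumPy sketch of how such a stage could look, not the authors' implementation; `warp_curve_to_right` is a hypothetical helper name.

```python
import numpy as np

def warp_curve_to_right(points, left_disp):
    """points: sequence of (x, y) control points in the left view."""
    pts = np.asarray(points, dtype=float)
    xs = pts[:, 0].astype(int)
    ys = pts[:, 1].astype(int)
    d = left_disp[ys, xs]          # disparity under each control point
    right = pts.copy()
    right[:, 0] -= d               # horizontal shift only; y is unchanged
    return right

# Toy constant-disparity surface: every point shifts left by 3 pixels.
disp = np.full((10, 10), 3.0)
curve = [(5, 2), (6, 3), (7, 4)]
warped = warp_curve_to_right(curve, disp)
```

Sampling the disparity per control point (rather than per rendered pixel) is what later allows stylized, jittered strokes to be warped coherently.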
§ 3.1 EXTRACTING CONTOURS
The shape of an object is described by its silhouette (contours) and interior creases and folds [8, 10]. Both contours and interior lines are needed to give a clear sense of shape [5]. While these lines may be found in the S3D image, they are more easily isolated using information in the disparity map.
There are two possible approaches to finding contours (edges). The first is to apply edge detection methods to the raw or preprocessed disparity map. The second is to perform a 3D reconstruction of the scene using the provided disparity maps and apply a silhouette-finding algorithm, such as that of Hertzmann and Zorin [10], to identify the edges.
We use the first approach, applying edge detection to the raw or preprocessed disparity map, instead of discovering silhouettes from a 3D reconstruction, for three reasons. First, while many of the edges can be found from the reconstruction using Hertzmann and Zorin's approach [10], more subtle edges occurring where two objects intersect at the same depth are not always identified, as shown in Figure 3. Second, after identifying the silhouette edges from the 3D reconstruction, the visibility of those edges must be computed for each eye, as in Kim et al. [12]; however, visibility is already given in the disparity map, so recomputing this information is wasteful. Finally, we do not assume that the baseline and focal length of the image are known or can be estimated well enough to produce a believable 3D reconstruction.
Figure 3: Hertzmann and Zorin's method vs. our method.
§ 3.1.1 FINDING EDGES IN A DISPARITY MAP
There are two types of edges in a disparity map. The first type occurs at a depth discontinuity, where one object occludes another, creating a jump in neighbouring disparity values. The second type occurs where two surfaces meet at the same depth, or as creases and folds on an object's surface. The first type of edge can be found using a Laplacian or Canny edge detector, as shown in Figure 4.
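The distinction between the two edge types can be seen with a toy disparity map (all values here are made up for illustration): a near object occluding a far background produces a "type one" jump that a first-derivative test finds easily, whereas two surfaces meeting at the same depth produce only a change of slope, which such a test misses.

```python
import numpy as np

# Type one: a near object (disparity 80) occludes the background (40),
# producing a large jump between neighbouring disparity values.
disp = np.full((6, 8), 40.0)
disp[:, 4:] = 80.0

grad = np.abs(np.diff(disp, axis=1))   # horizontal first differences
type_one = grad > 10                   # large jump -> occlusion boundary

# Type two: a flat floor meeting a receding wall at the same depth.
# There is no jump, only a slope change, so the same test finds nothing.
crease = np.concatenate([np.full(4, 40.0), 40.0 + 5.0 * np.arange(1, 5)])
max_step = np.abs(np.diff(crease)).max()   # stays below the jump threshold
```

This is why the paper turns to a lit-normal-map representation for type two edges: a slope change becomes a colour change once normals are shaded.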
Figure 4: The strong edges of a disparity map may be found by a Canny or Laplacian edge detector, but the results are not ideal.
The second type of edge is more elusive. Adjusting the parameters of a Laplacian or Canny detector can find these edges, but not uniquely, as shown in Figure 5.
Figure 5: The second type of edge is hard to detect uniquely using Canny or Laplace.
Our method successfully and uniquely identifies these edges, as shown in Figure 6.
Figure 6: The second type of edge is easy to detect uniquely using our method.
To make these low-contrast edges more visible, we can convert disparity to depth and apply a bilateral filter to smooth the plateaus in the result, as suggested by [16]. However, while this improves the visibility of type one edges, it does not improve visibility for all type two edges, as shown in Figure 7.
Figure 7: Our method improves visibility for type two edges. (a) A stereoscopic 3D scene with creases in the background. (b) Background creases are not found using [16]'s method. (c) Background creases are found using our method.
We note that finding type two edges, or finding edges in a low-contrast or noisy region, is known to be a difficult problem [19]. Savant indicates that Laplacian, Canny, and other detectors are not able to uniquely identify edges in low-contrast areas [21]. And while second-order derivative methods can identify some edges that are zero-crossings, not all type two edges are zero-crossings.
Hence, we propose the following method to identify edges in disparity maps. First, we use a Canny edge detector, as suggested by Gelautz and Markovic, to identify type one edges [6].
Next, we improve the visibility of type two edges. Hertzmann suggested rendering a scene in which different coloured directional lights are cast along the positive and negative axes onto a 3D model to produce a brightly-coloured normal map from which edges can be found [10]. In order to apply this technique to our disparity maps, each pixel needs a surface normal. We assign surface normals by applying a simple surface triangulation to each map. Each pixel position $(x, y)$ is a vertex with depth $z$ equal to the disparity at that position. Triangular "faces" are formed by a point $p$ in the disparity map and two of its immediate neighbours, $q$ and $r$. A normal can then be calculated for each of these faces, as well as a vertex normal from the average of the eight adjacent triangular face normals.
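The triangulation step above can be sketched as follows. This is our reading of the construction, with illustrative details (array layout, use of the right and lower neighbours): the face normal is the cross product of the two surface vectors leaving each vertex.

```python
import numpy as np

def face_normals(disp):
    """Unit face normals for a disparity grid, one per top-left vertex."""
    disp = np.asarray(disp, dtype=float)
    # change in z along +x and +y for each (non-border) vertex
    dzdx = disp[:-1, 1:] - disp[:-1, :-1]
    dzdy = disp[1:, :-1] - disp[:-1, :-1]
    # surface vectors to the right and lower neighbours
    vx = np.stack([np.ones_like(dzdx), np.zeros_like(dzdx), dzdx], axis=-1)
    vy = np.stack([np.zeros_like(dzdy), np.ones_like(dzdy), dzdy], axis=-1)
    n = np.cross(vx, vy)           # (-dzdx, -dzdy, 1), then normalized
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

flat = np.zeros((4, 4))            # a flat surface faces the viewer
normals = face_normals(flat)       # every normal is (0, 0, 1)
```

Vertex normals would then be the average of the adjacent face normals, and colouring comes from dotting each normal with the directional light vectors.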
The normals are multiplied by a directional light vector to enhance visualization, as illustrated in Figure 8. However, when directional lights are cast onto the lit normal map, we do not observe a smoothly shaded cat as expected. Instead, the normal map appears stepped - with rings of front-facing planes depicted in dark red. This stepped appearance is a consequence of the limited dynamic range of most disparity maps. A perfectly smooth surface cannot be depicted in this discrete space, resulting in many pixels being assigned the same integer disparity instead of their actual values. These artificial edges make discovery of actual edges, such as the interface between the wall and floor, difficult to achieve.
To remove the stepped or plateaued appearance, floating point disparities are needed, along with a smoothing operator to reduce the discretization. Ideally, converting the disparity map from 8-bit to floating point and applying a simple out-of-the-box smoothing operation would flatten these plateaus. However, directly applying a bilateral filter will preserve or even enhance these edges, and a Gaussian or box filter would soften all edges indiscriminately, effectively blending objects together. Instead, to smooth these plateaus and generate a smooth lit disparity map with preserved edges, we:
1. Compute the strong edges using a Canny filter, dilating the result to produce an edge mask where edges have a diameter of 10 pixels.
2. Calculate the surface normals via triangulation as previously discussed. Use larger triangles in regions away from edges and smaller triangles along edges. This hybrid approach gives us clear, prominent lines corresponding to key edges, and additional smoothing elsewhere. Do this for both the discrete (un-smoothed) and floating point (smoothed) disparity maps.
3. Cast directional lights to colourize and produce the smoothed and un-smoothed maps. This yields Figure 8(a) and Figure 8(b), respectively.
4. Compute the complexity of the discrete (un-smoothed) disparity map, $\alpha$, as the number of observed disparities. Apply a bilateral filter to the smoothed map $\frac{\alpha}{10}$ times${}^{3}$.
5. To correct the blown-out edges caused by the previous step, extract the strong mask edges from the unsmoothed map (that is, the pixels of the unsmoothed map coinciding with the pixels of the mask generated in step 1) and superimpose them on the smoothed map. Apply a bilateral filter to the smoothed map $\frac{\alpha }{10}$ times to help blend the edges in.
6. Overlay the original, un-dilated version of the mask on top of the smoothed map, as seen in Figure 8(c), to aid in the identification of strong edges.
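A toy 1D bilateral filter shows why the steps above repeat a bilateral filter rather than applying Gaussian smoothing: single-level quantization plateaus are blended away while the large occlusion jump survives. The radius and sigma values here are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def bilateral_1d(sig, radius=2, sigma_s=1.5, sigma_r=4.0):
    """Weights combine spatial closeness and value closeness."""
    sig = np.asarray(sig, dtype=float)
    out = np.empty_like(sig)
    for i in range(len(sig)):
        lo, hi = max(0, i - radius), min(len(sig), i + radius + 1)
        window = sig[lo:hi]
        w = (np.exp(-((np.arange(lo, hi) - i) ** 2) / (2 * sigma_s ** 2))
             * np.exp(-((window - sig[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Quantized ramp (plateaus of height 1) followed by a depth discontinuity.
disp = np.array([10, 10, 11, 11, 12, 12, 50, 50, 50], dtype=float)
for _ in range(3):                 # analogous to the alpha/10 repetitions
    disp = bilateral_1d(disp)
```

After a few passes the 10/11/12 plateaus merge into a smooth ramp, while the 12-to-50 jump is essentially untouched because its range weight is near zero.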
We can now apply the Canny edge operator to the smoothed and coloured map in an automated fashion.
In general, we seek to compensate for less detailed masks with more permissive Canny thresholds that yield a more detailed final edge set. Conversely, more detailed masks imply that a stricter threshold should be used, to prevent an overly noisy final edge set. Let $\beta$ be the number of mask pixels, and let $x$ be $\beta$ divided by the total number of pixels. The level of detail in the mask, $x$, is inversely proportional to the number of pixels desired in the final edge set.
We also want to take into account the aforementioned disparity map complexity, $\alpha$. This is another inversely proportional relationship, between disparity map complexity and the target number of pixels in the final edge set. The more complex the disparity map, the greater the number of easily identifiable edges, and the less permissive the Canny threshold needs to be. Conversely, the less detailed the disparity map, the more difficulty Canny will have extracting edges from it, and the more permissive we should make the threshold.
We will use these two complexity measures - and the direct, inversely proportional relationships that we observed - to select Canny thresholds automatically. In so doing, we are letting the image - not the user - do the talking.
Let $\phi = \frac{1}{\alpha x}$ be the target number of pixels in the final edge set.
We want the final edge set to contain at least as many pixels as the mask; only then can more edges be found to supplement the mask edges. Therefore, we modify our target to $(1 + \sigma)\beta$, where $\sigma = \min(3, \phi)$. Notice that we cap $\phi$ at 3; experimentally, we have found that larger values introduce substantial noise.
What we have established is a target number of final edge pixels, not a Canny threshold parameter. But each Canny threshold produces a certain number of edge pixels: the higher the threshold, the fewer pixels in the final result; the lower the threshold, the more pixels. The minimum threshold parameter is $\min = 0$ and the maximum is $\max = 255$. Using these boundaries, we conduct a binary search for the ideal parameter. We start with a threshold midway between min and max, then count the number of pixels in the resulting edge set. If the result is below target, we need to be more permissive, so we lower our max and try a lower threshold in the next iteration. If the result exceeds the target, we need to be less permissive, so we increase our min. We stop once $\max = \min$, or the edge pixel count equals the target.
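The target computation and threshold search above can be sketched as follows. `count_edge_pixels` stands in for "run Canny at threshold t and count the resulting edge pixels", which we assume is monotonically non-increasing in t; the function names are ours, not the authors'.

```python
def edge_target(alpha, beta, total_pixels):
    """Target edge-pixel count from mask size and disparity complexity."""
    x = beta / total_pixels        # mask level of detail
    phi = 1.0 / (alpha * x)        # phi = 1 / (alpha * x), as in the text
    sigma = min(3.0, phi)          # cap of 3 to limit noise
    return (1 + sigma) * beta

def search_threshold(count_edge_pixels, target, lo=0, hi=255):
    """Binary search for the Canny threshold hitting the pixel target."""
    while lo < hi:
        mid = (lo + hi) // 2
        n = count_edge_pixels(mid)
        if n == target:
            return mid
        if n < target:             # too few edges: be more permissive
            hi = mid
        else:                      # too many edges: be stricter
            lo = mid + 1
    return lo

# A fake, strictly decreasing pixel-count curve for demonstration:
t = search_threshold(lambda t: 255 - t, target=100)   # -> 155
```

With real Canny output the pixel count is only approximately monotone, so the search would stop at `max = min` rather than an exact hit.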
Once edges are found, we use the findContours function in OpenCV to extract curve points from the raster image. The extracted curve points are processed to remove curve duplication. The curves are also split whenever adjacent point disparities differ by more than a small threshold, under the assumption that the adjacent points belong to separate surfaces. We note that some of the original detail may be lost in this process.
${}^{3}$ $\frac{\alpha}{10}$, and other parameter values, were selected by applying the method to our test set of 12 images and choosing the values that produced the best overall results across all images.
Figure 8: Directionally lit disparity maps.
§ 3.1.2 SPLITTING LINES FOR CONSISTENCY
Edges extracted from the disparity maps are rendered as smooth splines for the corresponding view. However, view-dependent rendering introduces inconsistencies, which arise when edges are found in one disparity map but not the other due to the slight variation in viewpoint.
To prevent these inconsistencies - which cause discomfort - we arbitrarily select the left view's edges to be the "true" edges. Then, any line in the left view that is visible in the right is warped to the right view using the disparity map. Lines only visible in the left are rendered only in the left; likewise, lines only visible in the right are rendered only in the right. View visibility is determined by the disparity maps. Any pixel $p(x, y)$ is visible in both views if $L(x, y) = d$ and the corresponding pixel satisfies $R(x - d, y) = d$, where $L$ and $R$ are the left and right disparity maps, respectively, and $d$ is the disparity of pixel $p$.
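The visibility test above is direct to implement; the sketch below is a minimal version of it (the occlusion values in the toy maps are invented for illustration).

```python
import numpy as np

def visible_in_both(L, R, x, y):
    """True if pixel (x, y) passes the cross-check L(x,y)=d, R(x-d,y)=d."""
    d = int(L[y, x])
    if x - d < 0:
        return False               # warps outside the right image
    return int(R[y, x - d]) == d

L = np.full((4, 8), 2)             # toy left disparity map, disparity 2
R = np.full((4, 8), 2)             # matching right disparity map
R[1, 3] = 0                        # simulate an occlusion mismatch

a = visible_in_both(L, R, 5, 0)    # True: R(3, 0) confirms disparity 2
b = visible_in_both(L, R, 5, 1)    # False: R(3, 1) disagrees
```

Pixels failing the check are treated as visible in only one view, and strokes passing through them are rendered in that view alone.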
Contours are warped by their control points and then rendered in a view-dependent manner. While this method of rendering potentially introduces inconsistencies, warping the rendered lines would introduce noise as the lines may lie on surfaces with different disparities.
Finally, we note that long edges extracted from the disparity map may span multiple objects and both occluded and visible regions. Warping the entire stroke can result in partially occluded edges being visible in the wrong view. To prevent this, we split strokes whenever the visibility of adjacent control points changes, i.e. from visible to occluded. We also split strokes when the disparities of adjacent control points differ by more than some threshold $\tau_d$${}^{4}$. Strokes that cannot be warped, because they are only visible in one view, are rendered in only one view.
§ 3.1.3 CONSISTENT CONTROL POINT STYLIZATION
Monoscopic and S3D line drawings are often stylized and represent objects using rough, overdrawn and jittery lines. To increase visual interest, we provide the option of stylizing S3D lines with an overdrawn or jittered style.
Kim et al. discussed a method for stylizing stereoscopic 3D lines [12]. Their method performs stylization after lines have been discovered for both left and right images. Specifically, it links line segments in the left view to the matching segments in the right and consistently renders texture to these linked and parameterized curves.
Our approach is similar but stylizes lines prior to warping by replicating and transforming control points. To produce overdrawn lines, curves are duplicated a fixed number of times. Lines can then be scaled (about their centroids or the centre of the image) by a small random factor, so that the overdrawn lines are visibly distinct. A jittered or rough appearance is created by adding small random translation vectors to each control point of a line. Note that, prior to altering the control points of a line, it is important to store the original, pre-transformed disparities of those control points, so that they can be correctly warped after stylization.
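The overdraw-and-jitter step can be sketched as below. The copy count, scale range, and jitter magnitude are illustrative assumptions; the key detail from the text is that each stroke copy keeps the original, pre-jitter disparities for later warping.

```python
import numpy as np

def stylize(points, disps, copies=3, jitter=1.5, seed=0):
    """Return `copies` overdrawn, jittered variants of one curve."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    disps = np.asarray(disps)
    strokes = []
    for _ in range(copies):                        # overdrawn duplicates
        scale = 1.0 + rng.uniform(-0.02, 0.02)     # slight scale about centroid
        c = pts.mean(axis=0)
        p = c + scale * (pts - c)
        p += rng.normal(0.0, jitter, size=p.shape) # per-point jitter
        strokes.append((p, disps))                 # original disparities kept
    return strokes

strokes = stylize([(0, 0), (10, 0), (20, 0)], disps=[5, 5, 5])
```

Because the stored disparities are untouched, every jittered copy warps to the right view by the same horizontal shifts, keeping the two views consistent.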
§ 4 SHADING
S3D line drawings depict the shapes of objects. However, these lines do not convey information such as surface texture or roundness; shading and highlights do. Object shading and shadows are monocular depth cues [7], and shading, particularly involving specular highlights, is view-dependent [25]. Adding monocular depth cues to S3D line drawings can improve understanding of surface shape and enhance depth perception. Because lighting is view-dependent, the left and right views are stylized independently to preserve their separate lighting characteristics.
To produce the stylized shading, the left and right input images are converted to grayscale and stylized using a variety of algorithms. While any stylization algorithm or filter could be used, we chose those that do not explicitly render contours, as our method will produce those separately. Finally, the stylized shading and S3D lines are combined to produce the final image.
§ 5 RESULTS
We tested our method on several S3D images, some of which are shown in Figure 9. Seventy-five percent of the images used as input to our method have high-quality or near-perfect disparity maps.
Figure 10 illustrates some of the 3D line drawings produced by our method. Note that, since lines are generated from the disparity map, contours and interior lines are the only lines visible.
Stylizing the S3D lines yielded the images depicted in Figure 11. Note that even with jittered and overdrawn lines, the left and right views remain consistent.
We used three types of stylization for shading: toon-like shading produced by quantization of the RGB image, impressionist, and halftoning with large particles. None of these stylizations explicitly render contours, so there is no overlap between shading and the line drawing. We combined the stylized S3D line drawings with stylized shading to produce our final images, some of which are shown in Figure 12. To ensure the visibility of the S3D lines, we reduced the darkness of shaded regions by ${50}\%$ .
$^{4}$ We used $\tau_d = 5$, as we observed that curve points are close together, with no large jumps in depth.
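One plausible reading of how such a threshold is applied, splitting a curve wherever consecutive points jump in depth by more than $\tau_d$, is sketched below (hypothetical helper; the paper does not provide this code):

```python
def split_on_depth_jumps(depths, tau_d=5):
    """Split a curve's per-point depth sequence wherever consecutive
    points jump by more than tau_d, yielding index ranges of segments."""
    segments, start = [], 0
    for i in range(1, len(depths)):
        if abs(depths[i] - depths[i - 1]) > tau_d:
            segments.append((start, i))
            start = i
    segments.append((start, len(depths)))
    return segments

print(split_on_depth_jumps([10, 11, 12, 30, 31], tau_d=5))  # [(0, 3), (3, 5)]
```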
Figure 9: Sample inputs to S3D line drawing algorithm.
Figure 10: S3D line drawings.
Figure 11: S3D line drawings (red/cyan anaglyph).
Figure 12: Final stylized S3D line drawings (red/cyan anaglyph).
We also applied our method to S3D photos with computed disparity maps. These maps contain noise, disparity mismatches, and obscured object contours, which pose a challenge for many S3D algorithms. Figure 13 demonstrates our method's performance on S3D photos with disparity maps of varying quality. Despite these disparity errors, our method still produces line drawings, although some lines are missing, typically because they are not visible in the disparity map.
§ 6 EVALUATION
To evaluate our results for quality of depth reproduction and viewing comfort, we conducted a short study. For health and safety reasons due to COVID-19, the study was conducted remotely. We asked participants to view a set of 24 images from our dataset in their homes, using anaglyph glasses, a 3D TV, or a VR headset, or by free-viewing. For each image, participants were asked to rate the viewing comfort and apparent depth on a Likert scale from 1 to 5. Participants were also asked to rate how aesthetically pleasing they found each image. Images were randomized, and participants were not aware of what they would be viewing.
Overall, participants found that our consistent line drawings are more comfortable, reproduce a greater sense of depth, and are more aesthetically appealing than the raw, inconsistently-rendered line drawings. Table 1 indicates the average difference between each of our results and the inconsistently-rendered line drawings. This difference is a percentage increase from the raw, unstylized lines to our method. So, for example, the first cell demonstrates that the average score was 26% better for our consistently-rendered, unstylized lines than for raw, unstylized lines rendered inconsistently. Note that adding stylization to our lines improved comfort, depth reproduction, and the overall aesthetic. We expected participants to find the stylized lines more aesthetically pleasing, but we did not anticipate they would find these more comfortable or conducive to a greater sense of depth. However, the stylized lines are more prominent than the unstylized lines and may provide participants with more visual information to fuse, resulting in greater viewing comfort and depth. Also note that adding shading, a monocular depth cue, significantly improved all metrics, regardless of how that shading was rendered. Even the halftone/newsprint shader applied inconsistently, which renders large circles into the scene, was more comfortable, produced more depth, and was more aesthetically pleasing than plain lines. This is interesting, because these images were 55% less consistent than our plain line drawing. We computed consistency by comparing the colour values of pixels that should match according to the disparity map.
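The consistency measure described in the last sentence can be sketched as follows, assuming integer per-pixel disparities and single rows for brevity (our formulation, not necessarily the authors' exact metric):

```python
def consistency(left, right, disparity):
    """Fraction of left-view pixels whose colour matches the
    corresponding right-view pixel under the disparity map
    (1D rows of gray values for brevity)."""
    matches = total = 0
    for x, d in enumerate(disparity):
        xr = x - d  # corresponding column in the right view
        if 0 <= xr < len(right):
            total += 1
            matches += left[x] == right[xr]
    return matches / total if total else 0.0

left = [10, 20, 30, 40]
right = [10, 20, 99, 40]
disp = [0, 0, 0, 0]
print(consistency(left, right, disp))  # 0.75
```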
Figure 13: Line drawings from photos with disparity maps of varying quality.
| | Comfort | Depth | Appearance |
|---|---|---|---|
| our unstylized lines | 26% | 14% | 20% |
| our stylized lines | 32% | 16% | 26% |
| our unstylized lines with consistent shading | 46% | 46% | 71% |
| our unstylized lines with inconsistent shading | 27% | 42% | 41% |
| our stylized lines with consistent shading | 48% | 45% | 59% |
| our stylized lines with inconsistent shading | 52% | 41% | 66% |
Table 1: The difference in comfort, depth reproduction, and aesthetic appearance between raw, unstylized and inconsistently rendered lines and our method. Note that the averaged participant scores for view-dependent, unstylized lines are presented in Table 2.
| | Comfort | Depth | Appearance |
|---|---|---|---|
| view-dependent, unstylized lines | 2.5 | 2.6 | 2.1 |
| our stylized lines with consistent shading | 3.7 | 3.8 | 3.4 |
Table 2: Averaged participant scores for the view-dependent, unstylized, and inconsistently rendered lines used as the baseline for comparing our various methods. Averaged participant scores for our stylized lines with consistent shading are provided for reference.
Ideally, our participants would be a random sample of individuals with varying backgrounds and exposure to S3D. However, as we were required to run this study remotely, we relied on finding individuals who owned their own S3D viewing equipment or were able to free-view. Hence, our participant pool was drawn from individuals who could be considered S3D enthusiasts. Consequently, participants were critical, and quick to identify and articulate flaws in images, such as window violations and ghosting. However, we appreciated their honest and experienced assessments, as they provided a clearer and more concise evaluation of our results.
We also note that the study conditions were not ideal. First, we relied on participants to self-report their ability to perceive depth. Second, due to the rarity and variety of S3D viewing equipment available, it is unlikely that any two participants used the exact same viewing technology. We categorized viewing mechanisms into three groups: anaglyph, 3DTV/3DS/VR, and free-viewing. Of the 16 participants, 50% used anaglyph glasses, which are prone to crosstalk and ghosting that may cause discomfort. A smaller number, 31.2%, used some other 3D viewing apparatus, such as a 3DTV; this technology may exhibit some crosstalk or ghosting, but significantly less than anaglyph glasses, typically making it more comfortable to use. Finally, about 18.8% of participants free-viewed the images. The study conditions may thereby have contributed disproportionately to viewing discomfort.
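For reference, these percentages correspond to 8, 5, and 3 of the 16 participants; a quick arithmetic check (counts reconstructed by us from the reported percentages):

```python
from collections import Counter

# Viewing-mode counts for the 16 participants, reconstructed from the
# reported shares of 50%, 31.2%, and 18.8%.
modes = Counter({"anaglyph": 8, "3DTV/3DS/VR": 5, "free-viewing": 3})
total = sum(modes.values())
shares = {m: round(100 * n / total, 1) for m, n in modes.items()}
print(total, shares)  # 16 {'anaglyph': 50.0, '3DTV/3DS/VR': 31.2, 'free-viewing': 18.8}
```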
§ 7 CONCLUSION AND LIMITATIONS
Our algorithm successfully produces stylized stereoscopic 3D line drawings from photographs. These line drawings reproduce 3D shape, especially when combined with monoscopic shading. Furthermore, for fine-grained stylizations, inconsistent shading did not have a negative impact on the perception of depth or comfort. However, large-grained stylizations, such as halftoning, were not as comfortable as their consistently-shaded variations, as expected.
A major limitation of our method is that the quality of its results largely depends on the quality of the disparity maps provided. Noisy, non-smooth disparity maps, as well as those with obscured object contours, will likely produce noisy line drawings in which the object contours are not clearly visible. This, in turn, may produce line drawings with no identifiable subject. Overcoming this limitation is the subject of future work.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/QhN4tUZd8r/Initial_manuscript_md/Initial_manuscript.md
# Paper Forager: Supporting the Rapid Exploration of Research Document Collections
Author One, Author Two, Author Three

Fig. 1. Three views of the Paper Forager system: (A) the initial state of the system showing all 5,055 papers in the sample corpus from the ACM CHI and UIST conferences, (B) the filtered results showing only the papers containing an individual keyword, and (C) a sample paper overview page, from which a user can click on a page to read its content.
Abstract- We present Paper Forager, a web-based system which allows users to rapidly explore large collections of research documents. Our sample corpus uses 5,055 papers published at the ACM CHI and UIST conferences. Paper Forager provides a visually based browsing experience, allowing users to identify papers of interest based on their graphical appearance, in addition to providing traditional faceted search techniques. A cloud-based architecture stores the papers as multi-resolution images, giving users immediate access to reading individual pages of a paper, thus reducing the transaction cost between finding, scanning, and reading papers of interest. Initial user feedback sessions elicited positive subjective responses, and a 24-month external deployment generated in-the-wild usage data which we analyze. Users of the system indicated that they would be enthusiastic to continue having access to Paper Forager in the future.
Index Terms-literature review, document search, document browsing, corpus visualization
## 1 INTRODUCTION
Literature reviews can be a long and tedious task requiring information seekers to sort through a large number of documents and follow extended chains of related research. With paper proceedings, users can easily scan and read any of the papers, but finding specific papers can be difficult.
In contrast, online digital libraries and search systems improve the ability to find specific papers of interest. A number of new systems have been developed [1]-[6] which provide advanced faceted search and filtering capabilities. However, these systems are driven by metadata and textual content and ignore visual qualities such as figures, graphics, layout, and design. Furthermore, such systems require the user to download the source PDF file before the paper can be read in detail. We seek a single system that can support a continuous transition between finding, scanning, and reading documents within a corpus.
- Submitted to GI 2021
Web technologies such as DeepZoom [7] and Google Maps support browsing of extremely large image-based data sets through the progressive loading of multi-resolution images. This type of architecture is beneficial in that it gives users rapid access to detailed content. However, we are unaware of any prior systems which have used such an architecture for document exploration.
In this paper we present Paper Forager, a system to support the rapid filtering and exploration of a collection of research papers. Paper Forager relies on a cloud-based architecture, storing the papers as multi-resolution images that can be progressively downloaded on-demand. By using this architecture, we allow the user to transition from browsing an entire corpus of thousands of papers, to reading any individual page within that corpus, within seconds. In doing so, we accomplish our goal of reducing the transaction cost between finding, scanning, and reading papers of interest.
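A multi-resolution pyramid of the kind used by DeepZoom-style viewers can be sketched as follows; the real format's level numbering and tile overlap differ, so treat this only as an illustration of why coarse previews download quickly:

```python
import math

def pyramid(width, height, tile=256):
    """Repeatedly halve a page image until it fits in a single tile,
    recording (width, height, tiles_x, tiles_y) per level, coarsest
    first. The viewer fetches coarse tiles immediately and streams in
    finer levels as the user zooms."""
    levels = []
    w, h = width, height
    while True:
        levels.append((w, h, math.ceil(w / tile), math.ceil(h / tile)))
        if w <= tile and h <= tile:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels[::-1]

# A letter-size page scanned at roughly 200 dpi:
levels = pyramid(1700, 2200)
```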
Our main research contribution is the development of a novel system for literature review, which synthesizes previously explored concepts such as faceted search and zooming-based interfaces. We present the design and implementation of Paper Forager and its associated architecture, implemented on a sample corpus of 5,055 papers from the ACM CHI and UIST conferences. Additionally, we present results gathered from initial user feedback and a 24-month external deployment of the system. Users of the system felt it was easy and enjoyable to use, and the majority indicated that they would like to continue using Paper Forager in the future.
## 2 RELATED WORK
### 2.1 Faceted Search
Faceted search allows users to explore a collection by filtering on multiple dimensions. While powerful, representing all of the available options in a user interface can be problematic [8]. Many papers have looked at improving the faceted searching experience. FacetLens [9] represents facets as nested areas on the interface, and FacetAtlas [10] displays the relationships between related facets through a weighted network diagram and colored density map. Pivot Slice [6] uses a collection of research papers as a sample corpus and allows users to explore relationships between facets using direct manipulation. The faceted search system in Paper Forager is designed to be more approachable for new users than the above systems, at the expense of being less versatile in the types of queries which can be performed.
### 2.2 Visual Document Browsing
There have been numerous research projects exploring the space of visual document browsing.
The WebBook and Web Forager [11] pre-loaded and rendered web pages so they could be rapidly flipped through, and more recently, Hong et al. [12] looked at improving the digital page flipping experience. Document Cards [13] extracts important terms and images from a document and displays them in compact representations.
The DocuBrowse system [14] is designed to browse and search for documents in large online enterprise document collections. Similar to Paper Forager, DocuBrowse includes both a faceted search interface and visual thumbnails of results. While source content can be opened, it is not clear how long it would take to download and view an individual document. Paper Forager expands upon ideas from the DocuBrowse interface, and uses a cloud-based architecture to support rapid viewing through the progressive loading of multi-resolution images. Paper Forager also takes advantage of the connections between documents, such as citation networks, while DocuBrowse supports a wider range of file types without examining their interconnectivity.
While not directly related to document browsing, the PhotoMesa system [15] allows zooming into a large number of images which are grouped and sorted by available metadata. Similarly, the Pivot Viewer component of the Silverlight framework [16] supports faceted searching of a collection of images based on associated metadata. Results are displayed using a dynamically resizing grid of images, using the Silverlight Deep Zoom technology [7]. We are unaware of attempts to use this type of technology for the exploration of research document collections. Paper Forager implements an architecture similar to Pivot Viewer, but with a customized design and interface for the purpose of rapidly exploring a corpus of research literature.
### 2.3 Research Literature Exploration Tools
There are many deployed systems which provide search access to collections of research papers, including Google Scholar [17], Mendeley [18], CiteSeerX [19], Microsoft Academic Search [20], and the ACM Digital Library [21]. For a thorough analysis, readers are directed to Gove et al.'s evaluation of 14 such systems [4], which highlights the strengths and weaknesses of each system.
There are also research systems which have looked at the topic of research literature exploration. Aris et al. [1] and PaperLens [5] present visualization tools which use paper metadata to show temporal patterns of paper publication, and each uses citation links among papers to explore a field's rate of growth and identify key topics. Along similar lines, the PULP system [22] uses reinforcement learning to find and present a visualization of how the topics in a corpus of research papers have changed over time.
GraphTrail [2] is a system for exploring large general-purpose networked datasets, and uses a corpus of ACM CHI papers as a sample database. GraphTrail supports the piecewise construction of complex queries while keeping a history of the steps taken, which allows for easy backtracking and modification of earlier stages. Systems such as Citeology [23] and CiteRivers [24] support exploring scientific literature through their citation networks and patterns, with CiteRivers also including additional data about the document contents. PaperQuest [25] aims to help researchers make efficient decisions about which papers to read next by displaying the minimum amount of relevant information, and by considering papers in which the researcher has already displayed an interest.
Another research exploration tool is the Action Science Explorer (ASE) [3], [4]. The ASE system uses a citation network visualization in the center of the interface and makes use of citation sentence extraction, ranking and filtering by network statistics, automatic document clustering and summarization, and reference management.
The main difference between Paper Forager and the above systems is that while these existing systems all perform some amount of analysis, visualization, or filtering based on the metadata or text of a paper, they hide the design, layout, and images of the actual research documents. Furthermore, with existing systems, users must wait until the document is downloaded before reading the paper in detail. Paper Forager provides a basic level of faceted metadata searching, emphasizes the visual content of the documents, and provides immediate access to reading individual pages of the documents.
An example of a visually-focused research exploration tool is the UIST Archive Explorer [26], which was created for the 20th anniversary of the UIST conference and provided an interface for browsing the collection of papers previously published at UIST. Papers could be viewed by year, keyword, or author. Selecting a paper caused the pages of the paper to be arranged in a row, and the user could zoom in for more details. Compared to Paper Forager, the UIST Archive Explorer used a smaller corpus of documents (578 vs. 5,055), was hosted locally (whereas Paper Forager uses a cloud-based architecture), and did not allow for navigation between papers based on their citation networks.
## 3 THE LITERATURE REVIEW PROCESS
The theory of information foraging [27] suggests that information seekers try to find documents with potentially high value and then use the available informational "scent" cues to determine which documents, if any, are worthwhile to examine further. We can thus think about the process of literature review being composed of three main stages:
**Finding:** filtering the collection of all possible papers down to those you might want to read, either by browsing the collection, or explicitly searching.
**Scanning:** making a decision for each individual paper as to whether it is worthwhile to read based on the available information scent cues.
**Reading:** looking through the content of the paper for useful information.
In order to maintain flow [28] during the literature review process, it is desirable for the transitions between the stages to be as smooth as possible. Research exploring the dynamics of task switching [29], [30] has shown that small interaction improvements can cause categorical behavior changes that far exceed the benefits of decreased task times.
When papers were primarily distributed in printed proceedings, the finding phase of the process was inefficient. However, once a collection of possibly relevant papers was found, the process of scanning the papers consisted of flipping through the pages. The informational scent cues [27] presented to the information gatherer to make a reading decision consisted of what was visible in the printed form of the paper, namely the title, text, figures, and the paper's overall graphic design and layout. Based on these cues, a decision to read or not would be made, and the cost of transitioning between the scanning and reading phases was minimal (Fig. 2).
With digital libraries the finding phase of the process is much more efficient, and the transition cost between finding and scanning is greatly reduced. However, the available informational scent cues presented during the scanning phase were reduced to basic textual information such as the title, authors, and sometimes the abstract of the paper. Advanced paper browsing tools such as ASE [3] provide additional functionality in the finding phase as well as incorporating additional scent cues to inform the reading decision, such as visualizations and statistical measures of keywords, authorship, and citation networks. Still, the images and visual design of the original paper are not available to the researcher during the scanning phase; the graphics of a paper are not visible until after the decision has been made to move from scanning to reading. Additionally, the transaction cost when deciding to read a paper is relatively high: the paper needs to first be downloaded, which even on a fast network can often take between 3 and 15 seconds, and it is then opened for reading in a secondary application (or at least a new window within the same application). Besides the time cost, the context switch to a secondary application can disrupt the flow of the information gathering process.

Fig. 2. Four main approaches to paper discovery and the context switches required between the various stages of the literature review process.
### 3.1 Design Goals
With Paper Forager, we want to take the quick searching and filtering benefits of modern advanced paper discovery systems and combine them with the visual qualities and benefits of paper proceedings. Additionally, we want to reduce the cost of transitioning between stages (Fig. 2) which will improve the flow of the literature review process and encourage a wider exploration of the paper space. By supporting more exploration, the system may put users in a position to make more serendipitous discoveries [31].
## 4 Paper Forager
We created Paper Forager to address the problems encountered while exploring large collections of research papers. As a sample corpus we used 5,055 papers published at the ACM CHI and UIST conferences. The metadata was collected using the Microsoft Academic Search API [20] and the source documents were automatically downloaded using links from Google Scholar where possible and manually downloaded from the ACM DL otherwise.
The Paper Forager interface is composed of a set of interface controls at the top of the screen, and a main display area below. On startup, Paper Forager arranges all documents in the collection in the main display area, sorted with the oldest papers at the top and the newest at the bottom (Fig. 1A).
### 4.1 Interface Controls
Along the top of the window are the interface controls for refining the displayed collection of papers, which include the search field, histogram filters, author list, history bar, and saved paper controls (Fig. 3).
#### 4.1.1 Search Field
On the left is the search field (Fig. 3), which initiates keyword searches of the titles and abstracts of the papers, as well as searches for authors and conference titles. The search system will automatically recognize author and conference names. For example, a search for "database" would find all papers with the term "database" in the title or abstract (Fig. 1B), whereas a search for "Buxton" would be recognized as an author search for "William Buxton" and would find all papers published by that author. Additionally, searching for "CHI" or "UIST" will return all papers published at the respective conference, and adding a year to the end of a search term, such as "CHI 2007", modifies the filters to show only the papers from the 2007 edition of the CHI conference.
By default, entering a term in the search field will perform a new query using the entire collection as input, but prefacing a search term with a plus sign (+) creates an additive search filter. For example, if after searching for "Buxton" the user searches for "+mouse", only papers authored by William Buxton which include the term "mouse" will be displayed.
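The search-field behaviour described above might be parsed roughly as follows; the author and conference tables and the returned dictionary shape are hypothetical, not Paper Forager's actual code:

```python
import re

# Illustrative lookup tables; the real system derives these from metadata.
AUTHORS = {"william buxton", "brad myers"}
CONFERENCES = {"chi", "uist"}

def parse_query(text):
    """Classify a search-field entry as a conference, author, or
    keyword query, honouring the '+' additive-filter prefix."""
    additive = text.startswith("+")
    term = text.lstrip("+").strip()
    m = re.fullmatch(r"(chi|uist)\s+(\d{4})", term, re.IGNORECASE)
    if m:  # e.g. "CHI 2007": conference plus a year filter
        return {"kind": "conference", "name": m.group(1).upper(),
                "year": int(m.group(2)), "additive": additive}
    low = term.lower()
    if low in CONFERENCES:
        return {"kind": "conference", "name": term.upper(), "additive": additive}
    if any(low in a for a in AUTHORS):  # partial-name author match
        return {"kind": "author", "name": term, "additive": additive}
    return {"kind": "keyword", "term": low, "additive": additive}
```

For example, `parse_query("Buxton")` is treated as an author search, while `parse_query("+mouse")` yields an additive keyword filter.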
#### 4.1.2 Histogram Filters
Beside the search field are histogram filters displaying the number of papers published in each year and the relative distribution of the number of citations each paper has received (Fig. 4). Users can click the Year and Citations headings to set the sorting order of the papers in the main display area. As search events occur, the histograms dynamically update and animate to reflect the distribution for the actively displayed grid of papers. Under each histogram is a dual value slider which allows the selection of displayed papers to be limited to a specific range of years or number of citations.
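The histogram filters and dual-value sliders amount to counting papers per year and filtering by an inclusive range, e.g. (a minimal sketch with made-up paper records):

```python
from collections import Counter

def year_histogram(papers):
    """Count papers per publication year for the histogram filter."""
    return Counter(p["year"] for p in papers)

def filter_range(papers, lo, hi, key="year"):
    """Dual-value slider: keep papers whose value lies in [lo, hi]."""
    return [p for p in papers if lo <= p[key] <= hi]

papers = [{"year": 2005}, {"year": 2007}, {"year": 2007}, {"year": 2010}]
hist = year_histogram(papers)
in_range = filter_range(papers, 2006, 2009)
```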

Fig. 4. (A) Histogram filters and Author List for all papers in the CHI and UIST corpus and (B) after searching for the term "tangible".
#### 4.1.3 Author List
To the right of the filter histograms is a list of the top authors of the papers within the current search results (Fig. 3). For example, Fig. 4A shows that Ravin Balakrishnan has the most papers overall in the database, while Fig. 4B shows that Hiroshi Ishii has the most papers for the search term "tangible". Clicking on an author name is equivalent to creating an additive search for the author, so in Fig. 4B, clicking on "Scott Klemmer" is equivalent to entering "+Scott Klemmer" in the search field, and will result in showing all papers for the term "tangible" which have Scott Klemmer as an author.
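Ranking the top authors within the current results is essentially a multiset count, e.g. (illustrative data only, not the real corpus):

```python
from collections import Counter

def top_authors(results, n=5):
    """Rank authors by paper count within the current search results."""
    counts = Counter(a for paper in results for a in paper["authors"])
    return counts.most_common(n)

results = [
    {"authors": ["Hiroshi Ishii", "A. Student"]},
    {"authors": ["Hiroshi Ishii"]},
    {"authors": ["Scott Klemmer"]},
]
top = top_authors(results)
```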
#### 4.1.4 History Bar
Previous research has demonstrated the benefits of keeping a history of actions during information foraging [2], [32]. The history bar in
|
| 112 |
+
|
| 113 |
+

|
| 114 |
+
|
| 115 |
+
Fig. 3. The interface controls of Paper Forager.
|
| 116 |
+
|
| 117 |
+
Paper Forager is designed for this purpose and provides a way for users to see how they arrived at their current view and the ability to easily backtrack if desired.
Fig. 5. History tokens for (A) search terms, (B) conferences, (C) authors, (D) saved paper lists, (E) individual papers, (F) references of a paper, (G) citations of a paper, and tokens with filters applied (H-K).
Each type of search event has its own history token icon (Fig. 5, A-G) and, as the histogram filter sliders are adjusted, the ranges are displayed beside the description of the active search (Fig. 5, H-K). The number of results matching the query is displayed in square brackets at the end of the history token.
Fig. 6. Initial state of the history bar (A) and changes after a series of operations: (B) searching for "mouse", (C) clicking on the author Brad Myers, (D) adjusting the year and citation filters, (E) selecting a paper, (F) viewing that paper's citations, and (G) selecting another paper.
Each search or filtering event is accompanied by a new token in the history bar (Fig. 6). As the list of tokens grows longer, the previous ones are minimized to show only their icon, and their full description is displayed in a tooltip.
Inserted between the history tokens are three different separation symbols (Fig. 7): a vertical line when the new state is independent from the previous one, a plus sign when an additive query is entered, and a right-facing arrow when looking at references or citations of a particular paper. Clicking on a token in the history list removes all subsequent query events, leaving the clicked token as the active search state. The tokens also include an 'x' button to remove the query from the history list.
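The token behaviors described above can be modeled as a simple stack-like list. The following Python sketch is only illustrative; the class and field names are our own, not taken from the system's implementation.

```python
from dataclasses import dataclass

@dataclass
class Token:
    label: str
    separator: str  # "|" independent, "+" additive, ">" references/citations

class HistoryBar:
    """Illustrative model of the history bar's truncate-on-click behavior."""

    def __init__(self):
        self.tokens = []

    def push(self, label, separator="|"):
        # Each search or filtering event appends a new token.
        self.tokens.append(Token(label, separator))

    def click(self, index):
        # Clicking a token removes all subsequent query events,
        # leaving the clicked token as the active search state.
        del self.tokens[index + 1:]

    def remove(self, index):
        # The 'x' button removes a single query from the list.
        del self.tokens[index]
```

For example, pushing "All Papers", a "mouse" query, and an author token, then clicking the second token restores the "mouse" query as the active state.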

Fig. 7. History token separators.
#### 4.1.5 Saved Paper Controls
Paper Forager allows users to mark papers as saved. The collection of the user's saved papers, as well as all papers saved by the user community, can be accessed through links in the top right corner (Fig. 3). Besides viewing the collection of saved papers, clicking the "Reference List" button copies to the user's clipboard a formatted list of references suitable for a paper's "References" section.
### 4.2 Main Display Area
The main display area offers a collection view, a paper view, and a page view.
#### 4.2.1 Collection View
The collection view displays all papers that match the current query and filters. Papers within the collection are sized so that all results are initially within view. As searches are performed, the grid of papers is animated to remove the papers that no longer satisfy the query and rearrange those that do to fill the available space (Fig. 8).
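One way to size tiles so that every result stays in view is to search over possible column counts for the one that maximizes tile size. The sketch below assumes a fixed thumbnail aspect ratio and is not the system's actual layout code.

```python
import math

def fit_grid(n, canvas_w, canvas_h, tile_aspect=1.0):
    """Return (cols, rows, tile_w) maximizing tile width so that all
    n tiles of the given width/height aspect fit on the canvas."""
    best = (1, n, 0.0)
    for cols in range(1, n + 1):
        rows = math.ceil(n / cols)
        # Tile width is limited by both the canvas width and, via the
        # aspect ratio, the canvas height.
        tile_w = min(canvas_w / cols, canvas_h / rows * tile_aspect)
        if tile_w > best[2]:
            best = (cols, rows, tile_w)
    return best
```

On a 100 x 100 canvas with square tiles, four results lay out as a 2 x 2 grid of 50-pixel tiles; as results are filtered away, the same search yields fewer, larger tiles.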

Fig. 8. Stages of the reordering animation. (A) initial state, (B) removed papers fade away, (C) remaining tiles move and resize into new position.
The total animation time is 1.5 seconds: the outgoing tiles fade out during the first 0.75 seconds, and the remaining tiles rearrange during the next 0.75 seconds. A similar animation occurs when papers not previously on the screen are added.
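The two-phase timing can be expressed as a mapping from elapsed time to a phase and a normalized progress value. This is a sketch of the described behavior, not the actual Silverlight animation code.

```python
FADE_S = 0.75   # outgoing tiles fade away
MOVE_S = 0.75   # remaining tiles move and resize

def animation_phase(t):
    """Map elapsed time t (seconds) to (phase, progress in [0, 1])."""
    if t < FADE_S:
        return ("fade_out", t / FADE_S)
    if t < FADE_S + MOVE_S:
        return ("rearrange", (t - FADE_S) / MOVE_S)
    return ("done", 1.0)
```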
As the cursor moves around the grid of displayed papers, the paper under the cursor highlights and a large tooltip is displayed with the paper's title, abstract, authors, year, conference, and number of citations (Fig. 9). Clicking on a paper will bring that paper into focus in the paper view.

Fig. 9. Example of a paper tooltip.
#### 4.2.2 Paper View
Once a paper is selected, either by clicking on a single paper or by executing a query with only one result, it is displayed in the paper view (Fig. 10). Here, the composite image of the paper is fit to the main canvas area, with additional metadata including the title, abstract, authors, venue, and year displayed on the right. A badge icon can be clicked to add the paper to the user's list of saved papers. Clicking an author's name loads all papers by that author (equivalent to searching for the author's name), and a DOI link opens the paper's official page in the ACM Digital Library.
The lower section of the side panel contains thumbnails for each of the papers in the corpus which are referenced by the active paper, as well as all the papers which cite the active paper. Hovering over these thumbnails triggers the associated paper tooltip (Fig. 9) and clicking on a paper thumbnail adds the paper to the history bar and brings it into focus. Clicking on either of the "References" or "Citations" labels takes the system back to the collection view, displaying all of the referenced/cited papers. Below the paper image is a button to return to the paper collection view, as well as buttons to navigate to the previous and next papers in the current collection. For example, after searching for "mouse" and selecting a paper, repeatedly clicking on "next paper" will let you flip through all papers for the term "mouse". This functionality is also accessible through the left and right arrow keys.

Fig. 10. Interface elements of the single paper view.
#### 4.2.3 Page View
Clicking on a single page animates the display to fit that page into the view (Fig. 11), allowing users to read individual pages. In this page view, the navigational controls and arrow keys change to support navigation between the pages of the document.

Fig. 11. The page view displays individual pages.
Once the last page in the paper is reached the view zooms back to the paper view, and subsequent navigation operations will navigate at the paper level. This enables an efficient workflow of first flipping through papers, then going through the pages of an interesting paper, and then coming back out to flip through more papers (Fig. 12). The layout of the main window is designed so that on 24" or larger monitors the body text of the focused page is large enough to be read comfortably. For smaller monitors, or for more detailed examination of a portion of a page, the page view supports zooming and panning with the mouse wheel and left mouse button respectively.

Fig.12. Workflow for navigating between and within papers. (Note: "Paper B" has only 4 pages.)
#### 4.2.4 Preloading Images
On a reasonably fast broadband internet connection it takes approximately 2 to 3 seconds to download and display a composite paper image (such as in Fig. ) on a 24" monitor. This is an unacceptable delay if trying to rapidly flip through a collection of papers. To address this, when a paper is brought into single paper view, the images for the previous and next papers are automatically downloaded and composited at the proper resolution so they can be immediately displayed when requested.
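The prefetching rule amounts to fetching the focused paper's immediate neighbours in the current collection. A minimal sketch:

```python
def prefetch_indices(focused, collection_size):
    """Indices of the papers whose images should be downloaded ahead of
    time: the previous and next papers in the current collection."""
    return [i for i in (focused - 1, focused + 1)
            if 0 <= i < collection_size]
```

The images for these indices can then be downloaded and composited in the background so they display immediately when the user navigates.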
#### 4.2.5 Interaction Model
The intent of the Paper Forager design is to support a primary interaction model of searching or filtering for relevant documents, and then clicking on papers or pages to enlarge their view and see them in more detail. Additionally, similar to zooming user interfaces [33], the collection, paper, and page views support interactive zooming and panning. We anticipate that even though the system supports freeform panning and zooming, users will prefer, and gravitate towards, the search/filter/click interaction model.
### 4.3 System Implementation
Paper Forager is implemented as an in-browser application using the Microsoft Silverlight framework. During development, this allowed the application to be used in browsers on both Mac OS X and Windows computers with the Silverlight runtime installed. However, recent changes to the plug-in architectures of major browsers now limit the Silverlight runtime to Internet Explorer on Windows.
The components of the deployed system (Fig. 13) are hosted and stored using the Amazon Web Services (AWS) framework. The application binaries, images, and metadata are stored on and hosted from an Amazon Simple Storage Service (S3) instance. Usage log data and saved paper information are stored in separate AWS SimpleDB (SDB) tables. Due to cross-domain security policies which restrict communication of Silverlight applications, an AWS EC2 server hosts PHP scripts which mediate communication between the application and the databases.

Fig. 13. System architecture diagram.
#### 4.3.1 Image Pyramids
To enable fast streaming of papers over the internet and to allow papers to be viewed at a range of resolutions, from very small thumbnails up to a size suitable for reading, papers were converted into a collection of "image pyramids" following the Microsoft Deep Zoom file format [7]. Each document is rendered at 14 resolutions, from the smallest size of 1 pixel square up to the original size of the image, in our case 10,048 pixels wide by 6,098 pixels tall. At each resolution of the "pyramid", the images are divided into smaller "tiles" so that only the parts of the image which are needed at that resolution are downloaded (Fig. 14).
We tried maximum tile sizes of 256, 512, and 1024 pixels and found that 512 pixel square tiles provided the best performance for the types of images streamed with our system. On the client side, a Silverlight MultiScaleImage component handles downloading and compositing the tiles to display the image at the requested resolution.
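The tiling arithmetic can be sketched as follows; this follows the general halving scheme of such pyramids, though the exact Deep Zoom level numbering and rounding rules may differ slightly.

```python
import math

def pyramid_levels(width, height):
    """Yield (width, height) for each pyramid level, halving until 1x1."""
    w, h = width, height
    yield (w, h)
    while (w, h) != (1, 1):
        w, h = max(1, math.ceil(w / 2)), max(1, math.ceil(h / 2))
        yield (w, h)

def tiles_at_level(width, height, tile=512):
    """Number of square tiles covering one level at the chosen tile size."""
    return math.ceil(width / tile) * math.ceil(height / tile)
```

At full resolution, a 10,048 x 6,098 composite needs 20 x 12 = 240 tiles of 512 pixels, while the coarse levels each fit in a single tile, so the viewer only fetches the tiles its current zoom level actually shows.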

Fig. 14. Image Pyramid data format example.
#### 4.3.2 Data Processing
The original PDF versions of the papers go through a multi-stage processing pipeline to convert them into their multi-scale image pyramid format (Fig. 15). First, the PDF files are split into individual pages and converted to JPG image files at 300 dpi. Using the "Data Sets" feature of Adobe Photoshop, composited PSD files are created combining all the pages of the paper into a single image (Fig. 16). The last step of the process involves converting the large combined JPG image into the image pyramid format.

Fig. 15. Data processing pipeline.
The conversion process for each paper took approximately 1 minute on a workstation with 24 GB of RAM and dual 2.53 GHz Xeon processors, and the entire sample corpus of 5,055 papers took approximately 90 hours to process, producing ~1.9 million small JPG images and ~54 GB of total image data. Each paper can be processed independently, so the pipeline is well suited to parallelization on remote clusters or servers.
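Because each paper is processed independently, the per-paper pipeline fans out trivially across workers. In the sketch below, `process_paper` is a placeholder for the stages described above, not the actual conversion code.

```python
from multiprocessing import Pool

def process_paper(pdf_path):
    # Placeholder: split pages -> render 300 dpi JPGs -> composite
    # -> build the image pyramid.
    pass

def serial_hours(n_papers, minutes_per_paper=1.0):
    """Back-of-envelope serial processing time in hours."""
    return n_papers * minutes_per_paper / 60

# Fanning the corpus out across cores (illustrative):
#     with Pool() as pool:
#         pool.map(process_paper, all_pdf_paths)
```

At roughly one minute per paper, the 5,055-paper corpus needs about 84 serial hours, consistent with the ~90 hours observed once overhead is included; distributing the work across N workers divides the wall-clock time roughly by N.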
#### 4.3.3 Paper Layout
If the paper has 5 or fewer pages, it uses the 5-page template; otherwise it uses the 10-page layout. This version of the system did not support papers with more than 10 pages, but it would not be difficult to extend the pattern one more level to a 17-page layout (1 large first page, and a 4-by-4 grid for subsequent pages).
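The template choice reduces to a page-count threshold. A sketch, including the hypothetical 17-page extension mentioned above (the grid breakdowns in the comments are inferred from the pattern of "1 large first page plus an n-by-n grid"):

```python
def layout_template(page_count):
    """Pick the composite layout for a paper by its page count."""
    if page_count <= 5:
        return "5-page"    # 1 large first page + 2x2 grid
    if page_count <= 10:
        return "10-page"   # 1 large first page + 3x3 grid
    if page_count <= 17:
        return "17-page"   # hypothetical extension: 1 large + 4x4 grid
    raise ValueError("no layout for papers with more than 17 pages")
```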

Fig. 16. Sample 5-page (left) and 10-page layouts (right).
We chose to combine all pages of each paper into a single image object before creating the image pyramid as a performance optimization, limiting the number of individual objects the system needs to display at any one time. Alternative strategies are discussed as future work.
## 5 EVALUATION
Quantifying the benefits of information visualization systems is notoriously tricky [34]. To gain insights and usage observations related to our system, we ran two evaluations: a small controlled session to collect initial user feedback, and then a broad, long-term external deployment.
### 5.1 Initial User Feedback
We conducted a qualitative user study to evaluate the features and usability of the Paper Forager system. We wanted to collect initial feedback from users and validate that some simple (and not so simple) tasks could be accomplished in a reasonable amount of time. We recruited 6 participants who were taking an HCI course at a local university (4 male, 2 female, ages 21-24). These students had recently completed a project which required them to gather references for an HCI topic of their choice. As such, they were ideal candidates to give feedback on our system and compare Paper Forager to the systems and strategies they had independently used for their literature reviews.
The feedback sessions began with a 5 minute overview demonstrating the main features of the system, after which the participants explored the system on their own for an additional 5 minutes. The sessions concluded with the participants completing a series of 8 tasks, of generally increasing difficulty (Fig. 17).
The tasks were devised such that some could likely be accomplished with a standard digital library search system, some would benefit from faceted searching capabilities, and three of them (c, e, and h) would be prohibitively difficult to accomplish without the added capabilities afforded by the Paper Forager system. The goal of the tasks was to encourage the participants to try different aspects of the system rather than cover all possible use cases for the application. After completing the tasks, participants were asked for thoughts about the system and suggestions for improvements.
### 5.2 Results
All 6 users were able to complete the 8 tasks. While the tasks were not specifically designed to test the speed of using the Paper Forager system compared to traditional digital libraries, task completion times were recorded to see the range of completion times for the various tasks across the set of participants.
Mean task completion times ranged from 33 seconds (task 1) to 3 minutes and 45 seconds (task 8). The longer time for the last task was due to participants not always knowing which part of the paper to read in detail to find the necessary information (Fig. 17). In addition to the 6 study participants, a Paper Forager user with approximately 3 hours of experience was asked to perform the tasks to benchmark expert performance levels.
It is interesting to note that for tasks 1 through 7, the fastest times from the "novice" study participants after their brief introduction to the system are similar to the completion times of the "expert" user, suggesting that some novice users became proficient with the system after only a short amount of time. In the comments section of the survey, half (3 of 6) of the participants mentioned that their favourite feature was the ability to string together multiple queries with the "+" operator, and 2 of 6 commented that they particularly liked seeing thumbnails for the referenced and cited papers in the paper view. During the 5 minute exploration phase, all participants experimented with the dynamic zooming and panning functionality using the mouse. However, during the tasks, they chose to use the search/filter/click interaction style. Additional requested features included auto-completion in the search field, additional conferences in the corpus, and more social sharing capabilities. Overall, participants were extremely enthusiastic about the system, and were hopeful that it would be publicly released so they could continue to use it.

Fig. 17. Task completion times for the 6 study participants, as well as times from one 'expert' user.

Fig. 18. Images used in questions 3 (left) and 8 (right).
With the overall positive feedback on the system, and confirmation that users could complete useful tasks with it, we proceeded with a broader deployment.
### 5.3 External Deployment
To gain additional feedback and in-the-wild usage data, as well as to validate the deployability of our cloud-based architecture, we conducted a long-term external deployment of the system. To maintain compliance with ACM copyright policies (as the papers used in the system are from ACM CHI and ACM UIST), access to Paper Forager was restricted to users with an ACM account that had access permissions for CHI and UIST papers, and to IP ranges with a site license to the ACM Digital Library (such as most post-secondary institutions). The system was deployed and available for use continuously over a 2 year period.
### 5.4 Usage Data and Feedback
Over the 24-month deployment period, 493 log-in events were registered from 153 unique users, with 49 of the users logging into the system more than once. There were a number of "regular" users: 20 users logged into the system more than 5 times each, and 11 users logged more than 100 minutes of active usage. A total of 1,887 papers were viewed in "paper view mode" (Fig. 10) and 1,851 searches were performed over the course of the deployment.
#### 5.4.1 Types of Usage
Since Paper Forager was designed to support the various stages of the literature review process (Finding, Scanning, and Reading), we analysed the log data to see if people were using the system in different ways, or if all users were using the system in a similar manner. To do this we looked at usage along two dimensions: Browsing vs. Searching, and Scanning vs. Reading.
#### 5.4.2 Browsing vs. Searching (Methods of "Finding")
In this dimension we are looking at different ways users can locate papers which might be relevant, during the finding phase of the literature review process. Using the system in a more "browsing" manner would involve looking at collections of papers, following citation or reference links, and reading many tooltips. Alternatively, a more "search-based" approach to the finding process involves specifically entering search terms into the search field. This dimension is calculated as the user's ratio of "search" events to "browsing" (viewing collections, inspecting tooltips) events.
#### 5.4.3 Scanning vs. Reading
In the scanning phase of the literature review process, a user quickly looks at papers to figure out if they are worth reading. In Paper Forager, looking at many papers in the paper view mode is a reasonable proxy for time spent in the scanning phase, while zooming in to view many single pages in the page view indicates time spent in the reading phase. This dimension is calculated as the ratio of "paper view" events to "page view" events. The 60 users of the system with multiple logins or more than 20 minutes of continuous usage have their activity plotted along these two dimensions in Fig. 19. Each point represents a user, with the size of the point proportional to the amount of activity for that user. Each axis spans a 25x difference in behaviour; that is, users at the bottom of the chart looked at 25x more individual pages than users at the top, and users on the left side performed 25x more searches than those on the right. It is interesting and encouraging to see that users exhibited such a wide range of usage behaviours. Even among the most active users (those with larger circles), we can see that they are distributed around the plot, suggesting that the system can be successfully used for different stages of the review process.
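The two dimensions can be computed from per-user event counts. This sketch uses illustrative event names, log scaling, and add-one smoothing (our assumptions), not the actual log schema or analysis code.

```python
import math

def usage_coordinates(events):
    """Place a user on the two log-analysis dimensions described above.

    Returns (finding, depth): the log10 ratio of search to browsing
    events, and the log10 ratio of paper-view to page-view events.
    Add-one smoothing avoids division by zero for inactive users.
    """
    finding = math.log10((events["search"] + 1) / (events["browse"] + 1))
    depth = math.log10((events["paper_view"] + 1) /
                       (events["page_view"] + 1))
    return finding, depth
```

A user at (positive, negative) coordinates searched more than they browsed and read individual pages more than they scanned whole papers; plotting all users on these axes yields a chart like Fig. 19.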

Fig. 19. Usage log analysis showing usage patterns for finding behavior (x-axis) and scanning vs. reading (y-axis).
#### 5.4.4 Feedback and Suggestions
At the end of the deployment, each user who logged into the system was sent a short voluntary questionnaire (30 of 153 responded, a 20% response rate), in which they were asked to answer five questions on a 5-point Likert scale (Strongly Disagree, Somewhat Disagree, Neither Agree nor Disagree, Somewhat Agree, Strongly Agree):
- I found Paper Forager easy to use.
- I found Paper Forager enjoyable to use.
- Paper Forager is a more effective way to research papers than the techniques/systems I have been using previously.
- Paper Forager is an efficient way to explore research papers.
- If kept up to date with papers in my field, I would use Paper Forager to explore research papers in the future.
The first four questions were based on the criteria outlined by Jeng [35] describing which factors contribute to the usability of a digital library system (learnability, satisfaction, effectiveness, and efficiency). Results are shown in Fig. 20. In general, users felt Paper Forager was easy and enjoyable to use, and a majority of users said that, if kept up to date with papers in their field, they would continue to use Paper Forager in the future. Besides the subjective questions, users were also asked to provide details about features they liked and suggestions for improvement.

Fig. 20. Results from the subjective questions asked after the external deployment.
Several users relayed interesting ways in which they used the system. One Ph.D. student was writing his first paper with a new supervisor and wanted to ensure that his paper followed the general conventions that the supervisor had used in the past. By searching for the supervisor's name and rapidly flipping through his previous papers the student was able to get the answer to a number of questions about the supervisor's style:
How many figures does he usually include in a paper? Does he dock figures at the top and bottom of columns, or does he float them in the middle? Does he like using long figure captions? Does he use a particular color scheme for charts? How often does he include an explicit "Contributions" section? How does he typically word his conclusions?
Before the student had access to Paper Forager he was looking at a single example paper of the supervisor's to try and answer these questions; it was too much work to download and look at all of the supervisor's papers individually. With Paper Forager, each of these tasks took a very short amount of time and effort.
Another user (a 3D user interface researcher) mentioned they used Paper Forager not only in the process of writing papers, but for other tasks as well:
It is so extremely fast and easy to search various topics. You get an idea of what has been in a field, dig for follow up papers (in-depth search) or other related papers (breadth search). I have even used it to find the best reviewers for a paper, or find relevant researchers on any topic (committees, collaborations, etc.) This is just how the digital library should look! This tool has saved me HUGE amounts of time.
Finally, a grad student finishing up their Ph.D. mentioned that Paper Forager changed the way they approached writing papers:
It allowed me to rapidly compare papers to get a sense of structure and style. For instance, when I was writing my own paper, I would quickly look at several examples from related papers to understand what was the typical approach.
The responsiveness also allowed me to view more related papers. With Google Scholar or the ACM DL, it's often several clicks to view papers, and I have to download the PDF first; with Paper Forager I can quickly look at a paper and decide whether it's relevant, so I would actively look at more papers than I would have otherwise.
A common issue with the system was that it covered too few conferences, and users wanted the collection expanded to cover more of their interests. Several users (particularly those with slower computers and larger monitors) had trouble with performance, finding the interface not as responsive as they would have liked.
Many users mentioned liking that the references and citations were prominently displayed in the side panel of the paper view, and suggested that the links between related papers could be emphasized further by showing the relationships in the main collection view. A number of users said they liked the collection view as they often remember papers by their "Fig. 1"; however, for very large collections (such as the entire 5,055-paper collection shown on launch), some users felt the view was not very helpful, and suggested alternatives such as a different representation when papers are displayed very small, one which could more clearly show relevant information.
## 6 Discussion & Future Work
The Paper Forager system was designed and optimized to work with collections on the order of 10,000 research documents. It will be interesting to look at how the interaction model should change for much larger collections of papers (an entire digital library for example), as well as how the performance of the system would be affected. Additionally, we would like to explore using the system with other collections of documents with citation networks such as patent applications or court proceedings.
Related to system performance, Paper Forager combines all pages of each paper into a single image object. It would be interesting to explore the design opportunities that would arise from storing each page of the paper individually. This would allow for more varied arrangements, such as selectively showing only the first page of a paper, arranging the pages of each paper in a row, or highlighting the pages with the most figures. In the time since the system was first developed, Silverlight as a technology has become less well supported (notably, the Silverlight plug-in will no longer run in the Chrome browser). Re-engineering the system as an HTML5/JavaScript web application would be worthwhile.
To preserve the design and layout work the authors put into creating their papers, we maintained the formatting from the original document. However, we are interested in exploring different representations for the papers when they are at small sizes such as those explored in previous work [26], [36], [37]. It would also be interesting to consider automated approaches for determining good miniaturized representations of research papers and other types of documents.
We would also like to look at ways of annotating the thumbnail images to show aspects of the metadata, such as the number of citations or which papers have been saved most often. A coloring technique similar to the one used in AppMap [38], where the thumbnails are shaded based on one variable and sorted by another, could lead to interesting discoveries. The searching and filtering capabilities of Paper Forager were purposefully simplified to improve the approachability of the system, but it would be useful to explore combining the visual aspects of Paper Forager with an advanced paper filtering system such as ASE [3], [4] or a visualization of the citation space such as Citeology [23].
Using an image format to display papers has some downsides compared to viewing the actual PDF file, even when the image is at a high resolution. For example, users are unable to select text from a paper in Paper Forager. We believe there is great potential in a hybrid system where multi-scale images immediately display the paper while the PDF file loads in the background; once loaded, the PDF could seamlessly replace the multi-scale image representation.
The ACM paper template contains the guidance "Please read previous years' proceedings to understand the writing style and conventions that successful authors have used." We agree that this is a useful, although laborious, task for prospective authors, and hope that Paper Forager could serve as a mechanism to simplify this process.
## 7 CONCLUSION
With Paper Forager we have created a cloud-based system which allows users to rapidly explore a collection of research articles. Our tests of the system produced positive feedback from users who overall agreed that Paper Forager was easy and enjoyable to use while being effective and efficient. We believe our work fills an important gap in existing systems for exploring document collections, allowing users to seamlessly transition between finding, scanning, and reading documents of interest. We hope our work can inspire future research and development in the area.
## REFERENCES
[1] A. Aris, B. Shneiderman, V. Qazvinian, and D. Radev, "Visual overviews for discovering key papers and influences across research fronts," J. Am. Soc. Inf. Sci. Technol., vol. 60, no. 11, pp. 2219-2228, Nov. 2009.
[2] C. Dunne, N. Henry Riche, B. Lee, R. Metoyer, and G. Robertson, "GraphTrail: Analyzing Large Multivariate, Heterogeneous Networks While Supporting Exploration History," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2012, pp. 1663-1672.
[3] C. Dunne, B. Shneiderman, R. Gove, J. Klavans, and B. Dorr, "Rapid Understanding of Scientific Paper Collections: Integrating Statistics, Text Analytics, and Visualization," J Am Soc Inf Sci Technol, vol. 63, no. 12, pp. 2351-2369, Dec. 2012.
[4] R. Gove, C. Dunne, B. Shneiderman, J. Klavans, and B. Dorr, "Evaluating visual and statistical exploration of scientific literature networks," in 2011 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 2011, pp. 217-224.
[5] B. Lee, M. Czerwinski, G. Robertson, and B. B. Bederson, "Understanding Research Trends in Conferences Using paperLens," in CHI '05 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2005, pp. 1969-1972.
[6] J. Zhao, C. Collins, F. Chevalier, and R. Balakrishnan, "Interactive Exploration of Implicit and Explicit Relations in Faceted Datasets," IEEE Trans. Vis. Comput. Graph., vol. 19, no. 12, pp. 2080-2089, Dec. 2013.
[7] "Deep Zoom | Features | Microsoft Silverlight." [Online]. Available: http://www.microsoft.com/silverlight/deep-zoom/. [Accessed: 22-Sep-2015].
[8] M. Hearst, "UIs for Faceted Navigation: Recent Advances and Remaining Open Problems," in International Journal of Machine Learning and Computing, 2008, vol. 1, pp. 337-343.
[9] B. Lee, G. Smith, G. G. Robertson, M. Czerwinski, and D. S. Tan, "FacetLens: Exposing Trends and Relationships to Support Sensemaking Within Faceted Datasets," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 1293-1302.
[10] N. Cao, J. Sun, Y.-R. Lin, D. Gotz, S. Liu, and H. Qu, "FacetAtlas: Multifaceted Visualization for Rich Text Corpora," IEEE Trans. Vis. Comput. Graph., vol. 16, no. 6, pp. 1172-1181, Nov. 2010.
[11] S. K. Card, G. G. Robertson, and W. York, "The WebBook and the Web Forager: An Information Workspace for the World-Wide Web," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1996, p. 111-.
[12] L. Hong, S. K. Card, and J. (JD) Chen, "Turning Pages of 3D Electronic Books," in Proceedings of the 3D User Interfaces, Washington, DC, USA, 2006, pp. 159-165.
|
| 386 |
+
|
| 387 |
+
[13] H. Strobelt, D. Oelke, C. Rohrdantz, A. Stoffel, D. A. Keim, and O. Deussen, "Document cards: A top trumps visualization for documents," vol. 15, no. 6, pp. 1145-1152, 2009.
|
| 388 |
+
|
| 389 |
+
[14] A. Girgensohn, F. Shipman, F. Chen, and L. Wilcox, "DocuBrowse: Faceted Searching, Browsing, and Recommendations in an Enterprise Context," in Proceedings of the 15th International Conference on Intelligent User Interfaces, New York, NY, USA, 2010, pp. 189-198.
|
| 390 |
+
|
| 391 |
+
[15] B. B. Bederson, "PhotoMesa: A Zoomable Image Browser Using Quantum Treemaps and Bubblemaps," in Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 2001, pp. 71-80.
|
| 392 |
+
|
| 393 |
+
[16] "PivotViewer | Features | Microsoft Silverlight." [Online]. Available: http://www.microsoft.com/silverlight/pivotviewer/.[Accessed: 22- Sep-2015].
|
| 394 |
+
|
| 395 |
+
[17] J. L. Howland, T. C. Wright, R. A. Boughan, and B. C. Roberts, "How Scholarly Is Google Scholar? A Comparison to Library Databases," Coll. Res. Libr., vol. 70, no. 3, pp. 227-234, May 2009.
|
| 396 |
+
|
| 397 |
+
[18] "Dashboard | Mendeley." [Online]. Available: https://www.mendeley.com/dashboard/.[Accessed: 22-Sep-2015].
|
| 398 |
+
|
| 399 |
+
[19] C. L. Giles, K. D. Bollacker, and S. Lawrence, "CiteSeer: An Automatic Citation Indexing System," in Proceedings of the Third ACM Conference on Digital Libraries, New York, NY, USA, 1998, pp. 89-98.
|
| 400 |
+
|
| 401 |
+
[20] "Microsoft Academic Search." [Online]. Available: http://academic.research.microsoft.com/.[Accessed: 22-Sep-2015].
|
| 402 |
+
|
| 403 |
+
[21] J. R. White, "On the 10th Anniversary of ACM's Digital Library," Commun ACM, vol. 51, no. 11, pp. 5-5, Nov. 2008.
|
| 404 |
+
|
| 405 |
+
[22] A. Medlar, K. Ilves, P. Wang, W. Buntine, and D. Glowacka, "PULP: A System for Exploratory Search of Scientific Literature," in Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, New York, NY, USA, 2016, pp. 1133-1136.
|
| 406 |
+
|
| 407 |
+
[23] J. Matejka, T. Grossman, and G. Fitzmaurice, "Citeology: visualizing paper genealogy," in CHI'12 Extended Abstracts on Human Factors in Computing Systems, 2012, pp. 181-190.
|
| 408 |
+
|
| 409 |
+
[24] F. Heimerl, Q. Han, S. Koch, and T. Ertl, "CiteRivers: Visual Analytics of Citation Patterns," IEEE Trans. Vis. Comput. Graph., vol. PP, no. 99, pp. 1-1, 2015.
|
| 410 |
+
|
| 411 |
+
[25] A. Ponsard, F. Escalona, and T. Munzner, "PaperQuest: A Visualization Tool to Support Literature Review," in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2016, pp. 2264-2271.
|
| 412 |
+
|
| 413 |
+
[26] "ZUIST - Homepage." [Online]. Available: http://zvtm.sourceforge.net/zuist/.[Accessed: 22-Sep-2015].
|
| 414 |
+
|
| 415 |
+
[27] P. Pirolli and S. Card, "Information Foraging in Information Access Environments," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1995, pp. 51- 58.
|
| 416 |
+
|
| 417 |
+
[28] B. B. Bederson, "Interfaces for Staying in the Flow," Ubiquity, vol. 2004, no. September, pp. 1-1, Sep. 2004.
|
| 418 |
+
|
| 419 |
+
[29] J. Brandt, M. Dontcheva, M. Weskamp, and S. R. Klemmer, "Example-centric Programming: Integrating Web Search into the Development Environment," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2010, pp. 513-522.
|
| 420 |
+
|
| 421 |
+
[30] W. D. Gray and D. A. Boehm-Davis, "Milliseconds matter: An introduction to microstrategies and to their use in describing and predicting interactive behavior," J. Exp. Psychol. Appl., vol. 6, no. 4, pp. 322-335, 2000.
|
| 422 |
+
|
| 423 |
+
[31] P. André, m. c. schraefel, J. Teevan, and S. T. Dumais, "Discovery is Never by Chance: Designing for (Un)Serendipity," in Proceedings of the Seventh ACM Conference on Creativity and Cognition, New York, NY, USA, 2009, pp. 305-314.
|
| 424 |
+
|
| 425 |
+
[32] A. Wexelblat and P. Maes, "Footprints: History-rich Tools for Information Foraging," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1999, pp. 270-277.
|
| 426 |
+
|
| 427 |
+
[33] B. B. Bederson and J. D. Hollun, "Pad++: A zooming graphical interface for exploring alternate interface physics," in In Proceedings of User Interface and Software Technology, 1994.
|
| 428 |
+
|
| 429 |
+
[34] C. Plaisant, "The Challenge of Information Visualization Evaluation," in Proceedings of the Working Conference on Advanced Visual Interfaces, New York, NY, USA, 2004, pp. 109-116.
|
| 430 |
+
|
| 431 |
+
[35] J. Jeng, "What Is Usability in the Context of the Digital Library," Inf. Technol. Libr., vol. 24, no. 2, pp. 47-57, 2004.
|
| 432 |
+
|
| 433 |
+
[36] J. Teevan et al., "Visual Snippets: Summarizing Web Pages for Search and Revisitation," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 2023-2032.
|
| 434 |
+
|
| 435 |
+
[37] A. Woodruff, A. Faulring, R. Rosenholtz, J. Morrsion, and P. Pirolli, "Using Thumbnails to Search the Web," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2001, pp. 198-205.
|
| 436 |
+
|
| 437 |
+
[38] M. Rooke, T. Grossman, and G. Fitzmaurice, "AppMap: Exploring User Interface Visualizations," in Proceedings of Graphics Interface 2011, School of Computer Science, University of Waterloo, Waterloo, Ontario, Canada, 2011, pp. 111-118.
|
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/QhN4tUZd8r/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,357 @@
§ PAPER FORAGER: SUPPORTING THE RAPID EXPLORATION OF RESEARCH DOCUMENT COLLECTIONS

Author One, Author Two, Author Three

Fig. 1. Three views of the Paper Forager system: (A) the initial state of the system showing all 5,055 papers in the sample corpus from the ACM CHI and UIST conferences, (B) the filtered results showing only the papers containing an individual keyword, and (C) a sample paper overview page, which further allows a user to click on a page to read the content.

Abstract- We present Paper Forager, a web-based system which allows users to rapidly explore large collections of research documents. Our sample corpus uses 5,055 papers published at the ACM CHI and UIST conferences. Paper Forager provides a visually based browsing experience, allowing users to identify papers of interest based on their graphical appearance, in addition to providing traditional faceted search techniques. A cloud-based architecture stores the papers as multi-resolution images, giving users immediate access to reading individual pages of a paper, thus reducing the transaction cost between finding, scanning, and reading papers of interest. Initial user feedback sessions elicited positive subjective feedback, while a 24-month external deployment generated in-the-wild usage data which we analyze. Users of the system indicated that they would be enthusiastic to continue having access to the Paper Forager system in the future.

Index Terms-literature review, document search, document browsing, corpus visualization
§ 1 INTRODUCTION

Literature reviews can be a long and tedious task, requiring information seekers to sort through a large number of documents and follow extended chains of related research. With paper proceedings, users can easily scan and read any of the papers, but finding specific papers can be difficult.

In contrast, online digital libraries and search systems improve the ability to find specific papers of interest. A number of new systems have been developed [1]-[6] which provide advanced faceted search and filtering capabilities. However, these systems are driven by metadata and textual content and ignore visual qualities such as figures, graphics, layout, and design. Furthermore, such systems require the user to download the source PDF file before the paper can be read in detail. We seek a single system that can support a continuous transition between finding, scanning, and reading documents within a corpus.

* Submitted to GI 2021

Web technologies such as DeepZoom [7] and Google Maps support browsing of extremely large image-based data sets through the progressive loading of multi-resolution images. This type of architecture is beneficial in that it gives users rapid access to detailed content. However, we are unaware of any prior systems which have used such an architecture for document exploration.

In this paper we present Paper Forager, a system to support the rapid filtering and exploration of a collection of research papers. Paper Forager relies on a cloud-based architecture, storing the papers as multi-resolution images that can be progressively downloaded on demand. By using this architecture, we allow the user to transition from browsing an entire corpus of thousands of papers to reading any individual page within that corpus, within seconds. In doing so, we accomplish our goal of reducing the transaction cost between finding, scanning, and reading papers of interest.
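The arithmetic behind this kind of multi-resolution pyramid can be sketched briefly. This is an illustrative sketch only: the 256-pixel tile size, the level numbering, and the helper names are assumptions for exposition, not details of the DeepZoom format or of Paper Forager's actual storage scheme.

```python
import math

TILE_SIZE = 256  # assumed tile edge length in pixels


def pyramid_levels(width: int, height: int) -> int:
    """Number of zoom levels so the coarsest level fits in a single tile."""
    return max(0, math.ceil(math.log2(max(width, height) / TILE_SIZE))) + 1


def tiles_at_level(width: int, height: int, level: int, max_level: int):
    """Tile grid dimensions at a given level (0 = coarsest).

    Each finer level doubles the resolution, so a client can first fetch the
    one coarse tile, then progressively request finer tiles for the region
    actually in view.
    """
    scale = 2 ** (max_level - 1 - level)
    cols = math.ceil(width / scale / TILE_SIZE)
    rows = math.ceil(height / scale / TILE_SIZE)
    return cols, rows
```

For a 4096x4096 composite image this yields five levels, with tile grids growing from 1x1 at the coarsest level to 16x16 at full resolution, which is what makes "browse the whole corpus, then zoom to one page" cheap over the network.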
Our main research contribution is the development of a novel system for literature review, which synthesizes previously explored concepts such as faceted search and zooming-based interfaces. We present the design and implementation of Paper Forager and its associated architecture, implemented on a sample corpus of 5,055 papers from the ACM CHI and UIST conferences. Additionally, we present results gathered from initial user feedback and a 24-month external deployment of the system. Users of the system felt it was easy and enjoyable to use, and the majority indicated that they would like to continue using Paper Forager in the future.
§ 2 RELATED WORK

§ 2.1 FACETED SEARCH

Faceted search allows users to explore a collection by filtering on multiple dimensions. While powerful, representing all of the available options in a user interface can be problematic [8]. Many papers have looked at improving the faceted searching experience. FacetLens [9] represents facets as nested areas on the interface, and FacetAtlas [10] displays the relationships between related facets through a weighted network diagram and colored density map. PivotSlice [6] uses a collection of research papers as a sample corpus and allows users to explore relationships between facets using direct manipulation. The faceted search system in Paper Forager is designed to be more approachable for new users than the above systems, at the expense of being less versatile in the types of queries which can be performed.

§ 2.2 VISUAL DOCUMENT BROWSING

Numerous research projects have explored ways of visually browsing a collection of documents.

The WebBook and Web Forager [11] pre-loaded and rendered web pages so they could be rapidly flipped through, and more recently, Hong et al. [12] looked at improving the digital page-flipping experience. Document Cards [13] extracts important terms and images from a document and displays them in compact representations.

The DocuBrowse system [14] is designed to browse and search for documents in large online enterprise document collections. Similar to Paper Forager, DocuBrowse includes both a faceted search interface and visual thumbnails of results. While source content can be opened, it is not clear how long it would take to download and view an individual document. Paper Forager expands upon ideas from the DocuBrowse interface, and uses a cloud-based architecture to support rapid viewing through the progressive loading of multi-resolution images. Paper Forager also takes advantage of connections between papers, such as citation networks, while DocuBrowse supports a wider range of file types without examining their interconnectivity.

While not directly related to document browsing, the PhotoMesa system [15] allows zooming into a large number of images which are grouped and sorted by available metadata. Similarly, the PivotViewer component of the Silverlight framework [16] supports faceted searching of a collection of images based on associated metadata. Results are displayed using a dynamically resizing grid of images, using the Silverlight Deep Zoom technology [7]. We are unaware of attempts to use this type of technology for the exploration of research document collections. Paper Forager implements an architecture similar to PivotViewer, but with a customized design and interface for the purpose of rapidly exploring a corpus of research literature.

§ 2.3 RESEARCH LITERATURE EXPLORATION TOOLS

There are many deployed systems which provide search access to collections of research papers, including Google Scholar [17], Mendeley [18], CiteSeerX [19], Microsoft Academic Search [20], and the ACM Digital Library [21]. For a thorough analysis, readers are directed to Gove et al.'s evaluation of 14 such systems [4], which highlights the strengths and weaknesses of each system.

There are also research systems which have looked at the topic of research literature exploration. Aris et al. [1] and PaperLens [5] are visualization tools which look at paper metadata to show temporal patterns of paper publication, and each uses citation links among papers to explore a field's rate of growth and identify key topics. Along similar lines, the PULP system [22] uses reinforcement learning to find and present a visualization of how the topics in a corpus of research papers have changed over time.

GraphTrail [2] is a system for exploring large general-purpose networked datasets, and used a corpus of ACM CHI papers as a sample database. GraphTrail supports the piecewise construction of complex queries while keeping a history of the steps taken, which allows for easy backtracking and modification of earlier stages. Systems such as Citeology [23] and CiteRivers [24] support exploring scientific literature through their citation networks and patterns, with CiteRivers also including additional data about the document contents. PaperQuest [25] aims to help researchers make efficient decisions about which papers to read next by displaying the minimum amount of relevant information, and by considering papers in which the researcher has already displayed an interest.

Another research exploration tool is the Action Science Explorer (ASE) [3], [4]. The ASE system uses a citation network visualization in the center of the interface and makes use of citation sentence extraction, ranking and filtering by network statistics, automatic document clustering and summarization, and reference management.

The main difference between Paper Forager and the above systems is that while these existing systems all perform some amount of analysis, visualization, or filtering based on the metadata or text of a paper, they hide the design, layout, and images of the actual research documents. Furthermore, with existing systems, users must wait until the document is downloaded before reading the paper in detail. Paper Forager provides a basic level of faceted metadata searching while emphasizing the visual content of the documents, and provides immediate access to reading individual pages of the documents.

An example of a visually-focused research exploration tool is the UIST Archive Explorer [26], which was created for the 20th anniversary of the UIST conference and provided an interface for browsing the collection of papers previously published at UIST. Papers could be viewed by year, keyword, or author. Selecting a paper caused the pages of the paper to be arranged in a row, and the user could zoom in for more details. Compared to Paper Forager, the UIST Archive Explorer used a smaller corpus of documents (578 vs. 5,055), was hosted locally (whereas Paper Forager uses a cloud-based architecture), and did not allow for navigation between papers based on their citation networks.
§ 3 THE LITERATURE REVIEW PROCESS

The theory of information foraging [27] suggests that information seekers try to find documents with potentially high value and then use the available information "scent" cues to determine which documents, if any, are worthwhile to examine further. We can thus think of the process of literature review as being composed of three main stages:

Finding: filtering the collection of all possible papers down to those you might want to read, either by browsing the collection or by explicitly searching.

Scanning: making a decision for each individual paper as to whether it is worthwhile to read, based on the available information scent cues.

Reading: looking through the content of the paper for useful information.

In order to maintain flow [28] during the literature review process, it is desirable for the transitions between the stages to be as smooth as possible. Research exploring the dynamics of task switching [29], [30] has shown that small interaction improvements can cause categorical behavior changes that far exceed the benefits of decreased task times.

When papers were primarily distributed in printed proceedings, the finding phase of the process was inefficient. However, once a collection of possibly relevant papers was found, the process of scanning the papers consisted of flipping through the pages. The information scent cues [27] presented to the information gatherer to make a reading decision consisted of what was visible in the printed form of the paper - namely the title, text, figures, and the paper's overall graphic design and layout. Based on these cues, a decision to read or not would be made, and the cost of transitioning between the scanning and reading phases was minimal (Fig. 2).

With digital libraries, the finding phase of the process is much more efficient, and the transition cost between finding and scanning is greatly reduced. However, the available information scent cues presented during the scanning phase are reduced to basic textual information such as the title, authors, and sometimes the abstract of the paper. Advanced paper browsing tools such as ASE [3] provide additional functionality in the finding phase as well as incorporating additional scent cues to inform the reading decision, such as visualizations and statistical measures of keywords, authorship, and citation networks. But still, the images and visual design of the original paper are not available to the researcher during the scanning phase; the graphics of a paper are not visible until after the decision has been made to move from scanning to reading. Additionally, the transaction cost when deciding to read a paper is relatively high: the paper needs to first be downloaded, which even on a fast network can often take between 3 and 15 seconds, and then it is opened for reading in a secondary application (or at least a new window within the same application). Besides the time cost, the context switch to a secondary application can disrupt the flow of the information gathering process.

Fig. 2. Four main approaches to paper discovery and the context switches required between the various stages of the literature review process.

§ 3.1 DESIGN GOALS

With Paper Forager, we want to take the quick searching and filtering benefits of modern advanced paper discovery systems and combine them with the visual qualities and benefits of paper proceedings. Additionally, we want to reduce the cost of transitioning between stages (Fig. 2), which will improve the flow of the literature review process and encourage a wider exploration of the paper space. By supporting more exploration, the system may put users in a position to make more serendipitous discoveries [31].
§ 4 PAPER FORAGER

We created Paper Forager to address the problems encountered while exploring large collections of research papers. As a sample corpus we used 5,055 papers published at the ACM CHI and UIST conferences. The metadata was collected using the Microsoft Academic Search API [20], and the source documents were automatically downloaded using links from Google Scholar where possible, and manually downloaded from the ACM DL otherwise.

The Paper Forager interface is composed of a set of interface controls at the top of the screen and a main display area below. On startup, Paper Forager arranges all documents in the collection in the main display area, sorted with the oldest papers at the top and the newest at the bottom (Fig. 1A).

§ 4.1 INTERFACE CONTROLS

Along the top of the window are the interface controls for refining the displayed collection of papers, which include the search field, histogram filters, author list, history bar, and saved paper controls (Fig. 3).

§ 4.1.1 SEARCH FIELD

On the left is the search field (Fig. 3), which initiates keyword searches of the titles and abstracts of the papers, as well as searches for authors and conference titles. The search system will automatically recognize author and conference names. For example, a search for "database" would find all papers with the term "database" in the title or abstract (Fig. 1B), whereas a search for "Buxton" would be recognized as an author search for "William Buxton" and would find all papers published by that author. Additionally, searching for "CHI" or "UIST" will return all papers published at the respective conference, and adding a year to the end of a search term, such as "CHI 2007", modifies the filters to show only the papers from the 2007 edition of the CHI conference.

By default, entering a term in the search field will perform a new query using the entire collection as input, but prefacing a search term with a plus sign (+) creates an additive search filter. For example, if after searching for "Buxton" the user searches for "+mouse", only papers authored by William Buxton which include the term "mouse" will be displayed.
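The search-field behaviour described above can be sketched as a small classifier over the entered text. This is a hedged sketch of the interaction logic only: the author and venue tables are tiny illustrative stand-ins for the real corpus metadata, and `parse_query` is a hypothetical helper, not code from the system.

```python
import re

# Illustrative stand-ins for the corpus metadata lookup tables.
AUTHORS = {"buxton": "William Buxton", "scott klemmer": "Scott Klemmer"}
VENUES = {"CHI", "UIST"}


def parse_query(text: str) -> dict:
    """Classify a search-field entry into a structured query."""
    # A leading "+" marks an additive filter applied to the current results.
    query = {"additive": text.startswith("+")}
    text = text.lstrip("+").strip()
    # "CHI 2007" style: a venue name followed by a year adds a year filter.
    m = re.fullmatch(r"(\w+)\s+(\d{4})", text)
    if m and m.group(1).upper() in VENUES:
        query.update(kind="venue", venue=m.group(1).upper(), year=int(m.group(2)))
    elif text.upper() in VENUES:
        query.update(kind="venue", venue=text.upper())
    elif text.lower() in AUTHORS:
        query.update(kind="author", author=AUTHORS[text.lower()])
    else:
        query.update(kind="keyword", term=text)
    return query
```

Under these assumptions, "Buxton" resolves to an author query, "+mouse" to an additive keyword filter, and "CHI 2007" to a venue query with a year filter.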
§ 4.1.2 HISTOGRAM FILTERS

Beside the search field are histogram filters displaying the number of papers published in each year and the relative distribution of the number of citations each paper has received (Fig. 4). Users can click the Year and Citations headings to set the sorting order of the papers in the main display area. As search events occur, the histograms dynamically update and animate to reflect the distribution for the actively displayed grid of papers. Under each histogram is a dual-value slider which allows the selection of displayed papers to be limited to a specific range of years or number of citations.

Fig. 4. (A) Histogram filters and Author List for all papers in the CHI and UIST corpus and (B) after searching for the term "tangible".
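The dual-value sliders amount to inclusive range predicates combined with AND: a paper stays visible only if both its year and its citation count fall inside the selected bounds. A minimal sketch, assuming papers are dicts with `year` and `citations` fields (field names are an assumption):

```python
def apply_range_filters(papers, year_range=None, citation_range=None):
    """Keep papers whose year and citation count fall in the given
    inclusive (lo, hi) ranges; a range of None means 'no constraint'."""
    def in_range(value, bounds):
        return bounds is None or bounds[0] <= value <= bounds[1]

    return [p for p in papers
            if in_range(p["year"], year_range)
            and in_range(p["citations"], citation_range)]
```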
§ 4.1.3 AUTHOR LIST

To the right of the filter histograms is a list of the top authors of the papers within the current search results (Fig. 3). For example, Fig. 4A shows that Ravin Balakrishnan has the most papers overall in the database, while Fig. 4B shows that Hiroshi Ishii has the most papers for the search term "tangible". Clicking on an author name is equivalent to creating an additive search for the author, so in Fig. 4B, clicking on "Scott Klemmer" is equivalent to entering "+Scott Klemmer" in the search field, and will result in showing all papers for the term "tangible" which have Scott Klemmer as an author.
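Ranking authors by paper count within the current result set is a straightforward frequency count; a minimal sketch using Python's `collections.Counter` (the `authors` field name is an assumption):

```python
from collections import Counter


def top_authors(papers, n=10):
    """Rank authors by how many papers in the current results they wrote."""
    counts = Counter(author for p in papers for author in p["authors"])
    return counts.most_common(n)
```

Recomputing this over the filtered results on every search event is what keeps the list in sync with the histograms, as in the "tangible" example above.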
§ 4.1.4 HISTORY BAR

Previous research has demonstrated the benefits of keeping a history of actions during information foraging [2], [32]. The history bar in Paper Forager is designed for this purpose and provides a way for users to see how they arrived at their current view and the ability to easily backtrack if desired.

Fig. 3. The interface controls of Paper Forager.
Fig. 5. History tokens for (A) search terms, (B) conferences, (C) authors, (D) saved paper lists, (E) individual papers, (F) references of a paper, (G) citations of a paper, and tokens with filters applied (H-K).

Each type of search event has its own history token icon (Fig. 5, A-G), and as the histogram filter sliders are adjusted, the ranges are displayed beside the description of the active search (Fig. 5, H-K). The number of results matching the query is displayed in square brackets at the end of the history token.

Fig. 6. Initial state of the history bar (A) and changes after a series of operations: (B) searching for "mouse", (C) clicking on the author Brad Myers, (D) adjusting the year and citation filters, (E) selecting a paper, (F) viewing that paper's citations, and (G) selecting another paper.

Each search or filtering event is accompanied by a new token in the history bar (Fig. 6). As the list of tokens grows longer, the previous ones are minimized to show only their icon, and their full description is displayed in a tooltip.
Inserted between the history tokens are three different separation symbols (Fig. 7): a vertical line when the new state is independent from the previous one, a plus sign when an additive query is entered, and a right-facing arrow when looking at references or citations of a particular paper. Clicking on a token in the history list will remove all subsequent query events, leaving the clicked token as the active search state. The tokens also include an 'x' button to remove the query from the history list.

Fig. 7. History token separators.
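The history-bar behaviour above (append a token on each query, truncate on click, delete on 'x') reduces to a small stack-like structure. This is a sketch of the interaction logic under assumed names, not the system's actual implementation:

```python
class HistoryBar:
    """Trail of query tokens, rooted at the 'All Papers' state."""

    def __init__(self):
        self.tokens = [("all", "All Papers")]

    def push(self, kind, label):
        """A new search or filter event appends a token."""
        self.tokens.append((kind, label))

    def click(self, index):
        """Clicking a token drops all later tokens, making it active."""
        self.tokens = self.tokens[: index + 1]

    def remove(self, index):
        """The 'x' button deletes one token; the root token stays."""
        if index > 0:
            del self.tokens[index]
```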
§ 4.1.5 SAVED PAPER CONTROLS

Paper Forager allows users to mark papers as saved. The collection of the user's saved papers, as well as all papers saved by the user community, can be accessed through links in the top right corner (Fig. 3). Besides accessing the collection of saved papers for viewing, clicking the "Reference List" button copies a formatted list of paper references, suitable for the "References" section of a paper, to the user's clipboard.
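Generating that formatted list is a simple transformation over the saved papers' metadata. The citation style, sort order, and field names below are illustrative assumptions, since the paper does not specify the exact output format:

```python
def format_reference(paper):
    """One reference line in an assumed 'Authors. Title. Venue Year.' style."""
    authors = ", ".join(paper["authors"])
    return f'{authors}. {paper["title"]}. In Proc. {paper["venue"]} {paper["year"]}.'


def reference_list(saved_papers):
    """Join the saved papers into clipboard-ready text, oldest first."""
    return "\n".join(format_reference(p)
                     for p in sorted(saved_papers, key=lambda p: p["year"]))
```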
§ 4.2 MAIN DISPLAY AREA

The main display area offers a collection view, a paper view, and a page view.
§ 4.2.1 COLLECTION VIEW

The collection view is used to display all papers that match the current query and filters. Papers within the collection are sized so that all results are initially within view. As searches are performed, the grid of papers is animated to remove the papers which do not satisfy the query and re-arrange those that do to fill the available space (Fig. 8).

Fig. 8. Stages of the reordering animation. (A) initial state, (B) removed papers fade away, (C) remaining tiles move and resize into new position.

The total animation time is 1.5 seconds: the outgoing tiles fade out for the first 0.75 seconds, and the remaining tiles rearrange for the next 0.75 seconds. A similar animation occurs when papers not previously on the screen are added.
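Sizing the grid so that every result fits in view can be framed as choosing the column count that maximizes tile size for portrait-shaped tiles. The sketch below assumes US-letter-proportioned pages and a brute-force search over column counts; it illustrates the layout constraint, not the system's actual algorithm:

```python
import math


def grid_layout(n_papers, view_w, view_h, aspect=8.5 / 11):
    """Pick a column count so all n portrait tiles fit the viewport,
    maximizing tile size. `aspect` (width/height) assumes letter pages."""
    best_cols, best_w = 1, 0.0
    for cols in range(1, n_papers + 1):
        rows = math.ceil(n_papers / cols)
        # Tile width is limited horizontally by columns and vertically
        # by the height available per row (converted through the aspect).
        tile_w = min(view_w / cols, (view_h / rows) * aspect)
        if tile_w > best_w:
            best_cols, best_w = cols, tile_w
    return {"cols": best_cols, "tile_w": best_w, "tile_h": best_w / aspect}
```

With four papers in a square viewport this picks a 2x2 grid, since one long row or column would waste most of the space.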
As the cursor moves around the grid of displayed papers, the paper under the cursor highlights and a large tooltip is displayed with the paper's title, abstract, authors, year, conference, and number of citations (Fig. 9). Clicking on a paper will bring that paper into focus in the paper view.

Fig. 9. Example of a paper tooltip.

§ 4.2.2 PAPER VIEW
Once a paper is selected, either by clicking on a single paper, or by executing a query with only one result, it is displayed in the paper view (Fig. 10). Here, the composite image of the paper is fit to the main canvas area, with additional metadata including the title, abstract, authors, venue, and year displayed on the right. A badge icon can be clicked to add the paper to the user's list of saved papers. Clicking an author's name will load all papers by that author (equivalent to searching for the author's name), and there is also a link to follow the DOI link for the paper to view the official page in the ACM Digital Library.
The lower section of the side panel contains thumbnails for each of the papers in the corpus which are referenced by the active paper, as well as all the papers which cite the active paper. Hovering over these thumbnails triggers the associated paper tooltip (Fig. 9) and clicking on a paper thumbnail adds the paper to the history bar and brings it into focus. Clicking on either of the "References" or "Citations" labels takes the system back to the collection view, displaying all of the referenced/cited papers. Below the paper image is a button to return to the paper collection view, as well as buttons to navigate to the previous and next papers in the current collection. For example, after searching for "mouse" and selecting a paper, repeatedly clicking on "next paper" will let you flip through all papers for the term "mouse". This functionality is also accessible through the left and right arrow keys.

Fig. 10. Interface elements of the single paper view.

§ 4.2.3 PAGE VIEW
Clicking on a single page animates the display to fit that page into the view (Fig. 11), allowing users to read individual pages. In this page view, the navigational controls and arrow keys change to support navigation between the pages of the document.

Fig. 11. The page view displays individual pages.

Once the last page in the paper is reached the view zooms back to the paper view, and subsequent navigation operations will navigate at the paper level. This enables an efficient workflow of first flipping through papers, then going through the pages of an interesting paper, and then coming back out to flip through more papers (Fig. 12). The layout of the main window is designed so that on 24" or larger monitors the body text of the focused page is large enough to be read comfortably. For smaller monitors, or for more detailed examination of a portion of a page, the page view supports zooming and panning with the mouse wheel and left mouse button respectively.
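
The nested flip-through workflow can be sketched as a tiny navigation state machine: arrow keys advance pages while in page view, stepping past the last page zooms back out, and paper-level steps move through the current collection. Structure, names, and the wrap-around at the end of the collection are our assumptions.

```python
# Illustrative sketch of the navigation in Fig. 12; not the system's code.

def next_view(view, paper, page, page_counts):
    """Advance one navigation step; view is 'paper' or 'page'.

    page_counts[i] is the number of pages of paper i in the collection.
    """
    if view == "page":
        if page + 1 < page_counts[paper]:
            return ("page", paper, page + 1)      # next page of same paper
        return ("paper", paper, None)             # past last page: zoom out
    # Paper-level navigation (wrap-around is an assumption).
    next_paper = (paper + 1) % len(page_counts)
    return ("paper", next_paper, None)
```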

Fig. 12. Workflow for navigating between and within papers. (Note: "Paper B" has only 4 pages.)

§ 4.2.4 PRELOADING IMAGES
On a reasonably fast broadband internet connection it takes approximately 2 to 3 seconds to download and display a composite paper image (such as in Fig. 10) on a 24" monitor. This is an unacceptable delay when trying to rapidly flip through a collection of papers. To address this, when a paper is brought into single paper view, the images for the previous and next papers are automatically downloaded and composited at the proper resolution so they can be displayed immediately when requested.
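
The preloading policy reduces to choosing which neighbors to prefetch when a paper gains focus. A minimal sketch, with the actual download/compositing left as a placeholder:

```python
# Sketch of the prefetch policy: when paper `focused` is in single paper
# view, also fetch its previous and next neighbors in the collection.
# Wrap-around at the collection ends is our assumption.

def papers_to_preload(focused, collection):
    """Return indices of the previous and next papers to prefetch."""
    n = len(collection)
    neighbors = [(focused - 1) % n, (focused + 1) % n]
    # In a one-paper collection there is nothing extra to fetch.
    return [i for i in neighbors if i != focused]
```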
§ 4.2.5 INTERACTION MODEL
The intent of the Paper Forager design is to support a primary interaction model of searching or filtering for relevant documents, and then clicking on papers or pages to enlarge their view and see them in more detail. Additionally, similar to zooming user interfaces [33], the collection, paper, and page views support interactive zooming and panning. We anticipate that even though the system supports freeform panning and zooming, users will prefer, and gravitate towards, the search/filter/click interaction model.

§ 4.3 SYSTEM IMPLEMENTATION
Paper Forager is implemented as an in-browser application using the Microsoft Silverlight framework. During development, this allowed the application to be used in browsers on both Mac OS X and Windows computers with the Silverlight runtime installed. Recent changes to the plug-in architectures of major browsers now limit the Silverlight runtime to Internet Explorer on Windows.

The components of the deployed system (Fig. 13) are hosted and stored using parts of the Amazon Web Services (AWS) framework. The application binaries, images, and metadata are stored on and hosted from an AWS Simple Storage Service (S3) instance. Usage log data and saved paper information are stored in separate AWS SimpleDB (SDB) tables. Due to cross-domain security policies which restrict communication of Silverlight applications, an AWS EC2 server hosts and interprets PHP scripts which facilitate communication between the application and the databases.

Fig. 13. System architecture diagram.

§ 4.3.1 IMAGE PYRAMIDS
To enable fast streaming of papers over the internet and allow the papers to be viewed at a range of resolutions, from very small thumbnails up to a size suitable for reading, papers were converted into a collection of "image pyramids" following the Microsoft Deep Zoom file format [7]. Each document is rendered at 14 resolutions, from the smallest size of 1 pixel square up to the original size of the image; in our case, 10,048 pixels wide by 6,098 pixels tall. At each resolution of the "pyramid", the images are divided into smaller "tiles" so that only the parts of the image which are needed at that resolution are downloaded (Fig. 14).

We tried maximum tile sizes of 256, 512, and 1024 pixels and found that 512 pixel square tiles provided the best performance for the types of images streamed with our system. On the client side, a Silverlight MultiScaleImage component handles downloading and compositing the tiles to display the image at the requested resolution.
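
The pyramid geometry is easy to reason about with a little arithmetic: each level halves the previous resolution, and each level is cut into 512-pixel tiles so only the visible region needs to be fetched. The helpers below are our own back-of-the-envelope sketch, following the Deep Zoom convention of halving down to 1 pixel.

```python
from math import ceil

TILE = 512  # tile edge length found to perform best in our tests

def level_size(width, height, levels_down):
    """Image size after halving `levels_down` times (rounding up)."""
    return (max(1, ceil(width / 2 ** levels_down)),
            max(1, ceil(height / 2 ** levels_down)))

def tiles_at(width, height):
    """Number of TILE x TILE tiles needed to cover one pyramid level."""
    return ceil(width / TILE) * ceil(height / TILE)

# At full resolution (10,048 x 6,098), a composite paper image needs
# 20 x 12 = 240 tiles; a half-resolution level needs far fewer.
```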

Fig. 14. Image Pyramid data format example.

§ 4.3.2 DATA PROCESSING
The original PDF versions of the papers go through a multi-stage processing pipeline to convert them into their multi-scale image pyramid format (Fig. 15). First, the PDF files are split into individual pages and converted to JPG image files at 300 dpi. Using the "Data Sets" feature of Adobe Photoshop, composite PSD files are created which combine all the pages of the paper into a single image (Fig. 16). The last step of the process converts the large combined image into the image pyramid format.
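
As a dry-run sketch, the three stages could be enumerated as shell commands per paper. Note the authors used Adobe Photoshop's "Data Sets" feature for compositing; `pdftoppm` is a substitute we chose for rasterization, and `composite_pages` / `make_pyramid` are hypothetical placeholder tools, not what the paper used.

```python
def pipeline_commands(paper_id):
    """Enumerate the three pipeline stages for one paper (dry run only)."""
    return [
        # 1. Split the PDF into per-page JPGs at 300 dpi
        #    (pdftoppm stands in for whatever rasterizer is used).
        f"pdftoppm -jpeg -r 300 {paper_id}.pdf {paper_id}-page",
        # 2. Composite all pages into one large image (the authors did
        #    this step with Photoshop; composite_pages is a placeholder).
        f"composite_pages {paper_id}-page*.jpg -o {paper_id}-combined.jpg",
        # 3. Convert the combined image into a Deep Zoom image pyramid
        #    (make_pyramid is a placeholder).
        f"make_pyramid {paper_id}-combined.jpg -o {paper_id}_files/",
    ]
```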

Fig. 15. Data processing pipeline.

The conversion process for each paper took approximately 1 minute on a workstation computer with 24 GB of RAM and dual 2.53 GHz Xeon processors, and the entire sample corpus of 5,055 papers took approximately 90 hours to process, producing ~1.9 million small .jpg images totalling ~54 GB of image data. Each paper can be processed independently, so the pipeline is well suited for parallelization or computation on remote clusters or servers.

§ 4.3.3 PAPER LAYOUT
If the paper has 5 or fewer pages, it uses the 5-page template; otherwise it uses the 10-page layout. This version of the system did not support papers with more than 10 pages, but it would not be difficult to extend this pattern one more level to a 17-page layout (1 large first page, and a 4-by-4 grid for subsequent pages).
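
The selection rule reduces to a few thresholds. A minimal sketch, including the 17-page extension proposed above (which was not in the deployed system):

```python
def layout_for(n_pages):
    """Choose a composite-image template by page count."""
    if n_pages <= 5:
        return "5-page"
    if n_pages <= 10:
        return "10-page"
    if n_pages <= 17:
        return "17-page"  # proposed extension: 1 large page + 4x4 grid
    raise ValueError("paper too long for the available templates")
```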

Fig. 16. Sample 5-page (left) and 10-page layouts (right).

We chose to combine all pages of each paper into a single image object before creating the image pyramid as a performance optimization, limiting the number of individual objects the system needs to display at any one time. Alternative strategies are discussed as future work.

§ 5 EVALUATION
Quantifying the benefits of information visualization systems is notoriously tricky [34]. To gain insights and usage observations related to our system, we ran two evaluations: a small controlled session to collect initial user feedback, and then a broad, long-term external deployment.
§ 5.1 INITIAL USER FEEDBACK
We conducted a qualitative user study to evaluate the features and usability of the Paper Forager system. We wanted to collect initial feedback from users, and validate that some simple (and not so simple) tasks can be accomplished in a reasonable amount of time. We recruited 6 participants who were taking an HCI course at a local university (4 male, 2 female, ages 21-24). These students had recently completed a project which required them to gather references for an HCI topic of their choice. As such, they were ideal candidates to give feedback on our system and provide a comparative analysis of Paper Forager against the systems and strategies that they had independently used for their literature reviews.

The feedback sessions began with a 5 minute overview demonstrating the main features of the system, after which the participants explored the system on their own for an additional 5 minutes. The sessions concluded with the participants completing a series of 8 tasks, of generally increasing difficulty (Fig. 17).
The tasks were devised such that some could likely be accomplished with a standard digital library search system, some would benefit from faceted searching capabilities, and three of them (c, e, and h) would be prohibitively difficult to accomplish without the added capabilities afforded by the Paper Forager system. The goal of the tasks was to encourage the participants to try different aspects of the system rather than cover all possible use cases for the application. After completing the tasks, participants were asked for thoughts about the system and suggestions for improvements.
§ 5.2 RESULTS
All 6 users were able to complete the 8 tasks. While the tasks were not specifically designed to test the speed of using the Paper Forager system compared to traditional digital libraries, task completion times were recorded to see the range of completion times for the various tasks across the set of participants.
Mean task completion times ranged from 33 seconds (task 1) to 3 minutes and 45 seconds (task 8). In addition to the 6 study participants, a Paper Forager user with approximately 3 hours of experience was asked to perform the tasks to benchmark expert performance levels. The longer time for the last task was due to participants not always knowing which part of the paper to read in detail to find the necessary information (Fig. 17).

It is interesting to note that for tasks 1 through 7, the fastest times from the "novice" study participants after their brief introduction to the system are similar to the completion times from the "expert" user, suggesting that some of the novice users were becoming proficient with the system after only a short amount of time. In the comments section of the survey, half (3 of 6) of the participants mentioned that their favourite feature was the ability to string together multiple queries with the "+" operator, and 2 of 6 commented that they particularly liked that they could see thumbnails for the referenced and cited papers in the paper view. During the 5 minute exploration phase, all participants experimented with the dynamic zooming and panning functionality using the mouse. However, during the tasks, they chose to use the search/filter/click interaction style. Additional features which were requested included auto-completion in the search field, additional conferences in the corpus, and more social sharing capabilities. Overall, participants were extremely enthusiastic about the system, and were hopeful that it would be publicly released so they could continue to use it.

Fig. 17. Task completion times for the 6 study participants, as well as times from one 'expert' user.

Fig. 18. Images used in questions 3 (left) and 8 (right).

With the overall positive feedback on the system and the confirmation that users would be able to complete useful tasks with it, we proceeded with a broader deployment.

§ 5.3 EXTERNAL DEPLOYMENT
To gain additional feedback and in-the-wild usage data, as well as to validate the deployability of our cloud-based architecture, we conducted a long-term external deployment of the system. To maintain compliance with ACM copyright policies (as the papers used in the system are from ACM CHI and ACM UIST), access to Paper Forager was restricted to users with a private ACM account with access permissions for CHI and UIST papers, and to IP ranges with a site license to the ACM Digital Library (such as most post-secondary institutions). The system was deployed and available for use continuously over a 2 year period.

§ 5.4 USAGE DATA AND FEEDBACK
Over the 24-month deployment period, 493 log-in events were registered from 153 unique users, with 49 of the users logging into the system more than once. There were a number of "regular" users: 20 users logged into the system more than 5 times each, and 11 users logged more than 100 minutes of active usage. A total of 1,887 papers were viewed in "paper view mode" (Fig. 10) and 1,851 searches were performed over the course of the deployment.

§ 5.4.1 TYPES OF USAGE
Since Paper Forager was designed to support the various stages of the literature review process (Finding, Scanning, and Reading), we analysed the log data to see if people were using the system in different ways, or if all users were using the system in a similar manner. To do this we looked at usage along two dimensions: Browsing vs. Searching, and Scanning vs. Reading.
§ 5.4.2 BROWSING VS. SEARCHING (METHODS OF "FINDING")
In this dimension we are looking at different ways users can locate papers which might be relevant, during the finding phase of the literature review process. Using the system in a more "browsing" manner would involve looking at collections of papers, following citation or reference links, and reading many tooltips. Alternatively, a more "search-based" approach to the finding process involves specifically entering search terms into the search field. This dimension is calculated as the user's ratio of "search" events to "browsing" (viewing collections, inspecting tooltips) events.
§ 5.4.3 SCANNING VS. READING
In the scanning phase of the literature review process, a user quickly looks at papers to figure out if they are worth reading. In Paper Forager, a reasonable proxy for a user spending a lot of time in the scanning phase is looking at many papers in the overview paper view mode, while zooming in to view many single pages in the page view could indicate a user spending a lot of time in the reading phase. This dimension is calculated as the ratio of "paper view" events to "page view" events. The 60 users of the system with multiple logins or more than 20 minutes of continuous usage have their activity plotted along these two dimensions in Fig. 19. Each point represents a user, with the size of the point proportional to the amount of activity for that user. Each axis spans a 25x difference in behaviour; that is, users at the bottom of the chart looked at 25x more individual pages than users at the top, and users on the left side performed 25x more searches than those on the right. It is interesting and encouraging to see that users exhibited such a wide range of usage behaviours. Even among the most active users (those with larger circles) we can see they are distributed around the plot, suggesting that the system can be successfully used for different stages of the review process.
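
The two log-derived dimensions can be computed as event-count ratios. The event labels below are our own names for the logged interactions; the ratios follow the definitions in the text (search vs. browsing events, and paper-view vs. page-view events).

```python
# Sketch of deriving the two usage dimensions from a per-user event log.

def usage_dimensions(events):
    """Return (searching_ratio, scanning_ratio) for one user's events."""
    counts = {}
    for e in events:
        counts[e] = counts.get(e, 0) + 1
    # Finding dimension: search events relative to browsing events
    # (viewing collections, inspecting tooltips).
    searching = counts.get("search", 0) / max(1, counts.get("browse", 0))
    # Scanning-vs-reading dimension: paper-view events relative to
    # page-view events.
    scanning = counts.get("paper_view", 0) / max(1, counts.get("page_view", 0))
    return searching, scanning
```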

Fig. 19. Usage log analysis showing usage patterns for finding behavior (x-axis) and scanning vs. reading (y-axis).

§ 5.4.4 FEEDBACK AND SUGGESTIONS
At the end of the deployment each user who had logged into the system was sent a short voluntary questionnaire (30 of 153 responded, a 20% response rate), where they were asked to answer five questions on a 5-point Likert scale (Strongly Disagree, Somewhat Disagree, Neither Agree nor Disagree, Somewhat Agree, Strongly Agree):

* I found Paper Forager easy to use.
* I found Paper Forager enjoyable to use.
* Paper Forager is a more effective way to research papers than the techniques/systems I have been using previously.
* Paper Forager is an efficient way to explore research papers.
* If kept up to date with papers in my field, I would use Paper Forager to explore research papers in the future.
The first four questions were based on the criteria outlined by Jeng [35] regarding which factors contribute to the usability of a digital library system (learnability, satisfaction, effectiveness, and efficiency). Results are shown in Fig. 20. In general, users felt Paper Forager was easy and enjoyable to use, and a majority of users said that, if kept up to date with papers in their field, they would continue to use Paper Forager in the future. Besides the subjective questions, users were also asked to provide details about features they liked and suggestions for improvement.

Fig. 20. Results from the subjective questions asked after the external deployment.

Several users relayed interesting ways in which they used the system. One Ph.D. student was writing his first paper with a new supervisor and wanted to ensure that his paper followed the general conventions that the supervisor had used in the past. By searching for the supervisor's name and rapidly flipping through his previous papers the student was able to get the answer to a number of questions about the supervisor's style:
How many figures does he usually include in a paper? Does he dock figures at the top and bottom of columns, or does he float them in the middle? Does he like using long figure captions? Does he use a particular color scheme for charts? How often does he include an explicit "Contributions" section? How does he typically word his conclusions?
Before the student had access to Paper Forager he was looking at a single example paper of the supervisor's to try and answer these questions; it was too much work to download and look at all of the supervisor's papers individually. With Paper Forager, each of these tasks took a very short amount of time and effort.
Another user (a 3D user interface researcher) mentioned they used Paper Forager not only in the process of writing papers, but for other tasks as well:

It is so extremely fast and easy to search various topics. You get an idea of what has been in a field, dig for follow up papers (in-depth search) or other related papers (breadth search). I have even used it to find the best reviewers for a paper, or find relevant researchers on any topic (committees, collaborations, etc.) This is just how the digital library should look! This tool has saved me HUGE amounts of time.
Finally, a grad student finishing up their Ph.D. mentioned that Paper Forager changed the way they approached writing papers:
It allowed me to rapidly compare papers to get a sense of structure and style. For instance, when I was writing my own paper, I would quickly look at several examples from related papers to understand what was the typical approach.
The responsiveness also allowed me to view more related papers. With Google Scholar or the ACM DL, it's often several clicks to view papers, and I have to download the PDF first; with Paper Forager I can quickly look at a paper and decide whether it's relevant, so I would actively look at more papers than I would have otherwise.
A common issue with the system was that it covered too few conferences, and users wanted the collection expanded to cover more of their interests. Several users (particularly those with slower computers and larger monitors) had trouble with performance, finding the interface to not be as responsive as they would have liked.
Many users mentioned liking that the references and citations were prominently displayed in the side panel of the paper view, and suggested that the links between related papers could be emphasized even more by showing the relationships in the main collection view. A number of users said they liked the collection view as they often remember papers by their "Fig. 1"; however, for very large collections (such as the entire 5,055 paper collection shown on launch), some users felt the view was not very helpful, and suggested alternatives such as using a different representation when papers are displayed very small which could more clearly convey relevant information.

§ 6 DISCUSSION & FUTURE WORK
The Paper Forager system was designed and optimized to work with collections on the order of 10,000 research documents. It will be interesting to look at how the interaction model should change for much larger collections of papers (an entire digital library for example), as well as how the performance of the system would be affected. Additionally, we would like to explore using the system with other collections of documents with citation networks such as patent applications or court proceedings.
Related to system performance, Paper Forager combines all pages of each paper into a single image object. It would also be interesting to explore the design opportunities that would arise from storing each page of the paper individually. This would allow for more varied arrangements such as selectively showing only the first page of a paper, arranging the pages of each paper in a row, or highlighting the pages with the most figures. In the time since the system was first developed, Silverlight as a technology has become less well supported (notably, the Silverlight plug-in will no longer run in the Chrome browser). Re-engineering the system as an HTML5/JavaScript web application would be worthwhile.

To preserve the design and layout work the authors put into creating their papers, we maintained the formatting from the original document. However, we are interested in exploring different representations for the papers when they are at small sizes such as those explored in previous work [26], [36], [37]. It would also be interesting to consider automated approaches for determining good miniaturized representations of research papers and other types of documents.
We would also like to look at ways of annotating the thumbnail images to show aspects of the metadata, such as number of citations or which papers have been saved most often. A coloring technique similar to the one used in AppMap [38], where the thumbnails are shaded based on one variable and sorted by another, could lead to interesting discoveries. The searching and filtering capabilities of Paper Forager were purposefully simplified to improve the approachability of the system, but it would be useful to explore combining the visual aspects of Paper Forager with an advanced paper filtering system such as ASE [3], [4] or a visualization of the citation space such as Citeology [23].

Using an image format to display papers has some downsides compared to viewing the actual PDF file, even when the image is at a high resolution. For example, users are unable to select text from a paper in Paper Forager. We believe there is great potential in a hybrid system where multi-scale images would be used to immediately display the paper while the PDF file loads in the background. Once the PDF is loaded, it could seamlessly replace the multi-scale image representation.

The ACM paper template contains the guidance "Please read previous years' proceedings to understand the writing style and conventions that successful authors have used." We agree that this is a useful, although laborious, task for prospective authors, and hope that Paper Forager could serve as a mechanism to simplify this process.
§ 7 CONCLUSION
With Paper Forager we have created a cloud-based system which allows users to rapidly explore a collection of research articles. Our tests of the system produced positive feedback from users who overall agreed that Paper Forager was easy and enjoyable to use while being effective and efficient. We believe our work fills an important gap in existing systems for exploring document collections, allowing users to seamlessly transition between finding, scanning, and reading documents of interest. We hope our work can inspire future research and development in the area.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/UMerutSI1p/Initial_manuscript_md/Initial_manuscript.md
Contour Line Stylization to Visualize Multivariate Information
Category: n/a

Figure 1: (left) A geographic map, and the contour plots of four climatic parameters A (albedo), B (soil moisture), C (pressure), and D (temperature) on a part of the map. (right) Four of our five designs that encode B, C, and D along the contour lines of A.

## Abstract
Contour plots are widely used in geospatial data visualization as they provide natural interpretation of information across spatial scales. To compare a geospatial attribute against others, contour plots for the base attribute (e.g., elevation) are often overlaid, blended, or examined side by side with other attributes (e.g., temperature or pressure). Such visual inspection is challenging since overlay and color blending both clutter the visualization, and a side-by-side arrangement requires users to mentally integrate the information from different plots. Therefore, these approaches become less efficient as the number of attributes grows.
In this paper we examine the fundamental question of whether the base contour lines, which are already present in the map space, can be leveraged to visualize how other attributes relate to the base attribute. We present five different designs for stylizing contour lines, and investigate their interpretability using three crowdsourced studies. Our first two studies examined how contour width and number of contour intervals affect interpretability, using synthetic datasets where we controlled the underlying data distribution. We then compared the designs in a third study that used both synthetic and real-world meteorological data. Our studies show the effectiveness of stylizing contour lines to enrich the understanding of how different attributes relate to the reference contour plot, reveal trade-offs among design parameters, and provide designers with important insights into the factors that influence interpretability.
Index Terms: Human-centered computing-Visualization-Visualization techniques; Human-centered computing-Visualization-Visualization design and evaluation methods
## 1 INTRODUCTION
Contour plots are widely used to visualize geospatial information on two-dimensional maps. Contour lines and contour intervals are two important features of a contour plot. A contour line (isoline) represents a fixed threshold value and connects map points having that value. A contour interval corresponds to a range of values within the bounds indicated by two successive threshold values.
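
To make the contour-line definition concrete, a generic way to locate an isoline on a sampled grid is the cell-classification step of marching squares: the line for threshold c passes through any grid cell whose corner values straddle c. This is a standard illustration, not the paper's implementation.

```python
def cells_crossed(field, c):
    """Return (row, col) of grid cells whose corners straddle threshold c.

    field is a 2D list of scalar samples; a contour line at level c passes
    through exactly the cells where min(corners) < c <= max(corners).
    """
    crossed = []
    for r in range(len(field) - 1):
        for col in range(len(field[0]) - 1):
            corners = [field[r][col], field[r][col + 1],
                       field[r + 1][col], field[r + 1][col + 1]]
            if min(corners) < c <= max(corners):
                crossed.append((r, col))
    return crossed
```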
The simplicity and rich information found in contour plots make them a popular choice for infographic posters and in geospatial data analysis $\left\lbrack {5,{13},{30}}\right\rbrack$ . Contour lines provide us with a potentially-useful visualization resource - a set of points that are already on the map. We can leverage these points to show other data attributes along the contour line, which can provide insights into how other geospatial attributes relate to the base attribute. To the best of our knowledge, effectiveness of contour line stylization and the boundaries of human perception to interpret them are not well understood.
In this paper, we examine how to stylize contour lines to provide useful additional information to the viewer (Figure 1). We do not tie our work to any domain-specific application, but rather attempt to improve our understanding of various facets of contour line stylization. The contour plots may result from geospatial datasets, mathematical surfaces, or even scatterplot densities. There are, however, several motivating scenarios (e.g., analyzing historical change in contour lines or understanding correlation) where contour stylization may be useful. Figure 2 shows one such motivating example based on front prediction in meteorological analysis. The development of a front depends on several factors such as temperature, moisture, wind direction, and pressure. Figure 2 (left) shows front prediction from the National Oceanic and Atmospheric Administration (NOAA) Weather Prediction Center (WPC) archive, where the curved lines (red, blue, or mixed) correspond to various types of fronts (warm, cold, or stationary fronts, respectively). Such fronts can be derived using software or by painstaking inspection of the numbers plotted on the map representing various weather parameters. Figure 2 (right) shows our contour stylization for four weather variables (pressure, temperature, relative humidity, and precipitable water). The contour lines represent isobars (pressure); temperature, relative humidity, and precipitable water are encoded in red, blue, and white lines, respectively. Contour line stylization readily reveals some potential cold fronts (yellow curves 1, 3, and 5) and warm fronts (curves 2 and 4), which shows the potential of using contour line stylization alongside traditional visualizations.
Multivariate visualizations that encode data attributes into different preattentive perceptual features of a visual element (glyph) [3, 34, 41], such as size, shape, color, and texture, are typical ways to visualize geospatial information on a map. A well-known limitation of a glyph-based visualization is that it clutters the map [10]. While a dense overlay occludes the view of the base map (Figure 3 (left)), a sparse overlay compromises perception of geospatial connectedness and lacks the gradient information that naturally comes from a contour plot (Figure 3 (right)).
Our Contribution: We consider geospatial data with four attributes, A, B, C, and D, and encode B, C, and D along the contour lines of A. We design five visual encodings and investigate whether users can interpret the attribute values (high, low), trends (increasing or decreasing), and relationships (similar or opposite trends) along a contour line, or across a set of contour lines. Since the encoding position of B, C, and D is determined by A's contour line, users may want to vary the number of contouring thresholds for A, or use a different base contour plot. Therefore, we describe how to design a synthetic dataset to examine the influence of various design parameters through controlled experiments.

Figure 2: (left) Front detection by NOAA WPC. (right) Detection using one of our five visualization techniques.

Figure 3: Multivariate visualization with (left) glyphs occludes the map, and (right) grid stylization lacks the gradient information.
We conducted three crowdsourced studies that evaluate our designs. The first two studies reveal how contour width and the number of contour intervals influence the visual interpretability of our designs. The third study used both synthetic and real-world meteorological datasets to assess how the designs and datasets compared in terms of task completion time and accuracy, for common geospatial data analysis tasks. In addition to revealing insights into our designs, our experimental results also suggest that results obtained using synthetic datasets generalize to real-world datasets.
## 2 RELATED WORK
### 2.1 Multivariate Visualization on a Map
Geospatial data are often shown using choropleth maps [25, 28], and contour plots [27]. While choropleth maps and cartograms [14] reveal properties of a region, a contour plot helps to understand the data distribution on a map and find regions with similar properties. Data analysts often use color blending for finding probable correlations between two geospatial variables [15]. However, creating a high-quality bivariate choropleth or contour map requires careful choice of blending colors and textures [24].
Researchers have also attempted to construct trivariate choropleth maps using the CMY color model [6, 35]. Wu and Zhang [44] examined a 4-variate map that captures the contour band information for each variable in thin visual ribbons, and then overlays the ribbons for all four variables using four different colors. Overlaying glyphs [29, 39] or charts [4, 16] on a map is a popular way to visualize geospatial information. Glyphs are often designed to encode data into features that can be perceived through preattentive visual channels [43]. A rich body of visualization design research examines how humans perceive various combinations of geometric, optical, relational, and semantic channels; we refer readers to recent surveys [9, 17, 41] for a detailed review of glyph design. Glyph-based visualizations often require a careful glyph positioning technique [29, 44], since drawing glyphs for many data points on a map causes overlap.
Various texture metrics such as contrast, coarseness, periodicity, and directionality [26, 38] have been used to visualize multivariate data. Healey and Enns [20] introduced pexels that encode multidimensional datasets into multi-colored perceptual textures with height, density, and regularity properties. Shenas and Interrante [36] showed that color and texture can be combined to meaningfully convey multivariate information with four or more variables on a choropleth map.
### 2.2 Stylization of Lines and Boundaries
Stylized lines naturally appear in the visualization of trajectory data. For example, traffic flow data are often color-coded on road networks as heatmaps [23, 40]. Andrienko et al. [1] extracted characteristic points from car trajectories and aggregated them to create flows between cellular areas to reveal movement patterns in a city. They used stylization to depict various information about the aggregated flows. Huang et al. [23] modeled taxi trajectories using a graph; they stylized the streets based on node centrality and overlaid rose charts to visualize other traffic information. Perin et al. [2, 33] investigated combinations of thickness, a monochromatic color scheme, and tick mark frequency to encode time and speed on a two-dimensional line. They observed that encoding speed with a color scheme and time with one of the other two features improved user perception.
Geographic cluster visualization and map generation techniques have also considered line stylization. Christophe et al. [8] proposed a pipeline for generating artistic and cartographic maps that integrates linear stylization, patch-based region filling and vector texture generation. Kim et al. [24] created Bristle Maps that put bristles perpendicular to the linear elements (streets, subway lines) of the map and then encoded multivariate information into the length, density, color, orientation, and transparency of the bristles. Zhang et al. [45] introduced TopoGroups that aggregate spatial data into hierarchical clusters, and show information about geographic clusters on the cluster boundaries. Although TopoGroups summarizes cluster information along the boundary, Zhang et al. noted that users may mistakenly see the visualization as representing local statistics near the boundary. In subsequent work, Zhang et al. [46] proposed TopoText that replaces the boundaries using oriented text.
Visual encoding of lines and boundaries has been widely used in visualizing data uncertainty [7, 18]. Cedilnik and Rheingans [7] overlaid a regular grid on the map and then stylized the grid edges using blur, jitter, and wave. Data uncertainty has also been mapped to contour lines, where uncertainty is encoded in line color, thickness, and dash frequency [31]. Line stylization has also been used in cartograms. Görtler [18] proposed the bubble treemap, which represents uncertainty information using wavy circle boundaries, with wave frequency and amplitude varied based on the uncertainty. Patterson and Lodha [32] encoded five socio-economic variables simultaneously on a world map using country fill color, glyph fill color, glyph size, country boundary color, and cartogram distortion.
## 3 VISUAL ENCODING
In this section we describe five contour-based designs (Figure 4) for encoding geospatial information with four attributes: A, B, C, and D. We assume that all the attributes are numeric and positive. We create a set of contour lines using A, and then encode the attributes B, C, and D along the contour lines of A using visual features.

Figure 4: Encoding multivariate information using Parallel Lines, Color Blending, Pie, Thickness-Shade and Side-by-Side.
Rationale: For encoding, we used visual features that are preattentive [42] and intuitive to interpret, or have been used in prior research [33], e.g., line thickness, monochromatic color scheme, and pie slice. Most of our designs are based on the notion of channel separability [21], but we also kept color blending as it has often been used in the context of correlation analysis in geospatial data [15, 22, 37].
Design 1 (Parallel Lines): This design maps B, C, and D into three lines with distinct colors. The lines for B and C lie on opposite sides of the contour line of A, and the line for D follows the contour line of A. The data values are encoded using line width (between 0 and $w$), and the value of the attribute is linearly mapped to the range $[0, w]$. If D's value is 0, the base contour line of A becomes visible.
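
As a concrete illustration, the linear width mapping can be sketched as follows; the function name and the example attribute range are ours, not from the paper.

```python
def parallel_line_width(value, vmin, vmax, w):
    """Linearly map an attribute value in [vmin, vmax] to a stroke
    width in [0, w]. A value at vmin collapses the line to width 0
    (for D, this exposes the base contour line of A)."""
    t = (value - vmin) / (vmax - vmin)  # normalize to [0, 1]
    return t * w

# Example: attribute range 10..50 mapped to a maximum width of 4 px.
assert parallel_line_width(10, 10, 50, 4) == 0.0
assert parallel_line_width(30, 10, 50, 4) == 2.0
```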
Design 2 (Color Blending): This design encodes B and C with distinct colors, and then blends them on the contour line of A. The attribute D is mapped to the width of the contour line. Note that since the contour line of A has a non-zero width $u$, the values of D are mapped to the linewidth range $[u, w]$. Consequently, B and C remain visible even when D is 0.
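
A minimal sketch of the two pieces of this design, assuming 8-bit RGB colors and the standard multiply blend rule (per-channel product rescaled by 255), which is what CSS `mix-blend-mode: multiply` computes; the function names are illustrative.

```python
def multiply_blend(rgb_b, rgb_c):
    """Blend the colors for B and C with a per-channel multiply,
    as in CSS mix-blend-mode: multiply (8-bit channels)."""
    return tuple(a * b // 255 for a, b in zip(rgb_b, rgb_c))

def blended_width(d_norm, u, w):
    """Map a normalized D value to a stroke width in [u, w]; the
    non-zero floor u keeps the blended B/C color visible at D = 0."""
    return u + d_norm * (w - u)

# Red (B) over white stays red; red over blue multiplies to black.
assert multiply_blend((255, 0, 0), (255, 255, 255)) == (255, 0, 0)
assert multiply_blend((255, 0, 0), (0, 0, 255)) == (0, 0, 0)
assert blended_width(0.0, 2, 10) == 2.0
```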
Design 3 (Pie): This design encodes B, C, and D using pie slices of distinct colors, and puts them together to create a pie icon. The only difference from a pie chart is that the slices for B, C, and D need not fill the full circle. The pie icons are placed successively along the contour line of A. The pie slices for B, C, and D start at $0^{\circ}$, $120^{\circ}$, and $240^{\circ}$ (taking the top as $0^{\circ}$), and each can grow clockwise to cover an angle of up to $120^{\circ}$. An attribute value is encoded into the angle covered by the corresponding pie slice.
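
The slice geometry reduces to a simple angle computation; a sketch, assuming attribute values normalized to [0, 1] (the function name is hypothetical).

```python
def pie_slice(attr_index, value_norm):
    """Return (start_deg, end_deg) of a pie slice, measured clockwise
    from the top (0 deg). Slices for B, C, D (attr_index 0, 1, 2)
    start at 0, 120, and 240 deg and sweep at most 120 deg."""
    start = attr_index * 120
    sweep = value_norm * 120  # value encoded as the covered angle
    return start, start + sweep

# B at full value fills its whole sector; D at half value sweeps 60 deg.
assert pie_slice(0, 1.0) == (0, 120.0)
assert pie_slice(2, 0.5) == (240, 300.0)
```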
Design 4 (Thickness-Shade): This design represents B and D using two distinct lines. The lines for B and D lie on opposite sides of the contour lines of A, with values encoded using line width. The values of C are encoded using a monochromatic color scheme, where the color appears on B's line. A low C value corresponds to a lighter shade, and a high value to a darker shade. The minimum line width for B is set to a positive threshold $u$, making a range of $[u, w]$ so that C remains visible even when B is 0.
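
The discrete monochromatic shading for C can be sketched as a lightness ramp; the number of shades matches the number of contour intervals (Section 4), and the ramp endpoints (0.9 light down to 0.2 dark) are our own illustrative choice.

```python
def shade_for_value(c_norm, n_shades):
    """Map a normalized C value to one of n_shades discrete lightness
    steps: low C -> lighter shade, high C -> darker shade."""
    step = min(int(c_norm * n_shades), n_shades - 1)
    return 0.9 - 0.7 * step / (n_shades - 1)  # lightness in [0.2, 0.9]

# With 8 shades (8 contour intervals): extremes of the ramp.
assert abs(shade_for_value(0.0, 8) - 0.9) < 1e-9
assert abs(shade_for_value(1.0, 8) - 0.2) < 1e-9
```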
Design 5 (Side-by-Side): This design shows B, C, and D in separate side-by-side views. Each of B, C, and D is encoded using a distinct monochromatic color scheme. The color appears on the contour lines of A. We set the width and height of each Side-by-Side view to $\lceil \sqrt{A/3}\rceil$, where $A$ is the total pixel area of any other design, assuming a square display.
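
The sizing rule is a direct computation; a sketch in which `total_area` stands for the pixel area $A$ of a single-view design.

```python
import math

def side_by_side_view_size(total_area):
    """Side length of each of the three square Side-by-Side views,
    chosen so the three views together cover roughly the same pixel
    area as one single-view design: ceil(sqrt(A / 3))."""
    return math.ceil(math.sqrt(total_area / 3))

# A 600 x 600 single-view design yields three 347 x 347 views.
assert side_by_side_view_size(600 * 600) == 347
```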
## 4 IMPLEMENTATION DETAILS AND DATASETS
The choice of contouring thresholds is application-specific, but in our controlled experiments we used $k$-quantiles as the thresholds. This allows us to reduce visual clutter and to examine the designs across a large number of contouring thresholds. We first computed the contour lines for A, and then further processed these polylines by dividing long line segments uniformly to create fine-grained polygonal chains. We then encoded the attributes by interpolating the values at the endpoints of these tiny segments. Figure 5 illustrates the parameters used for the designs. Here $b$, $c$, and $d$ denote the normalized B, C, and D values, respectively, and $t$ is a thickness factor used to linearly map the attribute values to the input line-thickness range. The number of discrete shades in the perceptual color scale depends on the number of contour intervals. For blending, we used CSS mix-blend-mode; the scheme Multiply was chosen in a pilot study comparing three candidate schemes: Multiply, Darken, and Difference.
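
The preprocessing above (quantile thresholds, then uniform subdivision of long contour segments so attribute values can be interpolated at fine-grained endpoints) can be sketched in plain Python; the linear-interpolation quantile rule and the function names are our own, not necessarily the paper's exact implementation.

```python
import math

def quantile_thresholds(values, k):
    """k-quantile contouring thresholds: the k-1 cut points that split
    the sorted data into k intervals of roughly equal population."""
    s = sorted(values)
    cuts = []
    for i in range(1, k):
        pos = i * (len(s) - 1) / k      # fractional index of the quantile
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        cuts.append(s[lo] + (pos - lo) * (s[hi] - s[lo]))
    return cuts

def subdivide(p, q, max_len):
    """Split segment p->q into pieces no longer than max_len, producing
    the fine-grained endpoints at which attributes are interpolated."""
    n = max(1, math.ceil(math.dist(p, q) / max_len))
    return [(p[0] + t / n * (q[0] - p[0]), p[1] + t / n * (q[1] - p[1]))
            for t in range(n + 1)]

# Uniform data 0..99 with k = 4 gives quartile cuts near 25/50/75.
assert quantile_thresholds(range(100), 4) == [24.75, 49.5, 74.25]
# A length-10 segment with max_len 3 is split into 4 pieces of 2.5.
assert subdivide((0, 0), (10, 0), 3.0)[2] == (5.0, 0.0)
```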
Synthetic Data: For each of the four attributes, we created scatterplots consisting of four Gaussian clusters positioned randomly in the four quadrants. Each cluster had 40,000 samples, with randomly varying covariance (2.5-7.5), 2 features (x and y coordinates), and 4/6/8 classes (for 4, 6, and 8 contour intervals). The clusters were then interpolated and reshaped such that the point density plot for each cluster takes the shape of a peak or valley.
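
A minimal generator in this spirit, with illustrative names; the isotropic `sigma` and the jitter range are stand-ins for the paper's randomly varying covariance and random quadrant placement.

```python
import random

def gaussian_cluster(cx, cy, sigma, n):
    """Sample n 2-D points from an isotropic Gaussian at (cx, cy); the
    interpolated point density of such a cluster forms a peak (or,
    when negated, a valley) on the map grid."""
    return [(random.gauss(cx, sigma), random.gauss(cy, sigma))
            for _ in range(n)]

def quadrant_centers(extent, jitter=0.2):
    """One randomly jittered cluster center per quadrant of a square
    map with side length `extent`."""
    h = extent / 2
    return [(h / 2 + qx * h + random.uniform(-jitter, jitter) * h,
             h / 2 + qy * h + random.uniform(-jitter, jitter) * h)
            for qx in (0, 1) for qy in (0, 1)]

random.seed(0)
centers = quadrant_centers(100)       # four centers, one per quadrant
points = gaussian_cluster(*centers[0], sigma=5.0, n=1000)
assert len(centers) == 4 and len(points) == 1000
```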
All the clusters of A, B, C, and D in a quadrant overlapped one another, creating various peak-valley combinations. This also allowed us to obtain scenarios where an attribute value increases or decreases across successive contour lines of A. To visualize all possible trends (increasing or decreasing) of B, C, and D, we used all possible peak-valley combinations for these attributes. To imitate real-world topographic map patterns, we varied the cluster overlaps for A.
Real-World Meteorological Data: To test the designs on real-world data, we used meteorological datasets ${}^{1}$. For our study, we extracted four attributes from the dataset: Temperature, Pressure, Soil Moisture, and Albedo, from different geolocations.
Evaluation: A careful choice of the various design parameters is important to achieve optimal readability. Two major factors that can influence the designs are width (the space allocated along the base contour line for encoding B, C, and D) and the number of contouring thresholds for A. Therefore, we first conducted two studies to examine these factors and choose appropriate parameter values for the designs. In the final study, we evaluated the designs based on viewers' task performance. The first two studies were conducted on synthetic data; the final study included both synthetic and real-world data.
## 5 STUDY 1 (CONTOUR WIDTH)
Our first study investigated the effect of contour width on design interpretability, as well as on tasks that use the underlying map.
Intuitively, increasing contour width should make the encoded variables easier to see and interpret, but will also increase occlusion of the map; in addition, wide contours may also overlap each other, depending on the density and shape of the contours. To investigate this trade-off, we set the number of contour intervals to 8 (i.e., 7 contouring thresholds), and then determined a range of widths to explore for each design. We used 8 contour intervals because this allows designers a reasonable spectrum of design choices, and gives us a reasonable range to investigate in Study 2 (described below).
Table 1 illustrates the width ranges used in Study 1. We chose minimum and maximum widths for each design based on informal testing with each design's encoding. The minimum width is determined by the number of pixels required to create the design and make the variation in the attributes noticeable. For example, Parallel Lines requires 9 pixels to encode the variation (low, mid, high) for each of the three attributes. The maximum width corresponds to the case when the successive linear elements are about to overlap.
---
${}^{1}$ anonymized
---

Figure 5: Illustration of the implementation details for the different designs.
Table 1: Different widths for Study 1
<table><tr><td rowspan="2">Design</td><td colspan="3">Width Alternatives</td></tr><tr><td>Minimum</td><td>Median</td><td>Maximum</td></tr><tr><td>Parallel Lines</td><td>9</td><td>-</td><td>12</td></tr><tr><td>Color Blending</td><td>4</td><td>7</td><td>10</td></tr><tr><td>Pie</td><td>8</td><td>-</td><td>10</td></tr><tr><td>Thickness-Shade</td><td>8</td><td>-</td><td>10</td></tr><tr><td>Side-by-Side</td><td>1</td><td>2</td><td>4</td></tr></table>
We also selected a median width when there was enough difference between the minimum and maximum that the median would be at least two pixel units from the extremes. Hence we only have a median width for Color Blending and Side-by-Side (e.g., for Parallel Lines, the encoding for B, C, and D needs the same width, so the next possible width choice after 9 is 12).
### 5.1 S1: Participants, Data, and Tasks
We ran a crowdsourced study on Amazon Mechanical Turk (AMT) [12]. To be eligible, a participant needed to pass a color perception test (Ishihara test [11]), run the study on a desktop computer, reside in North America, and have at least an 80% approval rate on AMT. We recorded 63 complete responses (32 male, 31 female, median age range 30-39).
All experimental tasks were created using synthetic data. We tested the 5 designs described earlier, with the width choices determined for each design (12 in total; see Table 1). Participants completed 57 tasks (4 tasks for each width that involved interpreting the variables encoded in the design, plus one additional task for three of the widths that involved reading the background map).
Rationale for Tasks: We chose the tasks to be general enough so that they can be applied in a variety of scenarios, and by considering use cases illustrated in Section 1. We deliberately designed 3-variable tasks since our knowledge of line stylization is most limited in this case. In our four interpretation tasks (Table 2), one involved identifying values, two involved looking for trends, and one involved comparing trends (e.g., Figure 2). For tasks 1 and 3, we marked four contour sections on the design (Figure 6 (left)), and participants selected the option that matched the requested combination or trend. For tasks 2 and 4, we drew four lines across the contours, and participants selected the option that best identified a specific trend (e.g., Figure 6 (right)).
The background map reading task used the Color Blending design with three of the widths (4, 8, and 12). In this task, icons were placed on the underlying map, and participants were asked to count the icons and select an answer from 4 options. We chose Color Blending as the representative design for the background task because it has three width choices that are substantially different from each other. The icons were $8 \times 8$ pixels. The number of icons ranged from 18 to 22, and they were placed randomly on the map; in some cases the icons were therefore partially hidden by the contour lines.
Table 2: Tasks with domains for Study 1
<table><tr><td>ID</td><td>Task</td><td>Domain</td></tr><tr><td>1</td><td>Select the marked contour region that best represents the following combination: high B, low C, and high D</td><td>Compare different marked contour regions</td></tr><tr><td>2</td><td>Consider the contour regions intersected by the lines. Select the directed line that best represents the following trend: B and D both increase, and C decreases</td><td>Interpret trends across contour lines</td></tr><tr><td>3</td><td>Select the marked contour region that, when moving clockwise, best represents the following trend: B decreases, D increases, and C stays the same</td><td>Interpret trends along a contour line</td></tr><tr><td>4</td><td>Count the number of lines that show the following: B and C have the opposite trend to D</td><td>Identify similar/opposite trends across contour lines</td></tr></table>

Figure 6: Contour-region task (left); Trend task (right)
S1 Hypotheses: We hypothesized that as width increases, accuracy will increase and completion time will decrease ($h_1$). We also hypothesized that the influence of width will be more noticeable for Parallel Lines, Pie, and Side-by-Side than for the other designs ($h_2$), because Color Blending and Thickness-Shade could be more difficult to interpret for inexperienced users. Finally, we hypothesized that for the background task, increasing contour width will lead to lower accuracy and higher completion time ($h_3$).
### 5.2 S1: Procedure
Participants completed an informed consent form, were shown a description of the designs, and were given a set of practice tasks to complete. After each practice task, the participant was told whether the response was correct, and was given an explanation of the correct answer with a brief justification. Participants then completed the 57 tasks as described above. After each design, participants completed a NASA-TLX-style effort questionnaire [19], and at the end of the study, they rated their familiarity with visualization interfaces and their preferences for each of the widths. Participants were asked to complete the tasks as quickly and accurately as possible. Each task started with a 'start' button, and ended when the participant selected one of the multiple-choice answer options and pressed 'next'. Before starting a new task, participants were shown a reminder that they could rest before continuing.
The study used a within-participants design, with contour width as the independent variable (considered separately for each design); dependent variables were accuracy, completion time, and subjective effort scores. Designs and tasks were presented in random order (sampling without replacement).
### 5.3 S1: Results
We applied additional filters to test whether participants were legitimately attempting the tasks (e.g., checking for inconsistent answers and large time gaps in their surveys). After filtering, we had 44 participants (20 male, 24 female) with a median age range of 31-39.
S1: Interpretation tasks: For the four interpretation tasks involving the B, C, and D attributes, data were analyzed using repeated-measures ANOVAs for each design (because each design used a different set of widths); Bonferroni corrected paired t-tests were used for follow-up comparisons. Figure 7 (left) shows that accuracies increased slightly as contour width increased, and Figure 7 (middle) shows that completion times decreased overall as width increased.
We found significant effects of width on accuracy for the Parallel Lines design, and on completion time for Color Blending, Pie, and Side-by-Side. No effect of width was found for Thickness-Shade. For Parallel Lines (using widths 9 and 12), we found a significant effect of width on accuracy ($F_{1,43} = 5.4$, $p < .05$), with width 12 having higher accuracy than width 9. For Color Blending (widths 4, 7, 10), we found a significant effect of width on completion time ($F_{2,86} = 8.09$, $p < .05$); Figure 7 (middle). Post-hoc t-tests showed that width 10 was faster than both width 7 and width 4 (all $p < .05$). For Pie (widths 8 and 10), we found a significant effect of width on completion time ($F_{1,43} = 9.46$, $p < .05$); width 10 was faster than width 8. For Side-by-Side (widths 1, 2, 4), we found a significant effect of width on completion time ($F_{2,86} = 13.46$, $p < .05$). Post-hoc t-tests showed widths 2 and 4 to be faster than width 1 ($p < .05$).
S1: Background Task: The background icon-counting task used the Color Blending design with widths 4, 8, and 12. We found a significant effect of width on accuracy ($F_{2,86} = 121.76$, $p < .05$). Post-hoc t-tests showed significant differences among all three widths ($p < .05$). As shown in Figure 7 (right), the mean accuracy for width 4 was higher than for width 8, which was in turn higher than for width 12. There was no effect of width on completion time.
S1: Effort and Preference: We asked participants to rate their mental effort, overall effort, frustration, and perceived success with each design. Friedman tests showed significant differences on all questions (all $p < .005$), with Parallel Lines and Side-by-Side rated better than the other designs. The width preference question revealed higher user preference for the maximum (50% of participants) and median (41%) widths.
### 5.4 S1: Discussion
Increased contour widths for interpretation tasks led to improved completion time in three of the designs, and improved accuracy for one design, partially supporting hypothesis $h_1$. We did not find any significant effect of width for Thickness-Shade, which partially supports our hypothesis $h_2$ that the effects of width would be more obvious for some designs. Our results for the background task (an effect of width on accuracy, but not on completion time) partially support hypothesis $h_3$. Overall, the fact that there was only a minor effect of reduced width on interpretability (particularly for accuracy) means that width can often be safely reduced in scenarios where the visibility of the background is critical.
Based on these findings we chose to use the maximum width for each design in further studies. In the following section, we explore the influence of contour intervals, which is another important element of a contour plot.
## 6 STUDY 2 (CONTOUR INTERVALS)
A higher number of contour intervals increases both the number of visual elements in the design and the degree of background occlusion. Increasing the number of contours, however, also provides more data points for the other variables visualized on the contour, and so may increase the interpretability of these variables. This study explores this trade-off using a study design similar to that used above.
### 6.1 S2: Participants, Data, and Tasks
We ran the study on Amazon Mechanical Turk with the same eligibility criteria as in Study 1. We recorded 68 complete responses (40 male, 26 female, 1 non-binary, 1 preferred not to answer), aged 21-60 (median age range 21-29). None of the participants took part in Study 1. The study used the 5 designs described above, each with 3 contour interval alternatives (4, 6, or 8 intervals). Participants completed 60 tasks: the same 4 interpretation tasks from Study 1 for each combination of design and contour interval, plus the background icon-counting task.
For analysing the interpretation tasks, the study used a within-participants design with three factors: Design (the five designs described above), Task (the four interpretation tasks from Study 1), and Number of Intervals (4, 6, or 8). The main dependent measures were accuracy and completion time; we also collected subjective effort and preference scores.
S2 Hypotheses: We hypothesized that more contour intervals will result in better performance for the tasks that require analysis across contour lines ($h_4$). For the background task, we hypothesized that more contour intervals will lead to lower accuracy and higher completion time ($h_5$), due to increased occlusion.
### 6.2 S2: Procedure
Similar to Study 1, participants went through the eligibility tests, design demonstration, and practice tasks. Then they completed the main study tasks and filled out the TLX-style effort surveys and overall preference questions. The data and tasks were the same as in Study 1, and designs and tasks were presented in random order.
### 6.3 S2: Results
After filtering the participants based on response consistency, we had 46 participants (28 male, 1 non-binary, 17 female) with median age range of 31-39.
S2: Interpretation tasks: We carried out $5 \times 4 \times 3$ RM-ANOVAs (Design $\times$ Task $\times$ Number of Intervals) for both accuracy and completion time, with Bonferroni-corrected t-tests as follow-up. There was a significant main effect of Intervals ($F_{2,90} = 3.69$, $p < .05$) on accuracy. Post-hoc tests showed that 4 and 6 contour intervals had higher accuracy than 8 intervals (see Figure 8). There was also an interaction between Design and Task ($F_{12,540} = 2.09$, $p < .05$). As can be seen in Figure 9, the Pie design had higher accuracy in Tasks 2 and 3 compared to the other designs. There was no main effect of Design on accuracy ($F_{4,180} = 1.58$, $p = 0.18$). For completion time, there were no main effects of Design or Intervals ($p > .05$), but there was an interaction between Intervals and Task ($F_{6,270} = 3.21$, $p < .05$).
S2: Background Task: We found no main effects of the number of contour intervals on either completion time or accuracy for the icon-counting task.
S2: Subjective Effort and Preferences: Participants rated mental effort, overall effort, frustration, and perceived success with each design. Responses were similar across all designs, and Friedman tests showed a significant difference only for overall effort ($p < .05$), with the Pie design seen as requiring more effort than the others. We also asked participants about their preference: 4 intervals (36%) and 6 intervals (39%) were preferred over 8 (25%).


Figure 7: Study 1: (left and middle) Performance of the designs at different width choices for Tasks 1-4. Lines connect the points of each design for grouping and do not denote continuity of width. (right) Accuracy and completion time for the icon-counting task.


Figure 8: Study 2: Task performance with different contour intervals.



Figure 9: Study 2: Task accuracy for the five designs.
### 6.4 S2: Discussion

Overall, 4 and 6 intervals performed better than 8. One main reason for this result is that fewer contour intervals produce less visual clutter. Although there was a significant effect of contour levels on accuracy only for Task 3, there were significant interactions between contour intervals and task for completion time, which partially supports $h_4$. We observed that higher contour levels slightly reduced mean accuracy for Tasks 1 and 3, where users may have found it difficult to follow a line when other parallel lines were in close proximity.

The interactions show that task performance depends on the combination of design, contour intervals, and task. The significant Design $\times$ Task interaction for accuracy suggests that the relative performance of the designs depends on the task; similarly, the Intervals $\times$ Task interaction for completion time suggests that the effect of contour level depends on the task.

In the next study, we focus on how performance varies by design in both synthetic and real-world datasets. We used 4 contour intervals for all the designs.
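As a minimal sketch of the fixed-interval setting used here (an illustration under our own assumptions, not the paper's implementation), the contouring thresholds for a base attribute can be derived by splitting its value range evenly:

```python
# Derive evenly spaced contour thresholds for a base attribute field.
import numpy as np

def contour_levels(values, n_intervals):
    """Thresholds that split the value range of a base attribute into
    n_intervals bands; returns the n_intervals - 1 interior thresholds."""
    lo, hi = float(np.min(values)), float(np.max(values))
    # n_intervals bands need n_intervals + 1 boundary values; drop the
    # two endpoints, since no contour line is drawn at min or max.
    return np.linspace(lo, hi, n_intervals + 1)[1:-1]

# Example: a synthetic base-attribute field split into 4 contour intervals.
field = np.random.default_rng(2).uniform(0.0, 100.0, size=(64, 64))
levels = contour_levels(field, 4)
print(levels)
```

With 4 intervals this yields 3 interior contour lines, which matches the intuition that more intervals put more (and more closely spaced) lines on the map.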
## 7 STUDY 3 (DESIGN COMPARISON)

### 7.1 S3: Participants, Data, and Tasks

We ran the study on Amazon Mechanical Turk with the same eligibility criteria as in Study 1. We recorded 78 complete responses (41 male, 33 female, 2 non-binary, 1 preferred not to answer), with a median age range of 30-39. None of the participants had taken part in Study 1 or 2.
We used two datasets for this study (one synthetic and one from a real-world scenario). Participants completed 6 different tasks for each design and dataset combination, resulting in 60 tasks. Of the 6 tasks, 4 were similar to those in Studies 1 and 2. We added 2 tasks (Table 3) to examine whether users are able to interpret the extent of value changes of a single attribute (Task 5) and to estimate the difference between two attributes (Task 6).

Table 3: Tasks with domains for Study 3

<table><tr><td>ID</td><td>Task</td><td>Domain</td></tr><tr><td>5</td><td>Select the marked contour region that has the maximum change in (a given attribute)</td><td>Estimate value changes along a contour line</td></tr><tr><td>6</td><td>Select the marked contour region that has the minimum difference between (a given pair of attributes)</td><td>Estimate value difference along a contour line</td></tr></table>
Similar to Study 1, participants went through the eligibility tests, design demonstrations, and practice tasks. They then completed the main study, the effort survey, and the preference questions (in this study, participants stated their preference for one of the designs after each task, as well as overall).

### 7.2 S3: Procedure

The study used a within-participants design with three factors: Design (the five designs described above), Task (the six interpretation tasks), and Dataset (Synthetic or Real-world). The main dependent measures were accuracy and completion time; we also collected subjective effort and preference scores. Designs and tasks were presented in random order (sampling without replacement).
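The randomized presentation scheme above can be sketched as follows; the design and dataset labels are stand-in names for illustration.

```python
# Sketch of the presentation-order scheme: each participant sees every
# Design x Task x Dataset combination once, in randomized order
# (i.e., sampling without replacement from the full set of trials).
import itertools
import random

designs = ["ParallelLines", "ColorBlending", "Pie", "ThicknessShade", "SideBySide"]
tasks = list(range(1, 7))
datasets = ["synthetic", "real"]

trials = list(itertools.product(designs, tasks, datasets))  # 5 * 6 * 2 = 60
random.Random(42).shuffle(trials)  # fixed seed here only for reproducibility

print(len(trials), trials[0])
```

Shuffling the full cross-product gives every participant all 60 trials while spreading order effects across conditions.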
### 7.3 S3: Results

After filtering based on response consistency, we had 54 participants (30 male, 22 female, 2 preferred not to answer) with a median age range of 31-39; 45 participants reported familiarity with data visualization interfaces.


Figure 10: Study 3: Overall performance of the different designs.
We carried out $5 \times 6 \times 2$ RM-ANOVAs (Design $\times$ Task $\times$ Dataset) for both accuracy and completion time, with Bonferroni-corrected t-tests as follow-up. There was a significant main effect of Design ($F_{4,212} = 19.2$, $p < .001$) on accuracy. Post-hoc t-tests showed that Parallel Lines, Pie, and Side-by-Side were significantly more accurate than Color Blending and Thickness-Shade (Figure 10). There was also a main effect of Dataset ($F_{1,53} = 13.8$, $p < .001$): participants were more accurate with the real-world dataset (66%) than with the synthetic dataset (59%).

There were also significant interactions between Design and the other factors. First, there was a Design $\times$ Dataset interaction ($F_{4,212} = 10.7$, $p < .001$). As shown in Figure 11, the Color Blending design had substantially lower accuracy for the synthetic data compared to the other designs, and only Parallel Lines was equally accurate with both datasets. Second, there was a Design $\times$ Task interaction ($F_{20,1060} = 5.01$, $p < .001$). Figure 11 shows substantial differences in the tasks depending on the design: for example, the accuracy of the Color Blending design was substantially lower in Tasks 1 and 6, and the accuracy of Thickness-Shade was lower for Task 6.

For completion time, there was no main effect of Design ($p > .05$), but there were interactions between Design and Dataset ($F_{4,212} = 2.91$, $p < .05$) and between Design and Task ($F_{20,1060} = 2.17$, $p < .001$).
Table 4: Study 3: Design Preference Survey

<table><tr><td rowspan="2">Design</td><td colspan="6">Average Preference Scores</td><td rowspan="2">Overall</td></tr><tr><td>Task 1</td><td>Task 2</td><td>Task 3</td><td>Task 4</td><td>Task 5</td><td>Task 6</td></tr><tr><td>Parallel Lines</td><td>2.28</td><td>2.5</td><td>2.48</td><td>2.37</td><td>2.57</td><td>2.74</td><td>2.85</td></tr><tr><td>Color Blending</td><td>1.85</td><td>1.83</td><td>1.87</td><td>1.57</td><td>1.76</td><td>1.93</td><td>1.93</td></tr><tr><td>Pie</td><td>1.78</td><td>1.87</td><td>2.24</td><td>1.72</td><td>2.06</td><td>2.02</td><td>2.15</td></tr><tr><td>Thickness-Shade</td><td>2.04</td><td>2.09</td><td>2.09</td><td>1.87</td><td>2.23</td><td>2.03</td><td>2.17</td></tr><tr><td>Side-by-Side</td><td>2.69</td><td>2.74</td><td>2.85</td><td>2.37</td><td>2.69</td><td>2.59</td><td>2.93</td></tr></table>
S3: Subjective Effort and Preferences: We again asked participants to rate mental effort, overall effort, frustration, and perceived success with each design. Friedman tests showed significant differences for all questions ($p < .05$), with Parallel Lines and Side-by-Side scoring better than the other designs. We also asked participants to rate the designs on a 0-4 scale (0: 'not preferred' to 4: 'highly preferred'), both for each task and overall. Mean scores are presented in Table 4 (higher values are better). Parallel Lines and Side-by-Side received the top scores for most tasks and were the two most-preferred designs overall.
### 7.4 Overall Discussion

The main finding of the third study is that all of the designs were successful for at least some of the tasks, and that the designs were similar in their performance, with the exception of Color Blending, which showed reduced accuracy compared to the other designs for Tasks 1 and 6. The study also clearly showed that integrating multiple variables into a single contour line results in visualizations that users can interpret successfully, as successfully as separate individual presentations (Side-by-Side). This is an important result for situations where designers need to provide a single larger view rather than divide the available space into pieces, as a side-by-side presentation requires.

Overall, two designs, Parallel Lines and Side-by-Side, performed best and had high preference scores. Both designs have separate encoding space for all the variables; in addition, the encoding for the different attributes is similar, making them intuitive to read without close inspection of the legend. Side-by-Side had high preference scores for all tasks except Task 6: in this task, the higher mean preference for Parallel Lines is likely due to its symmetric encoding of all the attributes, which makes value differences easier to estimate, whereas with Side-by-Side users need to compute the difference by inspecting two separate views.

Interestingly, however, both the Parallel Lines and Side-by-Side designs have space constraints (Parallel Lines in terms of contour width, and Side-by-Side in terms of display space). For scenarios where a single view is required and the visibility of the background is important, neither of these designs may be feasible. In these cases, the Pie design appears to be a reasonable compromise because it had good accuracy and takes less space.

The Color Blending and Thickness-Shade designs both performed poorly on at least one task. This could be due to participants' unfamiliarity with the encodings, but the visual variables used in these designs may also be more difficult to interpret overall. In addition, the encoding in these two designs was not symmetric, unlike in the other designs. Problems in Task 1 (for Color Blending) may have resulted from the need to estimate attribute value combinations, where interpretation likely demanded a close inspection of the legend. Beyond the possibility of misinterpretation, this may have increased cognitive load or reduced effort for tasks using these designs. The same reasoning holds for the poor performance of Thickness-Shade in Task 6, where estimating the value difference between a pair of attributes encoded in different features (thickness and color shade) requires a careful reading of the legend.

Our results showed better performance with the real-world data than with the synthetic dataset (Figure 11), which may be due to differences in the underlying data distributions. The synthetic dataset covered almost all possible trend combinations for the various attributes, so the corresponding visualizations contained highly varied feature combinations; in contrast, the visualizations generated from the real-world data had fewer variations. Visualizations of real-world datasets are thus likely to be visually simpler, leading to improved accuracy and completion time. These differences partially explain the significant interactions among design, dataset, and task in accuracy, and the interaction between design and dataset in completion time.
Based on the study results, we formulated a table of design recommendations (Table 5) that summarizes the preferred design choices for various tasks over three environments: general use, time-sensitive interpretation, and high accuracy requirements. The table shows some strengths for particular designs that differ from the overall discussion above: for example, if the task requires quick estimation of trends along a contour line, then Color Blending and Thickness-Shade may be the best design options.



Figure 11: Task performance in Study 3, for (top) different designs, and (bottom) different datasets.

Table 5: Design Recommendation Table

<table><tr><td rowspan="2">Domain</td><td colspan="3">Environment</td></tr><tr><td>Time Sensitive</td><td>Accuracy Sensitive</td><td>General</td></tr><tr><td>Compare different contour parts</td><td>Parallel Lines, Pie</td><td>All except Color Blending</td><td>All except Color Blending</td></tr><tr><td>Search for a trend across contour lines</td><td>Parallel Lines, Pie, Side-by-Side</td><td>Pie, Side-by-Side</td><td>Parallel Lines, Side-by-Side, Pie</td></tr><tr><td>Search for a trend along a contour line</td><td>Color Blending, Thickness-Shade</td><td>All</td><td>All</td></tr><tr><td>Identify rate of change of a variable along a contour line</td><td>All except Pie</td><td>All</td><td>All</td></tr><tr><td>Identify the value difference on a contour part</td><td>Parallel Lines, Side-by-Side</td><td>Parallel Lines, Side-by-Side</td><td>Parallel Lines, Side-by-Side, Pie</td></tr></table>
## 8 LIMITATIONS AND FUTURE WORK

Our experience with the designs indicates some limitations in the research that provide opportunities for additional study. First, since we encode the variables at the pixel level, our current implementation does not scale well to large maps. However, GPU-accelerated rendering techniques could be used to overcome this obstacle. Second, as the number of contour intervals grows, the contour lines may sometimes overlap. Finding an adaptive choice of contouring thresholds, or allowing users to interactively choose the base attribute, is therefore a valuable avenue for future research. Third, using existing contours as the basis for additional variables is limited by the density of these contours. Further work is needed on how to represent variables in areas with few contours: for example, adaptive sampling could be used to achieve a minimum contour density across the map, or glyph-based techniques could be used to show important changes that occur between contours. Fourth, since we used colors in many of our designs, the interpretability of the designs could depend on the background map color and texture. Real-world deployments of our technique will therefore benefit from methods that tune color choices to the background map, or from controls that let users choose the opacity of the background map.

In addition, while the crowdsourced surveys were useful in gaining insights about our designs, additional controlled studies in the lab, as well as focus groups with meteorological experts, could provide more information. For example, eye tracking would give more detail on how users visually interact with the different designs. Given the complex interactions among the various factors that we observed, in-depth observation of the designs in realistic tasks could help better explain some of the effects and interactions. Finally, we plan to apply our designs to different real-world datasets and explore different contour-based tasks and scenarios that can be used in real geospatial settings.
## 9 CONCLUSION

Contour plots are widely used, but standard techniques for adding multivariate visualizations onto these plots can clutter the display. To address this problem, we explored how contour lines on a geospatial map can be stylized to encode other attributes in the data. Such a multivariate representation can reduce visual clutter by leveraging the existing contour line space. We designed five types of visual encoding and examined how various contour parameters, such as width and contour levels, influence task performance. Our crowdsourced study results showed that participants were able to perform several types of multivariate data analysis tasks with reasonable accuracy, which reveals the potential of our approach.
## REFERENCES

[1] N. Andrienko and G. Andrienko. Spatial generalization and aggregation of massive movement data. IEEE Transactions on Visualization and Computer Graphics, 17(2):205-219, 2010.

[2] B. Bach, C. Perin, Q. Ren, and P. Dragicevic. Ways of visualizing data on curves. In Proc. of TransImage, pp. 1-14, Edinburgh, United Kingdom, Apr. 2018.

[3] R. Borgo, J. Kehrer, D. H. Chung, E. Maguire, R. S. Laramee, H. Hauser, M. Ward, and M. Chen. Glyph-based visualization: Foundations, design guidelines, techniques and applications. In Eurographics (STARs), pp. 39-63, 2013.

[4] L. A. Bruckner. On Chernoff faces. In Graphical Representation of Multivariate Data, pp. 93-121. Elsevier, 1978.

[5] T. G. Burton, H. S. Rifai, Z. L. Hildenbrand, D. D. Carlton Jr, B. E. Fontenot, and K. A. Schug. Elucidating hydraulic fracturing impacts on groundwater quality using a regional geospatial statistical modeling approach. Science of the Total Environment, 545:114-126, 2016.

[6] J. Cao, Y. Yue, K. Zhang, J. Yang, and X. Zhang. Subsurface channel detection using color blending of seismic attribute volumes. International Journal of Signal Processing, Image Processing and Pattern Recognition, 8(12):157-170, 2015.

[7] A. Cedilnik and P. Rheingans. Procedural annotation of uncertain information. In Proc. of Visualization, pp. 77-84. IEEE, 2000.

[8] S. Christophe, B. Duménieu, J. Turbet, C. Hoarau, N. Mellado, J. Ory, H. Loi, A. Masse, B. Arbelot, R. Vergne, et al. Map style formalization: Rendering techniques extension for cartography. In Proc. of Expressive, pp. 59-68. The Eurographics Association, 2016.

[9] D. H. Chung. High-dimensional glyph-based visualization and interactive techniques. Swansea University (United Kingdom), 2014.

[10] D. H. S. Chung. High-dimensional glyph-based visualization and interactive techniques. PhD thesis, Swansea University, UK, 2014.

[11] J. Clark. The Ishihara test for color blindness. American Journal of Physiological Optics, 1924.

[12] K. Crowston. Amazon Mechanical Turk: A research tool for organizations and information systems scholars. In A. Bhattacherjee and B. Fitzgerald, eds., Shaping the Future of ICT Research: Methods and Approaches, pp. 210-221. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.

[13] M. J. De Smith, M. F. Goodchild, and P. Longley. Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools. Troubador Publishing Ltd, 2007.

[14] D. Dorling, A. Barford, and M. Newman. Worldmapper: The world as you've never seen it before. IEEE Transactions on Visualization and Computer Graphics, 12(5):757-764, 2006.

[15] D. A. Ellsworth, C. E. Henze, and B. C. Nelson. Interactive visualization of high-dimensional petascale ocean data. In 2017 IEEE 7th Symposium on Large Data Analysis and Visualization (LDAV), pp. 36-44. IEEE, 2017.

[16] G. Fuchs and H. Schumann. Visualizing abstract data on maps. In Proc. Eighth International Conference on Information Visualisation (IV 2004), pp. 139-144. IEEE, 2004.

[17] R. Fuchs and H. Hauser. Visualization of multi-variate scientific data. In Computer Graphics Forum, vol. 28, pp. 1670-1690. Wiley Online Library, 2009.

[18] J. Görtler, C. Schulz, D. Weiskopf, and O. Deussen. Bubble treemaps for uncertainty visualization. IEEE Transactions on Visualization and Computer Graphics, 24(1):719-728, 2017.

[19] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, vol. 52, pp. 139-183. Elsevier, 1988.

[20] C. G. Healey and J. T. Enns. Large datasets at a glance: Combining textures and colors in scientific visualization. IEEE Transactions on Visualization and Computer Graphics, 5(2):145-167, 1999.

[21] C. G. Healey and J. T. Enns. Attention and visual memory in visualization and computer graphics. IEEE Transactions on Visualization and Computer Graphics, 18(7):1170-1188, 2012. doi: 10.1109/TVCG.2011.127

[22] C. Hoarau, S. Christophe, and S. Mustière. Mixing, blending, merging or scrambling topographic maps and orthoimagery in geovisualizations. In International Cartographic Conference, 2013.

[23] X. Huang, Y. Zhao, C. Ma, J. Yang, X. Ye, and C. Zhang. TrajGraph: A graph-based visual analytics approach to studying urban network centralities using taxi trajectory data. IEEE Transactions on Visualization and Computer Graphics, 22(1):160-169, 2015.

[24] S. Kim, R. Maciejewski, A. Malik, Y. Jang, D. S. Ebert, and T. Isenberg. Bristle maps: A multivariate abstraction technique for geovisualization. IEEE Transactions on Visualization and Computer Graphics, 19(9):1438-1454, 2013.

[25] A. Leonowicz. Two-variable choropleth maps as a useful tool for visualization of geographical relationship. Geografija, 42:33-37, 2006.

[26] F. Liu and R. W. Picard. Periodicity, directionality, and randomness: Wold features for image modeling and retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(7):722-733, 1996.

[27] L. Lu and H. Guo. Visualization of a digital elevation model. Data Science Journal, 6:481-484, 2007.

[28] A. M. MacEachren. Visualizing uncertain information. Cartographic Perspectives, (13):10-19, 1992.

[29] L. McNabb and R. S. Laramee. Multivariate maps: A glyph-placement algorithm to support multivariate geospatial visualization. Information, 10(10):302, 2019.

[30] S. Y. Mhaske and D. Choudhury. Geospatial contour mapping of shear wave velocity for Mumbai city. Natural Hazards, 59(1):317-327, 2011.

[31] A. Pang. Visualizing uncertainty in geospatial data. In Proc. of the Workshop on the Intersections between Geospatial Information and Information Technology, vol. 10, p. 3823, 2001.

[32] M. S. Patterson. Multivariate Spatio-temporal Visualization of Socioeconomic Indicators Using Geographic Maps. University of California, Santa Cruz, 2008.

[33] C. Perin, T. Wun, R. Pusch, and S. Carpendale. Assessing the graphical perception of time and speed on 2D+time trajectories. IEEE Transactions on Visualization and Computer Graphics, 24(1):698-708, 2017.

[34] P. Shanbhag, P. Rheingans, et al. Temporal visualization of planning polygons for efficient partitioning of geo-spatial data. In IEEE Symposium on Information Visualization (INFOVIS), pp. 211-218. IEEE, 2005.

[35] G. Sharma and R. Bala. Digital Color Imaging Handbook. CRC Press, 2017.

[36] H. H. Shenas and V. Interrante. Compositing color with texture for multi-variate visualization. In Proc. of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, pp. 443-446, 2005.

[37] W. Shi, P. Fisher, and M. Goodchild. Spatial Data Quality. CRC Press, 2002.

[38] H. Tamura, S. Mori, and T. Yamawaki. Textural features corresponding to visual perception. IEEE Transactions on Systems, Man, and Cybernetics, 8(6):460-473, 1978.

[39] X. Tong, C. Li, and H.-W. Shen. GlyphLens: View-dependent occlusion management in the interactive glyph visualization. IEEE Transactions on Visualization and Computer Graphics, 23(1):891-900, 2016.

[40] Z. Wang, M. Lu, X. Yuan, J. Zhang, and H. Van De Wetering. Visual traffic jam analysis based on trajectory data. IEEE Transactions on Visualization and Computer Graphics, 19(12):2159-2168, 2013.

[41] M. O. Ward. Multivariate data glyphs: Principles and practice. In Handbook of Data Visualization, pp. 179-198. Springer, 2008.

[42] C. Ware. Information Visualization: Perception for Design. Morgan Kaufmann Publishers Inc., San Francisco, 2nd ed., 2004.

[43] C. Ware. Information Visualization, Second Edition: Perception for Design (Interactive Technologies). 2004.

[44] K. Wu and S. Zhang. Visualizing 2D scalar fields with hierarchical topology. In 2015 IEEE Pacific Visualization Symposium (PacificVis 2015), pp. 141-145. IEEE Computer Society, 2015.

[45] J. Zhang, A. Malik, B. Ahlbrand, N. Elmqvist, R. Maciejewski, and D. S. Ebert. TopoGroups: Context-preserving visual illustration of multi-scale spatial aggregates. In Proc. of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2940-2951, 2017.

[46] J. Zhang, C. Surakitbanharn, N. Elmqvist, R. Maciejewski, Z. Qian, and D. S. Ebert. TopoText: Context-preserving text data exploration across multiple spatial scales. In Proc. of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2018.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/UMerutSI1p/Initial_manuscript_tex/Initial_manuscript.tex
Contour Line Stylization to Visualize Multivariate Information
|
| 2 |
+
|
| 3 |
+
Category: n/a
|
| 4 |
+
|
| 5 |
+
< g r a p h i c s >
|
| 6 |
+
|
| 7 |
+
Figure 1: (left) A geographic map, and the contour plots of four climatic parameters A (albedo), B (soil moisture), C (pressure), and D (temperature) on a part of the map. (right) Four of our five designs that encode B, C, and D along the contour lines of A.
|
| 8 |
+
|
| 9 |
+
§ ABSTRACT
|
| 10 |
+
|
| 11 |
+
Contour plots are widely used in geospatial data visualization as they provide natural interpretation of information across spatial scales. To compare a geospatial attribute against others, contour plots for the base attribute (e.g., elevation) are often overlaid, blended, or examined side by side with other attributes (e.g., temperature or pressure). Such visual inspection is challenging since overlay and color blending both clutter the visualization, and a side-by-side arrangement requires users to mentally integrate the information from different plots. Therefore, these approaches become less efficient as the number of attributes grows.
|
| 12 |
+
|
| 13 |
+
In this paper we examine the fundamental question of whether the base contour lines, which are already present in the map space, can be leveraged to visualize how other attributes relate to the base attribute. We present five different designs for stylizing contour lines, and investigate their interpretability using three crowdsourced studies. Our first two studies examined how contour width and number of contour intervals affect interpretability, using synthetic datasets where we controlled the underlying data distribution. We then compared the designs in a third study that used both synthetic and real-world meteorological data. Our studies show the effectiveness of stylizing contour lines to enrich the understanding of how different attributes relate to the reference contour plot, reveal trade-offs among design parameters, and provide designers with important insights into the factors that influence interpretability.
|
| 14 |
+
|
| 15 |
+
Index Terms: Human-centered computing-Visualization-Visualization techniques; Human-centered computing-Visualization-Visualization design and evaluation methods
|
| 16 |
+
|
| 17 |
+
§ 1 INTRODUCTION
|
| 18 |
+
|
| 19 |
+
Contour plots are widely used to visualize geospatial information on two-dimensional maps. Contour lines and contour intervals are two important features of a contour plot. A contour line (isoline) represents a fixed threshold value and connects map points having that value. A contour interval corresponds to a range of values within the bounds indicated by two successive threshold values.
|
| 20 |
+
|
| 21 |
+
The simplicity and rich information found in contour plots make them a popular choice for infographic posters and in geospatial data analysis $\left\lbrack {5,{13},{30}}\right\rbrack$ . Contour lines provide us with a potentially-useful visualization resource - a set of points that are already on the map. We can leverage these points to show other data attributes along the contour line, which can provide insights into how other geospatial attributes relate to the base attribute. To the best of our knowledge, effectiveness of contour line stylization and the boundaries of human perception to interpret them are not well understood.
|
| 22 |
+
|
| 23 |
+
In this paper, we examine how to stylize contour lines to provide useful additional information to the viewer (Figure 1). We do not entwine ourselves with any domain specific application, but rather attempt to improve our understanding of various facets of contour line stylization. The contour plots may result from geospatial datasets, mathematical surfaces, or even scatterplot densities. However, there exist several motivating scenarios (e.g., analyzing historical change in contour lines or understanding correlation) where contour stylization may be useful. Figure 2 shows such a motivating example based on front prediction in meteorological analysis. The development of a front depends on several factors such as temperature, moisture, wind direction and pressure. Figure 2 (left) shows front prediction by the National Oceanic and Atmospheric Administration (NOAA) Weather Prediction Center (WPC) archive, where the curved lines (red, blue or mixed) correspond to various types of fronts (warm, cold or stationary fronts, respectively). Note that such fronts can be derived using software or by painstaking inspection of the numbers plotted on the map representing various weather parameters. Figure 2 (right) shows our contour stylization for 4 weather variables (pressure, temperature, relative humidity and precipitable water). The contour lines represent isobars (pressure). The temperature, relative humidity and precipitable water are encoded in red, blue, and white lines respectively. Contour line stylization can readily reveal some potential cold fronts (yellow curves 1,3, and 5) and warm fronts (curves 2 and 4), which shows the potential of using contour line stylization alongside traditional visualizations.
Multivariate visualizations that encode data attributes into different preattentive perceptual features of a visual element (glyph) [3, 34, 41], such as size, shape, color, and texture, are typical ways to visualize geospatial information on a map. A well-known limitation of glyph-based visualization is that it clutters the map [10]. While a dense overlay occludes the view of the base map (Figure 3 (left)), a sparse overlay compromises the perception of geospatial connectedness and lacks the gradient information that naturally comes from a contour plot (Figure 3 (right)).
Our Contribution: We consider geospatial data with four attributes - A, B, C, and D - and encode B, C, and D along the contour lines of A. We design five visual encodings and investigate whether users can interpret the attribute values (high, low), trends (increasing or decreasing), and relationships (similar or opposite trends) along a contour line, or across a set of contour lines. Since the encoding position of B, C, and D is determined by A's contour lines, users may want to vary the number of contouring thresholds for A, or use a different base contour plot. Therefore, we describe how to design a synthetic dataset to examine the influence of various design parameters through controlled experiments.
Figure 2: (left) Front detection by NOAA WPC. (right) Front detection using one of our five visualization techniques.
Figure 3: Multivariate visualization with (left) glyphs occludes the map, and (right) grid stylization lacks the gradient information.
We conducted three crowdsourced studies that evaluate our designs. The first two studies reveal how contour width and the number of contour intervals influence the visual interpretability of our designs. The third study used both synthetic and real-world meteorological datasets to assess how the designs and datasets compared in terms of task completion time and accuracy, for common geospatial data analysis tasks. In addition to revealing insights into our designs, our experimental results also suggest that results obtained using synthetic datasets generalize to real-world datasets.
§ 2 RELATED WORK
§ 2.1 MULTIVARIATE VISUALIZATION ON A MAP
Geospatial data are often shown using choropleth maps [25, 28], and contour plots [27]. While choropleth maps and cartograms [14] reveal properties of a region, a contour plot helps to understand the data distribution on a map and find regions with similar properties. Data analysts often use color blending for finding probable correlations between two geospatial variables [15]. However, creating a high-quality bivariate choropleth or contour map requires careful choice of blending colors and textures [24].
Researchers have also attempted to construct trivariate choropleth maps using the CMY color model [6, 35]. Wu and Zhang [44] examined a 4-variate map that captures the contour band information for each variable in thin visual ribbons, and then overlays the ribbons for all four variables using four different colors. Overlaying glyphs [29, 39] or charts [4, 16] on a map is a popular way to visualize geospatial information. Glyphs are often designed to encode data into features that can be perceived through preattentive visual channels [43]. A rich body of visualization design research examines how humans perceive various combinations of geometric, optical, relational, and semantic channels. We refer readers to recent surveys [9, 17, 41] for a detailed review of glyph design. Glyph-based visualizations often must use a careful glyph positioning technique [29, 44], since rendering glyphs for many data points on a map causes overlap.
Various texture metrics such as contrast, coarseness, periodicity, and directionality [26, 38] have been used to visualize multivariate data. Healey and Enns [20] introduced pexels that encode multidimensional datasets into multi-colored perceptual textures with height, density, and regularity properties. Shenas and Interrante [36] showed that color and texture can be combined to meaningfully convey multivariate information with four or more variables on a choropleth map.
§ 2.2 STYLIZATION OF LINES AND BOUNDARIES
Stylized lines naturally appear in the visualization of trajectory data. For example, traffic flow data are often color-coded on road networks as heatmaps [23, 40]. Andrienko et al. [1] extracted characteristic points from car trajectories and aggregated them to create flows between cellular areas to reveal movement patterns in a city. They used stylization to depict various information about the aggregated flows. Huang et al. [23] modeled taxi trajectories using a graph. They stylized the streets based on node centrality and overlaid rose charts to visualize other traffic information. Perin et al. [2, 33] investigated combinations of thickness, a monochromatic color scheme, and tick-mark frequency to encode time and speed along a two-dimensional line. They observed that encoding speed with a color scheme and time using one of the other two features improved user perception.
Geographic cluster visualization and map generation techniques have also considered line stylization. Christophe et al. [8] proposed a pipeline for generating artistic and cartographic maps that integrates linear stylization, patch-based region filling, and vector texture generation. Kim et al. [24] created Bristle Maps that put bristles perpendicular to the linear elements (streets, subway lines) of the map and then encoded multivariate information into the length, density, color, orientation, and transparency of the bristles. Zhang et al. [45] introduced TopoGroups, which aggregate spatial data into hierarchical clusters and show information about geographic clusters on the cluster boundaries. Although TopoGroups summarizes cluster information along the boundary, Zhang et al. noted that users may mistakenly read the visualization as representing local statistics near the boundary. In subsequent work, Zhang et al. [46] proposed TopoText, which replaces the boundaries with oriented text.
Visual encoding of lines and boundaries has been widely used in visualizing data uncertainty [7, 18]. Cedilnik and Rheingans [7] overlaid a regular grid on the map and then stylized the grid edges using blur, jitter, and wave. Data uncertainty has also been mapped to contour lines, where uncertainty is mapped to line color, thickness, and dash frequency [31]. Line stylization has also been used in cartograms. Görtler [18] proposed the bubble treemap, which represents uncertainty information using wavy circle boundaries, varying wave frequency and amplitude based on the uncertainty. Patterson and Lodha [32] encoded five socio-economic variables simultaneously on a world map using country fill color, glyph fill color, glyph size, country boundary color, and cartogram distortion.
§ 3 VISUAL ENCODING
In this section we describe five contour-based designs (Figure 4) for encoding geospatial information with four attributes: A, B, C, and D. We assume that all the attributes are numeric and positive. We create a set of contour lines using A, and then encode the attributes B, C, and D along the contour lines of A using visual features.
Figure 4: Encoding multivariate information using Parallel Lines, Color Blending, Pie, Thickness-Shade and Side-by-Side.
Rationale: For encoding, we used visual features that are preattentive [42] and intuitive to interpret, or that have been used in prior research [33], e.g., line thickness, monochromatic color schemes, and pie slices. Most of our designs are based on the notion of channel separability [21], but we also kept color blending as it has often been used in the context of correlation analysis in geospatial data [15, 22, 37].
Design 1 (Parallel Lines): This design maps B, C, and D to three lines with distinct colors. The lines for B and C lie on opposite sides of the contour line of A, and the line for D follows the contour line of A. The data values are encoded using line width: each attribute value is linearly mapped to the range $[0, w]$. If D's value is 0, the base contour line of A becomes visible.
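The linear width mapping can be sketched as follows (illustrative Python, not the study software; `encode_width` is a hypothetical helper):

```python
def encode_width(value, vmin, vmax, w):
    """Linearly map an attribute value in [vmin, vmax] to a line width in [0, w].

    A width of 0 hides the encoded line, revealing the base contour of A
    (as happens in Design 1 when D is 0).
    """
    if vmax == vmin:
        return 0.0
    return (value - vmin) / (vmax - vmin) * w
```

For example, with a 12-pixel budget, a value halfway through its range maps to a 6-pixel line.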
Design 2 (Color Blending): This design encodes B and C with distinct colors, and then blends them on the contour line of A. The attribute D is mapped to the width of the contour line. Since the contour line of A has a non-zero width $u$, the values of D are mapped to the line-width range $[u, w]$; consequently, B and C remain visible even when D is 0.
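The Multiply blending we use for this design (see Section 4) computes, per channel, the normalized product of the two colors. A minimal sketch, assuming 8-bit RGB tuples (`multiply_blend` is a hypothetical helper mirroring CSS `mix-blend-mode: multiply`):

```python
def multiply_blend(rgb_b, rgb_c):
    """Per-channel multiply blend of two 8-bit RGB colors:
    result = a * b / 255 for each channel, rounded to an integer."""
    return tuple(round(a * b / 255) for a, b in zip(rgb_b, rgb_c))
```

Multiplying by white leaves a color unchanged, while a zero channel in either input stays zero, so the blend is always at least as dark as both inputs.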
Design 3 (Pie): This design encodes B, C, and D using pie slices of distinct colors, and puts them together to create a pie icon. The only difference from a pie chart is that the sum of the values of B, C, and D may not equal the total pie area. The pie icons are placed successively along the contour line of A. The pie slices for B, C, and D start at $0^\circ$, $120^\circ$, and $240^\circ$ (taking the top as $0^\circ$), and can grow clockwise to cover an angle of up to $120^\circ$. An attribute value is encoded as the angle covered by the corresponding pie slice.
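The slice geometry can be sketched as follows (illustrative Python; angles are measured clockwise from the top, and `pie_slice` is a hypothetical helper):

```python
def pie_slice(attr_index, value):
    """Angular extent of the pie slice for attribute B/C/D (attr_index 0/1/2).

    Each slice starts at a fixed 120-degree offset and grows clockwise in
    proportion to the normalized value, up to a full 120-degree sector.
    """
    start = attr_index * 120          # B: 0, C: 120, D: 240 degrees
    return start, start + value * 120
```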
Design 4 (Thickness-Shade): This design represents B and D using two distinct lines that lie on opposite sides of the contour line of A, with values encoded using line width. The values of C are encoded using a monochromatic color scheme applied to B's line: a low C value corresponds to a lighter shade, and a high value to a darker shade. The minimum line width for B is set to a positive threshold $u$, giving a range of $[u, w]$ so that C remains visible even when B is 0.
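A sketch of the combined encoding, assuming normalized $b$ and $c$ values and a precomputed list of discrete shades (hypothetical helper, not the study implementation):

```python
def encode_thickness_shade(b, c, u, w, shades):
    """Encode B as a line width in [u, w] (u > 0 keeps the line visible
    when b == 0), and C as an index into a discrete, light-to-dark
    monochromatic shade list."""
    width = u + b * (w - u)
    shade_index = min(int(c * len(shades)), len(shades) - 1)
    return width, shades[shade_index]
```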
Design 5 (Side-by-Side): This design shows B, C, and D in separate side-by-side views. Each of B, C, and D is encoded using a distinct monochromatic color scheme applied to the contour lines of A. We set the width and height of each Side-by-Side view to $\lceil \sqrt{A/3} \rceil$, where $A$ is the total pixel area of any other design, assuming a square display.
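This sizing rule keeps the three small views' combined pixel area comparable to that of a single-view design. As a quick check of the formula (illustrative Python; `side_by_side_dim` is a hypothetical helper):

```python
import math

def side_by_side_dim(total_area):
    """Side length of each square Side-by-Side view: ceil(sqrt(A / 3)),
    where total_area is the pixel area of a single-view design."""
    return math.ceil(math.sqrt(total_area / 3))
```

For a 300x300 single view (90,000 pixels), each of the three views would be 174x174 pixels.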
§ 4 IMPLEMENTATION DETAILS AND DATASETS
The choice of contouring thresholds is application-specific, but in our controlled experiments we used $k$-quantiles as the thresholds. This reduces visual clutter and lets us examine the designs across a large number of contouring thresholds. We first computed the contour lines for A, and then further processed these polylines by dividing long line segments uniformly to create fine-grained polygonal chains. We then encoded the attributes by interpolating the values at the endpoints of these small segments. Figure 5 illustrates the parameters used for the designs. Here $b$, $c$, and $d$ denote the normalized B, C, and D values, respectively, and $t$ is a thickness factor used to linearly map the attribute values to the input line-thickness range. The number of discrete shades in the perceptual color scale depends on the number of contour intervals. For blending, we used the CSS mix-blend-mode property; the Multiply scheme was chosen in a pilot study comparing 3 candidate schemes: Multiply, Darken, and Difference.
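The two preprocessing steps above, quantile thresholds for A and uniform subdivision of long contour segments, can be sketched as follows (illustrative Python with simple index-based quantiles; these are hypothetical helpers, not the study code):

```python
import math

def quantile_thresholds(values, k):
    """k-quantile contouring thresholds for A: the k-1 cut points that split
    the sorted values into k (roughly) equally populated contour intervals."""
    s = sorted(values)
    n = len(s)
    return [s[(i * n) // k] for i in range(1, k)]

def subdivide(p, q, max_len):
    """Split the segment p-q into uniform pieces no longer than max_len,
    producing fine-grained chain vertices at which attribute values
    can be interpolated."""
    n = max(1, math.ceil(math.dist(p, q) / max_len))
    return [(p[0] + (q[0] - p[0]) * i / n,
             p[1] + (q[1] - p[1]) * i / n) for i in range(n + 1)]
```

With $k = 8$ this yields the 7 contouring thresholds used for 8 contour intervals.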
Synthetic Data: For each of the four attributes, we created scatterplots consisting of four Gaussian clusters positioned randomly in the four quadrants. Each cluster had 40,000 samples, with randomly varying covariance (2.5-7.5), 2 features (x and y coordinates), and 4/6/8 classes (for 4, 6, and 8 contour intervals). The clusters were then interpolated and reshaped so that the point-density plot for each cluster takes the shape of a peak or valley.
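A minimal sketch of the cluster generation, assuming isotropic covariance drawn uniformly from the stated range (the class structure and the peak/valley reshaping are omitted; the quadrant centers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_cluster(center, n=40000, cov_range=(2.5, 7.5)):
    """One synthetic cluster: n 2-D Gaussian samples around a quadrant
    center, with a randomly drawn isotropic covariance in cov_range."""
    cov = rng.uniform(*cov_range)
    return rng.normal(loc=center, scale=np.sqrt(cov), size=(n, 2))

# One cluster per quadrant (hypothetical quadrant centers).
centers = [(5, 5), (-5, 5), (-5, -5), (5, -5)]
clusters = [gaussian_cluster(c) for c in centers]
```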
All the clusters of A, B, C, and D in a quadrant overlapped one another, producing various peak-valley combinations. This also allowed us to obtain scenarios where an attribute value increases or decreases across successive contour lines of A. To visualize all possible trends (increasing or decreasing) of B, C, and D, we used all possible peak-valley combinations for these attributes. To imitate real-world topographic map patterns, we varied the cluster overlaps for A.
Real-World Meteorological Data: To test the designs on real-world data, we used meteorological datasets¹. For our study, we extracted 4 attributes from the dataset: temperature, pressure, soil moisture, and albedo, from different geolocations.
Evaluation: A careful choice of design parameters is important to achieve optimal readability. Two major factors that can influence the designs are width (the space allocated along the base contour line for encoding B, C, and D) and the number of contouring thresholds for A. Therefore, we first conducted two studies to examine these factors and choose appropriate parameter values for the designs. In the final study, we evaluated the designs based on viewers' task performance. The first two studies were conducted on synthetic data, and the final study included both synthetic and real-world data.
§ 5 STUDY 1 (CONTOUR WIDTH)
Our first study investigated the effect of contour width on design interpretability, as well as on tasks that use the underlying map.
Intuitively, increasing contour width should make the encoded variables easier to see and interpret, but will also increase occlusion of the map; in addition, wide contours may also overlap each other, depending on the density and shape of the contours. To investigate this trade-off, we set the number of contour intervals to 8 (i.e., 7 contouring thresholds), and then determined a range of widths to explore for each design. We used 8 contour intervals because this allows designers a reasonable spectrum of design choices, and gives us a reasonable range to investigate in Study 2 (described below).
Table 1 illustrates the width ranges used in Study 1. We chose minimum and maximum widths for each design based on informal testing with each design's encoding. The minimum width is determined by the number of pixels required to create the design and make the variation in the attributes noticeable. For example, Parallel Lines requires 9 pixels to encode the variation (low, mid, high) for each of the three attributes. The maximum width corresponds to the case when the successive linear elements are about to overlap.
¹ anonymized
Figure 5: Illustration of the implementation details for the different designs.
Table 1: Different widths for Study 1
| Design | Minimum | Median | Maximum |
|---|---|---|---|
| Parallel Lines | 9 | - | 12 |
| Color Blending | 4 | 7 | 10 |
| Pie | 8 | - | 10 |
| Thickness-Shade | 8 | - | 10 |
| Side-by-Side | 1 | 2 | 4 |
We also selected a median width when there was enough difference between the minimum and maximum that the median would be at least two pixel units from the extremes. Hence we have a median width only for Color Blending and Side-by-Side (e.g., for Parallel Lines, the encoding for B, C, and D must use the same width, so the next possible width choice after 9 is 12).
§ 5.1 S1: PARTICIPANTS, DATA, AND TASKS
We ran a crowdsourced study on Amazon Mechanical Turk (AMT) [12]. To be eligible, a participant needed to pass a color perception test (Ishihara test [11]), run the study on a desktop computer, reside in North America, and have at least an 80% approval rate on AMT. We recorded 63 complete responses (32 male, 31 female, median age range 30-39).
All experimental tasks were created using synthetic data. We tested the 5 designs described earlier, with the width choices determined for that design (12 in total, see Table 1). Participants completed 57 tasks (4 tasks for each width that involved interpreting the variables encoded in the design, plus one additional task for three of the widths that involved reading the background map).
Rationale for Tasks: We chose the tasks to be general enough so that they can be applied in a variety of scenarios, and by considering use cases illustrated in Section 1. We deliberately designed 3-variable tasks since our knowledge of line stylization is most limited in this case. In our four interpretation tasks (Table 2), one involved identifying values, two involved looking for trends, and one involved comparing trends (e.g., Figure 2). For tasks 1 and 3, we marked four contour sections on the design (Figure 6 (left)), and participants selected the option that matched the requested combination or trend. For tasks 2 and 4, we drew four lines across the contours, and participants selected the option that best identified a specific trend (e.g., Figure 6 (right)).
The background map reading task used the Color Blending design with 3 of the widths (4, 8, and 12). In this task, icons were placed on the underlying map, and participants were asked to count the number of icons and select an answer from 4 options. We chose Color Blending as the representative design for the background task because it has three width choices that are substantially different from each other. The icons were $8 \times 8$ pixels. The number of icons ranged from 18 to 22, and they were placed randomly on the map; therefore, in some cases the icons were partially hidden by the contour lines.
Table 2: Tasks with domains for Study 1
| ID | Task | Domain |
|---|---|---|
| 1 | Select the marked contour region that best represents the following combination: high B, low C, and high D | Compare different marked contour regions |
| 2 | Consider the contour regions intersected by the lines. Select the directed line that best represents the following trend: B and D both increase, and C decreases | Interpret trends across contour lines |
| 3 | Select the marked contour region that, when moving clockwise, best represents the following trend: B decreases, D increases, and C stays the same | Interpret trends along a contour line |
| 4 | Count the number of lines that show the following: B and C have the opposite trend to D | Identify similar/opposite trends across contour lines |
Figure 6: Contour-region task (left); Trend task (right)
S1 Hypotheses: We hypothesized that as width increases, accuracy will increase and completion time will decrease ($h_1$). We also hypothesized that the influence of width will be more noticeable for Parallel Lines, Pie, and Side-by-Side than for the other designs ($h_2$), because Color Blending and Thickness-Shade could be more difficult for inexperienced users to interpret. Finally, we hypothesized that for the background task, increasing contour width will lead to lower accuracy and higher completion time ($h_3$).
§ 5.2 S1: PROCEDURE
Participants completed an informed consent form, were shown a description of the designs, and were given a set of practice tasks to complete. After each practice task, the participant was told whether the response was correct, and was given an explanation of the correct answer with a brief justification. Participants then completed the 57 tasks as described above. After each design, participants completed a NASA-TLX-style effort questionnaire [19], and at the end of the study they rated their familiarity with visualization interfaces and their preference for each of the widths. Participants were asked to complete the tasks as quickly and accurately as possible. Each task started with a 'start' button, and ended when the participant selected one of the multiple-choice answer options and pressed 'next'. Before starting a new task, participants were shown a reminder that they could rest before continuing.
The study used a within-participants design, with contour width as the independent variable (considered separately for each design); dependent variables were accuracy, completion time, and subjective effort scores. Designs and tasks were presented in random order (sampling without replacement).
§ 5.3 S1: RESULTS
We applied additional filters to test whether participants were legitimately attempting the tasks (e.g., inconsistent answers or large time gaps in their surveys). After filtering, we had 44 participants (20 male, 24 female) with a median age range of 31-39.
S1: Interpretation tasks: For the four interpretation tasks involving the B, C, and D attributes, data were analyzed using repeated-measures ANOVAs for each design (because each design used a different set of widths); Bonferroni-corrected paired t-tests were used for follow-up comparisons. Figure 7 (left) shows that accuracy increased slightly as contour width increased, and Figure 7 (middle) shows that completion times decreased overall as width increased.
We found significant effects of width on accuracy for the Parallel Lines design, and on completion time for Color Blending, Pie, and Side-by-Side. No effect of width was found for Thickness-Shade. For Parallel Lines (using widths 9 and 12), we found a significant effect of width on accuracy ($F_{1,43} = 5.4$, $p < .05$), with width 12 having higher accuracy than width 9. For Color Blending (widths 4, 7, 10), we found a significant effect of width on completion time ($F_{2,86} = 8.09$, $p < .05$); Figure 7 (middle). Post-hoc t-tests showed that width 10 was faster than both width 7 and width 4 (all $p < .05$). For Pie (widths 8 and 10), we found a significant effect of width on completion time ($F_{1,43} = 9.46$, $p < .05$); width 10 was faster than width 8. For Side-by-Side (widths 1, 2, 4), we found a significant effect of width on completion time ($F_{2,86} = 13.46$, $p < .05$). Post-hoc t-tests showed widths 2 and 4 to be faster than width 1 ($p < .05$).
S1: Background Task: The background icon-counting task used the Color Blending design with widths 4, 8, and 12. We found a significant effect of width on accuracy ($F_{2,86} = 121.76$, $p < .05$). Post-hoc t-tests showed significant differences among all 3 widths ($p < .05$). As shown in Figure 7 (right), the mean accuracy for width 4 was higher than for width 8, which in turn was higher than for width 12. There was no effect of width on completion time.
S1: Effort and Preference: We asked participants to rate their mental effort, overall effort, frustration, and perceived success with each design. Friedman tests showed significant differences on all questions (all $p < .005$), with Parallel Lines and Side-by-Side rated better than the other designs. The width preference question revealed higher user preference for the maximum (50% of participants) and median (41%) widths.
§ 5.4 S1: DISCUSSION
Increased contour widths for interpretation tasks led to improved completion time in three of the designs, and improved accuracy for one design, partially supporting hypothesis $h_1$. We did not find any significant effect of width for Thickness-Shade, which partially supports our hypothesis $h_2$ that the effects of width would be more obvious for some designs. Our results for the background task (an effect of width on accuracy, but not on completion time) partially support hypothesis $h_3$. Overall, the fact that there was only a minor effect of reduced width on interpretability (particularly for accuracy) means that width can often be safely reduced in scenarios where the visibility of the background is critical.
Based on these findings we chose to use the maximum width for each design in further studies. In the following section, we explore the influence of contour intervals, which is another important element of a contour plot.
§ 6 STUDY 2 (CONTOUR INTERVALS)
A higher number of contour intervals increases both the number of visual elements in the design and the degree of background occlusion. Increasing the number of contours, however, also provides more data points for the other variables visualized on the contour, and so may increase the interpretability of these variables. Our study explores this trade-off using a study design similar to that used above.
§ 6.1 S2: PARTICIPANTS, DATA, AND TASKS
We ran the study on Amazon Mechanical Turk with the same eligibility criteria as in Study 1. We recorded 68 complete responses (40 male, 26 female, 1 non-binary, 1 preferred not to answer), aged 21-60 (median age range 21-29). None of the participants took part in Study 1. The study used the 5 designs described above, each with 3 contour-interval alternatives (4, 6, or 8 intervals). Participants completed 60 tasks: the same 4 interpretation tasks from Study 1 for each combination of design and contour interval, and the background icon-counting task.
For analyzing the interpretation tasks, the study used a within-participants design with three factors: Design (the five designs described above), Task (the four interpretation tasks from Study 1), and Number of Intervals (4, 6, or 8). The main dependent measures were accuracy and completion time; we also collected subjective effort and preference scores.
S2 Hypotheses: We hypothesized that more contour intervals will result in better performance for the tasks that require analysis across contour lines ($h_4$). For the background task, we hypothesized that more contour intervals will lead to lower accuracy and higher completion time ($h_5$), due to increased occlusion.
§ 6.2 S2: PROCEDURE
Similar to Study 1, participants went through the eligibility tests, design demonstration, and practice tasks. Then they completed the main study tasks and filled out the TLX-style effort surveys and overall preference questions. The data and tasks were the same as in Study 1, and designs and tasks were presented in random order.
§ 6.3 S2: RESULTS
After filtering the participants based on response consistency, we had 46 participants (28 male, 17 female, 1 non-binary) with a median age range of 31-39.
S2: Interpretation tasks: We carried out $5 \times 4 \times 3$ RM-ANOVAs (Design $\times$ Task $\times$ Number of Intervals) for both accuracy and completion time, with Bonferroni-corrected t-tests as follow-up. There was a significant main effect of Intervals ($F_{2,90} = 3.69$, $p < .05$) on accuracy. Post-hoc tests showed that 4 and 6 contour intervals had higher accuracy than 8 intervals (see Figure 8). There was also an interaction between Design and Task ($F_{12,540} = 2.09$, $p < .05$). As can be seen in Figure 9, the Pie design had higher accuracy in Tasks 2 and 3 compared to the other designs. There was no main effect of Design on accuracy ($F_{4,180} = 1.58$, $p = 0.18$). For completion time, there were no main effects of Design or Intervals ($p > .05$), but there was an interaction between Intervals and Task ($F_{6,270} = 3.21$, $p < .05$).
S2: Background Task: We found no main effects of the number of contour intervals on either completion time or accuracy for the icon-counting task.
S2: Subjective Effort and Preferences: Participants rated mental effort, overall effort, frustration, and perceived success with each design. Responses were similar across all designs, and Friedman tests showed a significant difference only for overall effort ($p < .05$), with the Pie design seen as requiring more effort than the others. We asked participants about their preference: 4 intervals (36%) and 6 intervals (39%) were preferred over 8 (25%).
Figure 7: Study 1: (left and middle) Performance of the designs at different width choices for Tasks 1-4. Lines connect points from the same design and do not denote continuity of the width. (right) Accuracy and completion time for the icon-counting task.
Figure 8: Study 2: Task performance with different contour intervals.
Figure 9: Study 2: Task accuracy for the five designs.
§ 6.4 S2: DISCUSSION
Overall, 4 and 6 intervals performed better than 8. One main reason for this result is that fewer contour intervals result in less visual clutter. Although there was a significant effect of contour intervals on accuracy only for Task 3, there were significant interactions between contour intervals and task for completion time, which partially supports $h_4$. We observed that higher numbers of contour intervals slightly reduced mean accuracy for Tasks 1 and 3, where users may have found it difficult to follow a line with other parallel lines in close proximity.
The interactions show that task performance depends on the combination of design, contour intervals, and task. The significant interaction between contour intervals and tasks suggests that the best choice of contour intervals for a design depends on the task; similarly, for completion time, the association between design and contour intervals depends on the task.
In the next study, we focus on how performance varies by design in both synthetic and real-world datasets. We used 4 contour intervals for all the designs.
§ 7 STUDY 3 (DESIGN COMPARISON)
§ 7.1 S3: PARTICIPANTS, DATA, AND TASKS

We ran the study on Amazon Mechanical Turk with the same eligibility criteria as in Study 1. We recorded 78 complete responses (41 male, 33 female, 2 non-binary, 1 preferred not to answer), median age range 30-39. None had participated in Study 1 or 2.

We used two datasets for this study (one synthetic and one from a real-world scenario). Participants completed 6 different tasks for each design and dataset combination, resulting in 60 tasks. Of the 6 tasks, 4 were similar to those in Studies 1 and 2. We added 2 additional tasks (Table 3) to examine whether users are able to interpret the extent of value changes of a single attribute (Task 5) and to estimate the difference between two attributes (Task 6).
Table 3: Tasks with domains for Study 3

| ID | Task | Domain |
|----|------|--------|
| 5 | Select the marked contour region that has the maximum change in (a given attribute) | Estimate value changes along a contour line |
| 6 | Select the marked contour region that has the minimum difference between (a given pair of attributes) | Estimate value difference along a contour line |

Similar to Study 1, participants went through the eligibility tests, design demonstrations, and practice tasks. Then they completed the main study, the effort survey, and preference questions (in this study, participants stated their preference for one of the designs after each task, as well as overall).
§ 7.2 S3: PROCEDURE
The study used a within-participants design with three factors: Design (the five designs described above), Task (the six interpretation tasks), and Dataset (Synthetic or Real-world). The main dependent measures were accuracy and completion time; we also collected subjective effort and preference scores. Designs and tasks were presented in random order (sampling without replacement).
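The randomized presentation described above, where each factor is sampled without replacement so every participant sees all 60 Design × Task × Dataset combinations in a shuffled order, can be sketched as follows. This sketch is ours: the factor labels match the paper, but the study's actual randomization code is not available.

```python
import random

# Factor levels taken from the paper's description of Study 3.
DESIGNS = ["Parallel Lines", "Color Blending", "Pie", "Thickness-Shade", "Side-by-Side"]
DATASETS = ["Synthetic", "Real-world"]
TASKS = [1, 2, 3, 4, 5, 6]

def presentation_order(rng):
    """Return all 5 x 2 x 6 = 60 trials for one participant, with each
    factor shuffled by sampling without replacement (every condition
    appears exactly once)."""
    trials = []
    for design in rng.sample(DESIGNS, len(DESIGNS)):
        for dataset in rng.sample(DATASETS, len(DATASETS)):
            for task in rng.sample(TASKS, len(TASKS)):
                trials.append((design, dataset, task))
    return trials
```

Using `random.sample` rather than repeated `choice` guarantees no condition is skipped or repeated within a participant's session.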
§ 7.3 S3: RESULTS

After filtering based on response consistency, we had 54 participants (30 male, 22 female, 2 preferred not to answer) with a median age range of 30-39; 45 participants reported familiarity with data visualization interfaces.
Figure 10: Study 3: Overall performance of the different designs.

We carried out $5 \times 6 \times 2$ RM-ANOVAs (Design $\times$ Task $\times$ Dataset) for both accuracy and completion time, with Bonferroni-corrected t-tests as follow-up. There was a significant main effect of Design ($F_{4,212} = 19.2$, $p < .001$) on accuracy. Post-hoc t-tests showed that Parallel Lines, Pie, and Side-by-Side were significantly more accurate than Color Blending and Thickness-Shade (Figure 10). There was also a main effect of Dataset ($F_{1,53} = 13.8$, $p < .001$): participants were more accurate with the real-world dataset (66%) than with the synthetic dataset (59%).
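With five designs there are $\binom{5}{2} = 10$ pairwise comparisons, so a Bonferroni-corrected follow-up tests each pair against a threshold of $\alpha/10$. A minimal sketch of this correction (ours, with hypothetical p-values; the paper's analysis scripts are not available):

```python
from itertools import combinations

def bonferroni_pairs(designs, p_values, alpha=0.05):
    """Pairwise comparisons with a Bonferroni-adjusted threshold.

    p_values maps each unordered design pair (as a frozenset) to its
    uncorrected p-value from a paired t-test; a pair is flagged
    significant if p < alpha / m, where m is the number of comparisons.
    """
    pairs = list(combinations(designs, 2))
    m = len(pairs)  # 10 pairs for 5 designs
    return {pair: p_values[frozenset(pair)] < alpha / m for pair in pairs}
```

Dividing $\alpha$ by the number of comparisons (equivalently, multiplying each p-value by $m$) controls the family-wise error rate across all ten tests.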

There were also significant interactions between Design and other factors. First, there was a Design $\times$ Dataset interaction ($F_{4,212} = 10.7$, $p < .001$). As shown in Figure 11, the Color Blending design had substantially lower accuracy for the synthetic data compared to the other designs, and only Parallel Lines was equally accurate with both datasets. Second, there was a Design $\times$ Task interaction ($F_{20,1060} = 5.01$, $p < .001$). Figure 11 shows substantial differences in the tasks depending on the design: for example, the accuracy of the Color Blending design was substantially lower in Tasks 1 and 6, and the accuracy of Thickness-Shade was lower for Task 6.

For completion time, there was no main effect of Design ($p > .05$), but there were interactions between Design and Dataset ($F_{4,212} = 2.91$, $p < .005$) and between Design and Task ($F_{20,1060} = 2.17$, $p < .001$).
Table 4: Study 3: Design Preference Survey

| Design | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Overall |
|--------|--------|--------|--------|--------|--------|--------|---------|
| Parallel Lines | 2.28 | 2.5 | 2.48 | 2.37 | 2.57 | 2.74 | 2.85 |
| Color Blending | 1.85 | 1.83 | 1.87 | 1.57 | 1.76 | 1.93 | 1.93 |
| Pie | 1.78 | 1.87 | 2.24 | 1.72 | 2.06 | 2.02 | 2.15 |
| Thickness-Shade | 2.04 | 2.09 | 2.09 | 1.87 | 2.23 | 2.03 | 2.17 |
| Side-by-Side | 2.69 | 2.74 | 2.85 | 2.37 | 2.69 | 2.59 | 2.93 |

S3: Subjective Effort and Preferences: We again asked participants to rate mental effort, overall effort, frustration, and perceived success with each design. Friedman tests showed significant differences for all questions $(p < .05)$, with Parallel Lines and Side-by-Side scoring better than the other designs. We also asked users to rate the designs on a 0-4 scale (0: 'not preferred' to 4: 'highly preferred'), both for each task and overall. Mean scores are presented in Table 4 (higher values are better); Parallel Lines and Side-by-Side received the top scores and were the most-preferred designs overall.
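The Friedman tests used above compare ranked ratings within each participant. As an illustration (our sketch, not the authors' analysis code), the test statistic can be computed directly from within-participant ranks; the resulting $\chi^2_F$ would then be compared against a chi-square distribution with $k-1$ degrees of freedom.

```python
def friedman_statistic(scores):
    """Friedman chi-square for a table of scores: one row per
    participant, one column per condition (e.g., a rating per design).
    Ties within a row receive averaged ranks."""
    n = len(scores)      # participants
    k = len(scores[0])   # conditions
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1  # extend the group of tied values
            avg = (i + j) / 2 + 1  # mean of the 1-based ranks i+1..j+1
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi^2_F = 12 / (n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
```

In practice a statistics library (e.g., `scipy.stats.friedmanchisquare`) would be used; this version only shows what the test measures.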
§ 7.4 OVERALL DISCUSSION
The main finding of the third study is that all of the designs were successful for at least some of the tasks, and that the designs were similar in their performance - with the exception of Color Blending, which showed reduced accuracy compared to the other designs for Tasks 1 and 6. The study also clearly showed that integrating multiple variables into a single contour line results in visualizations that users can interpret successfully - as successfully as separate individual presentations (Side-by-Side). This is an important result for situations where designers need to provide a single larger view rather than divide the available space into pieces, as is needed for a side-by-side presentation.

Overall, two designs (Parallel Lines and Side-by-Side) performed best and had high preference scores. Both designs have separate encoding space for all the variables; in addition, the encoding for different attributes is similar, and both are intuitive to read without close inspection of the legend. Side-by-Side had high preference scores for all tasks except Task 6: in this task, the higher mean preference for Parallel Lines is likely due to its symmetric encoding for all attributes, which makes value differences easier to estimate, whereas for Side-by-Side users must compute the difference by inspecting two separate views.

Interestingly, however, both Parallel Lines and Side-by-Side have space constraints (Parallel Lines in terms of contour width, and Side-by-Side in terms of display space). For scenarios where a single view is required and the visibility of the background is important, neither design may be feasible. In these cases, the Pie design appears to be a reasonable compromise because it had good accuracy and requires less space.

The Color Blending and Thickness-Shade designs both had poor performance on at least one task. This could be due to participants' unfamiliarity with the encoding, but the visual variables used in these designs may also be more difficult to interpret overall. In addition, the encoding in these two designs was not symmetric, unlike the other designs. Problems in Task 1 (for Color Blending) may have resulted from the need to estimate attribute value combinations, whose interpretation likely demanded a close inspection of the legend. In addition to the possibility of misinterpretation, this may have increased cognitive load or reduced effort for tasks using these designs. The same reasoning holds for the poor performance of Thickness-Shade in Task 6, where estimating the value difference between a pair of attributes encoded in different features (thickness and color shade) requires a careful reading of the legend.
Our results showed better performance with the real-world data than with the synthetic dataset (Figure 11), which may be due to differences in the underlying data distributions. The synthetic dataset consisted of almost all possible trend combinations for various attributes, so the corresponding visualizations consisted of highly varied feature combinations; in contrast, the visualizations generated from real-world data had fewer variations. It is likely that visualizations with real-world datasets are visually simpler, leading to improved accuracy and task completion time. These differences partially explain the significant interactions among design, dataset, and task combinations in accuracy, and interaction between design and dataset in completion time.
Based on the study results, we formulated a table of design recommendations (Table 5) that summarizes the preferred design choices for various tasks over three environments-general use, time-sensitive interpretation, and high accuracy requirements. The table shows some strengths for particular designs that are different from the overall discussion above: for example, if the task requires quick estimation for trends along a contour line, then Color Blending and Thickness-Shade may be the best design options.

Figure 11: Task performance in Study 3, for (top) different designs, and (bottom) different datasets.
Table 5: Design Recommendation Table

| Domain | Time Sensitive | Accuracy Sensitive | General |
|--------|----------------|--------------------|---------|
| Compare different contour parts | Parallel Lines, Pie | All except Color Blending | All except Color Blending |
| Search for a trend across contour lines | Parallel Lines, Pie, Side-by-Side | Pie, Side-by-Side | Parallel Lines, Side-by-Side, Pie |
| Search for a trend along a contour line | Color Blending, Thickness-Shade | All | All |
| Identify rate of change of a variable along a contour line | All except Pie | All | All |
| Identify the value difference on a contour part | Parallel Lines, Side-by-Side | Parallel Lines, Side-by-Side | Parallel Lines, Side-by-Side, Pie |

§ 8 LIMITATIONS AND FUTURE WORK

Our experience with the designs indicates some limitations in the research that provide opportunities for additional study. First, since we encode the variables at the pixel level, our current implementation does not scale well to large maps; however, rendering techniques using GPU acceleration could overcome this obstacle. Second, as the number of contour intervals grows, the contour lines may sometimes overlap. Therefore, finding an adaptive choice of contouring thresholds, or allowing users to interactively choose the base attribute, is a valuable avenue for future research. Third, using existing contours as the basis for additional variables is limited by the density of those contours. Further work is needed on how to represent variables in areas with few contours: for example, adaptive sampling could be used to achieve a minimum contour density across the map, or glyph-based techniques could show important changes that occur between contours. Fourth, since we used colors in many of our designs, the interpretability of the designs could depend on the background map color and texture. Real-world deployments of our technique will therefore benefit from methods that tune color choices to the background map, or from controls that let users choose the opacity of the background map.

In addition, while the crowdsourced studies were useful in gaining insights about our designs, additional controlled studies in the lab, as well as focus groups with meteorological experts, could provide more information. For example, eye tracking would give more detail on how users visually interact with the different designs. Given the complex interactions among factors that we observed, in-depth observation of the designs in realistic tasks could help us better understand some of the effects and interactions. Finally, we plan to apply our designs to different real-world datasets and explore different contour-based tasks and scenarios that can be used in real geospatial settings.
§ 9 CONCLUSION

Contour plots are widely used, but standard techniques for adding multivariate visualizations onto these plots can clutter the display. To address this problem, we explored how contour lines on a geospatial map can be stylized to encode other attributes in the data. Such a multivariate representation can reduce visual clutter by leveraging the existing contour line space. We designed five types of visual encoding and examined how contour parameters such as width and contour level influence task performance. Our crowdsourced study results showed that participants were able to perform several types of multivariate data analysis tasks with reasonable accuracy, which reveals the potential of our approach.

papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/Xh_BzLS_3p/Initial_manuscript_md/Initial_manuscript.md
# AmbiTeam: Providing Team Awareness Through Ambient Displays
Category: Research
## Abstract

Due to the COVID-19 pandemic, research is increasingly conducted remotely, without the benefit of the informal interactions that help maintain awareness of each collaborator's work progress. We developed AmbiTeam, an ambient display that shows activity related to the files of a team project, to help collaborators preserve a sense of the team's involvement while working remotely. We found that using AmbiTeam had a quantifiable effect on researchers' perceptions of their collaborators' project prioritization. We also found that use of the system motivated researchers to work on their collaborative projects. This effect relates to the "motivational presence of others," the lack of which is one of the key challenges that make distance work difficult. We discuss how ambient displays can support remote collaborative work by recreating the motivational presence of others.
Keywords: Collaboration; remote work; awareness; ambient display

Index Terms: Human-centered computing-Human-Computer Interaction-Empirical studies in HCI
## 1 INTRODUCTION
With the advent of the COVID-19 pandemic, research is increasingly conducted remotely without the affordances of informal interactions that enhance fluidity and interactivity in teams. Remote collaboration has always faced numerous challenges, such as decreased awareness of colleagues and their context [33] and limited motivational sense of the presence of others [33]. Awareness of one's collaborators is necessary for ensuring that each teammate's contributions are compatible with the collaboration's collective activity [13]. It also plays an essential role in determining whether an individual's actions mesh with the group's goals and progress [13]. The motivational sense of the presence of others complements awareness by producing "social facilitation" effects, like driving people to work more when they are not alone [33].

Similarly, a researcher's perception of their collaborators' effort on a project can profoundly impact collaboration [10]. In particular, researchers tend to feel anxious about the success of their collaboration when they are concerned that competing priorities result in less commitment to the project [10]. The shift to remote work likely exacerbates this challenge, since remote researchers lack awareness of their collaborators' activities.

Together, these issues pose a significant challenge to collaboration. It is essential that we address them, given that the efficacy of science significantly improves when researchers from diverse backgrounds collaborate on a project [9]. We hypothesize that since heightened awareness of a collaborator's research activities might reveal project prioritization, improved awareness could lessen the anxiety caused by uncertainty regarding a collaborator's investment. While various existing systems improve awareness in remote teams [6, 7, 17, 18, 26, 28, 34], no existing solution addresses the challenge of perceived prioritization.
To this end, we developed a system, AmbiTeam (shown in Figure 1) to improve a researcher's awareness of their collaborator's project-related activity. The system tracks and visualizes file changes in user-specified project directories to indicate how much effort or work a collaborator has put in on the project. We performed a user evaluation of the system with ten researchers in co-located and remote collaborations to investigate the effect of ambiently providing project-related activity information on a researcher's work behavior and perception of effort. We found that AmbiTeam had some impact on a researcher's motivation to work on the project as well as perceptions of their collaborators' effort. The key contributions of this paper are:

Figure 1: Example visualization of a team's work-related activity, shown on a tablet with an ambient display in each of our users' workplaces. The visualization shows activity from five fictional teammates using randomly generated data. Each team member has an area graph where each point represents their activity for that day.
- Increased understanding of how to facilitate team awareness
- A deeper understanding of the motivating effect of awareness on work behavior
- New insights into the impact of increased awareness on perceptions of remote collaborators' effort
## 2 PRIOR WORK
We examine studies on awareness-based systems for supporting collaboration as well as existing solutions for unobtrusively providing information via ambient displays.
### 2.1 Awareness-Based Systems

Several technologies have been developed to help remote workers become aware of their collaborators' research activities. For example, tools that inform members of remote teams about the timing of each other's activities and contributions have been shown to affect team coordination and learning [7]. Furthermore, systems that provide real-time, often visual, feedback about team behavior can mitigate "process loss" (e.g., effort) in teams [18]. Some early technology (e.g., [6, 17, 28]) featured permanently open audiovisual connections between locations, with the idea that providing unrestricted face-to-face communication would enable collaborative work as if the researchers were in the same room.
Recently, Glikson et al. [18] created a tool that visualizes effort, which is determined by measuring the number of keystrokes that members of a collaboration make in a task collaboration space. They found that this tool improved both team effort and performance [18]. A number of modern systems have been developed that typically focus on notifications to provide awareness [27] which are generally considered disruptive [2]. Given the importance of reducing "dramatic changes in work habits" [32], it is likely that an effective system needs to be as unobtrusive as possible.
### 2.2 Ambient Displays

In contrast to the methods employed by existing awareness systems, ambient displays are information sources designed to communicate contextual or background information in the periphery of the user's awareness and only require the user's attention when it is appropriate or desired [19]. Methods for conveying information via ambient displays include the use of light levels [11, 22], wind [29], temperature [41], music [4], and art [19]. For example, one of the earliest ambient systems, "ambientRoom", used visual displays of water ripples to convey information about the activities of a laboratory hamster and light patches to indicate the amount of human movement in an atrium [22]. Ambient displays are not limited to immersive environments and can also take the form of standalone media displays that allow multiple people to simultaneously receive information [11]. Applications of ambient displays include educating users about resource (e.g., water [24, 26] and power [20]) consumption, improving driving [12, 35], monitoring finances [37], and assisting time management during meetings [31].

Some ambient systems have been developed to support collaboration by tackling the issue of determining availability [1, 8]. One system, "Nimio," used a series of physical toys to indicate the presence and availability of collaborators in separate offices [8]. Toys in one office would cause associated toys in other offices to light up with colored lights when they detected sound and movement, indicating that a collaborator was in their office and communicating whether the collaborator appeared to be busy. Alavi and Dillenbourg [1] placed colored light boxes on tables in a student space that allowed students to indicate their presence, availability, and the coursework they were currently working on, so that any given student could be aware of other students with whom they could collaborate.
Streng et al. [38] used ambient displays to convey information about the quality of collaboration between students working on a group task. In this paper, collaboration performance was measured by evaluating student adherence to a collaboration script that specified different phases and tasks to be carried out by individual team members. Performance information was communicated to the student participants either via a diagram featuring charts and numbers or an ambient art display showing a nature scene featuring trees, the sun or moon, and sometimes clouds and rain.
### 2.3 Research Questions and Study Goals
We hypothesize that promoting awareness by providing up-to-date information about a collaborator's project activities will affect a researcher's perception of their collaborator's effort. To avoid dramatically changing work habits, we pursue an ambient-based approach where information is conveyed without requiring the user's attention. To pursue these goals, we sought to answer the following questions:

RQ1. Can tracking file activity give researchers a sense of their teammates' efforts?
RQ2. Will ambient information on team project activities affect perceptions of collaborators' effort?
RQ3. What effects will providing team project activity information have on work behavior?

## 3 SYSTEM DESIGN
### 3.1 Privacy and Scope
Project effort is difficult to characterize as it includes activities that are impossible to track (e.g. thinking about a project) or are potentially sensitive (e.g. emails, phone calls). In order to respect the privacy of users, we avoid monitoring activities such as phone calls and emails and instead focus on the activity of files in user-specified project directories. This allows AmbiTeam to observe project activities related to the various stages of the research life-cycle identified by prior work [30]. For example, during experimentation, the system will be able to detect changes in electronic lab notebooks and cheat sheets used by researchers [30] as well as data. AmbiTeam will also observe data analysis by tracking changes in analysis code or scripts (also discussed in [30]) as well as generated output. Furthermore, the system will be able to monitor publication preparation by detecting changes in writing-related materials.
### 3.2 Activity Tracking

Activity is detected using a desktop application that monitors specified directories for file creation, deletion, and change events. AmbiTeam first prompts the user to select a directory to be watched and, on the back end, monitors the metadata of the directory's files without viewing the files' contents. Once a file or directory in the watched directory is created, deleted, or changed, the user's ID and the time of the file event are encrypted and sent to a server.
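The metadata-only tracking step can be sketched with a portable polling loop over file modification times. AmbiTeam's actual implementation is not shown in the paper (and a production system would likely use native file-system event APIs), so this stdlib-only version is purely illustrative:

```python
import os

def snapshot(root):
    """Map each file under root to its last-modified time.
    Only metadata (paths and mtimes) is read, never file contents."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file vanished between listing and stat
    return state

def diff_events(old, new):
    """Return (created, deleted, changed) path sets between snapshots."""
    created = set(new) - set(old)
    deleted = set(old) - set(new)
    changed = {p for p in set(old) & set(new) if old[p] != new[p]}
    return created, deleted, changed
```

A monitoring loop would take a snapshot periodically, diff it against the previous one, and report each event's timestamp (with the user's ID) to the server.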
### 3.3 Displaying Activity

The number of activities occurring each day for each user is visualized as a point on an area graph. An area graph for each collaborator is displayed on a tablet, showing each day's cumulative activity in real time. The height of the graph on each day indicates the total amount of activity at that time, and the area of the graph shows the total amount of activity over a two-week window. Activity is normalized across the team to facilitate comparisons between team members. Figure 1 shows an example.
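The paper does not specify the normalization scheme, so as one plausible sketch (ours), each member's daily counts can be scaled by the team-wide maximum daily count, putting all area graphs on a shared 0-1 scale:

```python
def normalize_team(activity):
    """Scale each member's daily event counts by the team-wide maximum
    daily count, so the area graphs are directly comparable (0..1).

    activity: dict mapping member name -> list of daily event counts.
    """
    peak = max((c for counts in activity.values() for c in counts), default=0)
    if peak == 0:
        # No activity anywhere: everything maps to a flat zero line.
        return {m: [0.0] * len(counts) for m, counts in activity.items()}
    return {m: [c / peak for c in counts] for m, counts in activity.items()}
```

Scaling against the team maximum (rather than each member's own maximum) preserves relative differences between members, which is what supports comparisons of effort.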
## 4 METHOD
### 4.1 Participants

To determine whether AmbiTeam facilitates team awareness, we recruited 10 scientists, aged 21 to 33 ($\mu = 27.3$, $\sigma = 3.5$; three female), who are part of four existing collaborations across four institutions in the United States. Each collaboration is labeled A-D. The research area, title, and group of each participant is presented in Table 1. Participants were recruited via inter-departmental email, and our methodology was approved by our institutional review board. The configuration of the teams ranged from fully remote (team A) to fully co-located (teams C and D). Team B had a mixed composition: participants B2 and B3 were co-located, while B1 and B4 were each at different locations. All co-located teams worked in the same offices as their collaborators and reported working closely together.
Table 1: Participant backgrounds.

<table><tr><td>ID</td><td>Research Area</td><td>Title</td></tr><tr><td>A1</td><td>Biological Anthropology</td><td>Post-Doc</td></tr><tr><td>A2</td><td>Vertebrate Paleontology</td><td>Ph.D. Student</td></tr><tr><td>B1</td><td>Computer Vision and Machine Learning</td><td>Master's Student</td></tr><tr><td>B2</td><td>Computational Linguistics</td><td>Post-Doc</td></tr><tr><td>B3</td><td>Computer Vision and Human-Computer Interaction</td><td>Master's Student</td></tr><tr><td>B4</td><td>Human-Computer Interaction</td><td>Ph.D. Student</td></tr><tr><td>C1</td><td>CyberSecurity</td><td>Ph.D. Student</td></tr><tr><td>C2</td><td>CyberSecurity</td><td>Ph.D. Student</td></tr><tr><td>D1</td><td>CyberSecurity</td><td>Ph.D. Student</td></tr><tr><td>D2</td><td>CyberSecurity</td><td>Ph.D. Student</td></tr></table>
Our participants sought to answer a variety of scientific questions, which can be broadly summarized as:

- Understanding Faunal Change: identifying what happens to animals during the major climate event called the Paleocene-Eocene Thermal Maximum (Team A).

- Enable Communicative Mechanisms Between Humans and Computers: bringing together humans' natural language capability and computers' data processing capability to allow peer-to-peer collaboration between humans and computers (Team B).
- Personalized Computer Security: using personal information to accomplish security tasks like authentication. This includes extracting nuanced personal information (e.g., vocal characteristics) from easily obtained information, such as pictures of people's faces. (Teams C & D).
### 4.2 Procedure

Participants were each given a tablet with AmbiTeam's display, had the activity monitor installed on their work computers, and were instructed on how both the activity monitor and the visualization worked. Participants then completed a pre-test where they estimated the amount of effort that each participating researcher, including themselves, was putting into the project, on a scale from 1 to 9, with 1 being "very low" and 9 being "very high." Participants were also asked to explain the reasoning behind their rankings. Over the course of four weeks, on two randomly chosen days a week, participants were asked to repeat this assessment via email. During this time, AmbiTeam's visualization was turned off to prevent participants from consulting it, since the goal was to determine whether the system's use affected their perception, not whether they could read the chart. To minimize visualization downtime, participants were given up to 24 hours to respond with their assessment.
|
| 96 |
+
|
| 97 |
+
At the end of the study, we conducted semi-structured interviews with the participants. By using the semi-structured interview technique, we were able to cover additional topics as they were encountered, reducing the likelihood that important issues were overlooked [25]. When possible, interviews took place at each of the participant's primary workspaces (offices or labs). Participants located at remote locations participated in the interviews over Zoom [21]. Interviews were approximately 30 minutes in duration and were recorded in audio format, then transcribed.
|
| 98 |
+
|
| 99 |
+
Participants were first asked to educate us about the collaborative research that they participated in during the study including their roles on the project(s) and the goal(s) of the research. We then asked participants to discuss their experiences using AmbiTeam as well as any changes they would propose and their likelihood of using the system in the future.
|
| 100 |
+
|
| 101 |
+
### 4.3 Qualitative Data Analysis

We performed a bottom-up analysis of participants' responses by constructing an affinity diagram (a.k.a. the KJ method) [5, 39] to expose prevailing themes. This approach is similar to qualitative coding and follows the same steps for qualitative analysis via coding as outlined by Auerbach and Silverstein [3]. This is an appropriate method for semi-structured interviews, as qualitative coding can result in the same code being applied to different sections of the interview [23]. Moreover, affinity diagramming has seen widespread use for qualitative data analysis over the last 50 years [36].

## 5 RESULTS

Participants' responses to interview questions and bi-weekly assessments provided insight into their experiences with AmbiTeam.

### 5.1 Interactions with the System

Most participants reported briefly looking at the visualization multiple times a day, often because the visualization was placed within their general field of view (although care was taken to ensure that the visualization did not obstruct the view of the participant's workstation). However, participants did not intentionally check the visualization for updates, indicating that the information generally stayed in the background.



Figure 2: AmbiTeam's components shown in A1's workspace. The visualization was placed in a different location in the periphery of A1's attention during the study.

"It wasn't like I checked it intentionally several times a day. It was more of that I leaned back in the chair to think about something and while looking at other things in my desk, I would see it." C1

The information gleaned from the visualization was typically combined with information gathered during communications with collaborators. This information included knowledge about circumstances (e.g., job interviews, other papers and projects), project deadlines and updates, and each researcher's role in the project. In some instances, the fact that collaborators were communicating at all was enough of an indication that those researchers were prioritizing the project. Participant B3, however, based their ratings solely on their communications with their collaborators because they did not trust AmbiTeam.

"I couldn't place enough trust in the system yet to factor in positively or negatively into my perception of prioritization." B3

Most participants explicitly stated that using the system did not interrupt their workflow. This was partly due to the placement of the visualization within the user's workspace. Furthermore, the file tracking software was passive in nature: once the user had selected their directories, no further action was needed. Participant C1 also remarked that the passive nature of the data collection resulted in more information than their usual workflow, because their usual workflow (Git) relies on the user to push information.

### 5.2 Determining Engagement

To determine whether tracking file activity can give teammates a sense of each other's efforts (RQ1), we asked open-ended questions during each bi-weekly assessment and conducted a follow-up interview at the end of the study. We found that participants felt AmbiTeam's monitoring method gave a measure of user engagement.

"Tracking over time as you change it, it's simple so it does give you a measure of whether or not the person is engaged. Or not engaged. So I think it's a good measurement of that" C1

However, participants reported several activities that were not tracked by the system but were integral to their work. In general, these activities were related to collaboration, idea development, and management. Some of the suggested activities are likely fairly easy to take into account, such as tracking the number of files in a directory (e.g., a library of literature for a project), the size of files (e.g., as figures get made and manuscripts and code get written), written meeting minutes, and the number of times a program is run. Others could be tracked by the existing software if users changed their behavior, such as making handwritten notes in a digital notebook as opposed to on physical pieces of paper.

However, many of the suggested activities (e.g., tracking emails, phone calls, internet searches, time spent on the top window of a computer) are difficult to take into account without invading privacy. Several participants stated that they would not want personal data to be tracked unless it was somehow necessary for the team. Even then, participants requested caution when setting up AmbiTeam in order to prevent project-sensitive data from being tracked. For example, during the setup for group D, participants deliberately chose directories that contained metadata and statistics about the participants in their studies but did not contain identifiable data.

Finally, participants believed that for optimal use, the files and activities chosen for monitoring should depend on the context of the user's work. They suggested that some metrics would be better suited to some roles than others. For example, since B4 was running user studies, the length of their files represents the amount of data collected and is more indicative of work than the number of files, which merely reflects the number of participants. Certain file types, such as those automatically created by ArcGIS [16] (a Geographic Information System mapping technology used by A1) and TensorFlow [40] models (a tool for building machine learning models used by B1), are generated in bulk and don't necessarily indicate massive amounts of effort.

### 5.3 Perceptions of Effort

We wanted to know whether AmbiTeam affected researchers' perceptions of their collaborators' effort on a project (RQ2). To do this, we tested whether there is a correlation between average activity levels (as measured by our system) and researchers' perceptions of how much effort their collaborators were putting in. We performed a Pearson's product-moment correlation test on each participant's average displayed activity (activity) and the change in their ratings of their own effort (personal ratings). We found no correlation ($r = 0.09$, $p > 0.05$) between personal ratings and activity. We also performed a Pearson's product-moment correlation test on activity and the change in the ratings assigned to each participant by their collaborators (collaborator ratings). We found a weak positive correlation ($r = 0.22$, $p = 0.011$) between collaborator ratings and activity: as each participant's apparent activity increased, their collaborators' ratings of them increased. In summary, using AmbiTeam did not affect users' reported perceptions of their own effort; however, it did affect users' perceptions of their collaborators' effort.

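For reference, Pearson's product-moment correlation is the covariance of the two variables normalized by the product of their standard deviations. The sketch below computes it in pure Python on made-up illustrative data (the study's actual ratings are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: average displayed activity vs. change in
# collaborator ratings (illustrative values only).
activity = [3, 5, 2, 8, 6, 4]
rating_change = [0, 1, -1, 2, 1, 0]
r = pearson_r(activity, rating_change)
```

In practice the significance of $r$ would also be assessed, e.g., via a t statistic with $n - 2$ degrees of freedom.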
### 5.4 User Behaviors

To determine what effects the provision of team project activity information had on work behavior (RQ3), we asked open-ended questions during each bi-weekly assessment and conducted a follow-up interview at the end of the study. We found that, on the whole, participants did not believe that using the system changed their collaborators' behaviors. However, many reported changing their own behaviors. In some cases, participants changed the way that their work was conducted to boost visibility and ensure that their collaborators knew that they were involved. For example, participant A2 described a time when they were creating a wiki for their project online. Since AmbiTeam was unable to track the changes made to their online wiki, A2 wrote much of the text for the wiki in a text editor that saved changes to a file tracked by the system before uploading the text to the wiki. This ensured that their efforts to update the wiki appeared on the visualization. In addition, several participants mentioned saving their files more frequently so that their changes would register as activity and appear on the visualization.

Many participants reported that AmbiTeam made them feel more motivated to work on their projects. Sometimes this was due to participants noticing a lull in their own activity, which reminded them to work on the project. Motivation was also often attributed to seeing their collaborators' activity.

"Having a view of other people are working hard and then you don't want to be the last one. It's like a challenge." D2

Participant A2 noted that the system had a positive impact due to its effect on motivation and a desire to work effectively.

"Positive, because it helped motivate me to make the project a priority even though it's not the most fun thing to work on." A2

### 5.5 Future Directions and Applications

All participants stated that they would be willing to use AmbiTeam, or a refined version of AmbiTeam, in the future for either professional or casual use. Several participants mentioned a desire to use the system in research collaborations to keep abreast of what their collaborators were up to. For example, participant C1 mentioned using the prior day's activity: "I could glance at [it] as sort of like a morning statistics for yesterday." Another use of the system would be for a project manager to balance the workload across researchers on a project, as described by participant B3: "I probably would want to use it just to see how much work each of my teammates is doing so that the load is balanced out evenly."

Other participants reported that they would use AmbiTeam in a classroom setting, both as students working with group-mates that they don't know well or didn't pick and as professors managing class groups.

"I've had problems in the past ... they didn't do anything until the last week and even then in the last week, you know, I may have built the vast majority of it. They still get the same amount of credit." C1

Several participants also stated that they would use AmbiTeam for personal use. Participant A1 described not being interested in worrying about their collaborators' productivity, but was interested in using the system to take a "long term perspective" and revisit their own project-related activity. The goal would be to have a better understanding of the work that they had done in the past. In a similar vein, participant B2, a self-proclaimed "data junky," expressed an interest in using AmbiTeam to gain deeper insight into their workflow. A1 also disclosed a belief that AmbiTeam could be useful for recent Ph.D. graduates who have transitioned from working solely on their dissertation to managing multiple projects and need to have a better grasp of their priorities. Finally, A2 expressed an interest in using the system with a friend to stay motivated to work.

"In the same way that it's better to go to the gym with a friend because it motivates you, because even on that one day when you really don't feel like going they'll go and then they'll help you get over that hump." A2

Participants also expressed a desire to extend AmbiTeam to support additional tasks. For example, participants conveyed an interest in integrating AmbiTeam with task management systems, allowing users to connect the activity shown on the visualization with specific tasks and goals. Participant C2 also suggested incorporating a messaging system that would allow a user to contact a collaborator when they notice a lull in activity.

"[If] I made some changes that we needed to discuss, that I could just look at my collaborator and just tap ... saying hey, there's something that needs to be discussed." C2

## 6 Discussion

### 6.1 Motivational Presence of Others

Many of the participants reported feeling more motivated and productive while using AmbiTeam. These feelings can likely be attributed to the motivational presence of others [33]. Our participants' responses indicated they were aware of being watched by their teammates, which changed their behavior, as described by B1:

"Because I know we are being tracked, I want to make use of time to work efficiently." B1

Researchers often use the presence of specific teammates in a shared space to guide their work [15]. Similarly, our participants also reported feeling motivated by seeing their collaborators work on the project, as stated by C2:

"Every single time that happened I was like, oh he's working, I should probably work on it too." C2

Unfortunately, these effects often dissipate once the participant no longer has a sense of the presence of their collaborators. Depending on the scientific questions that they seek to answer, researchers may spend time performing fieldwork away from the desks where AmbiTeam is set up. More investigation is necessary to determine whether the increased motivation facilitated by the system is sustained when researchers are unable to access AmbiTeam.

### 6.2 Remote vs. Co-located Projects

Given the difficulties that researchers have maintaining awareness of their collaborators' work progress at remote locations without the ability to casually "look over their shoulder" [33], we expected that AmbiTeam would have a smaller effect on co-located participants' perceptions of their collaborators. In fact, participants from the co-located teams reported having an easier time determining their co-located collaborators' effort and reported that the system had a smaller effect on their perceptions of their collaborators' priorities.

However, we found that AmbiTeam sometimes provided similar benefits to co-located participants as it did to remote participants. One co-located participant (C1) indicated that using AmbiTeam provided more information about their collaborator's effort than they got from their frequent communications with their collaborator, despite sitting next to each other. In this case, the information provided by AmbiTeam caused this co-located participant to change their expectations to take their collaborator's conflicting priorities into account. It's important to note that neither participant on Team C reported experiencing any negative effects from AmbiTeam's use. This finding indicates that AmbiTeam can be an effective tool even in co-located projects.

### 6.3 Privacy vs. Accurate Activity Tracking

During the post-study interviews, participants mentioned several activities that are part of their workflow but were not tracked by AmbiTeam during the study. However, tracking several of these activities would involve significant privacy violations, namely tracking in-person conversations, emails, and internet browsing history. This leads to the question of how to balance accurate activity tracking with maintaining users' privacy. It is possible that tracking additional, less-sensitive information (e.g., file length, the degree to which a file has been changed), paired with customized tracking on a per-project and per-user basis, may provide enough information that monitoring more-sensitive information like communications between collaborators is unnecessary. Further research is necessary to determine whether this is the case.

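One example of such less-sensitive information, the degree to which a file has changed between saves, can be reported as a single ratio without ever storing or transmitting the file's contents. A hypothetical sketch (our illustration, not a feature of AmbiTeam):

```python
import difflib

def change_degree(old_text, new_text):
    """Fraction of the file that changed between two saves:
    0.0 means identical, 1.0 means completely rewritten.
    Only this ratio would be reported, never the text itself."""
    similarity = difflib.SequenceMatcher(None, old_text, new_text).ratio()
    return 1.0 - similarity
```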
### 6.4 Future Work

One of the many dangers of remote work is loss of motivation. In co-located work, the presence of others has a large and important impact on teammates' motivation [33]. We believe AmbiTeam was able to capture some of the motivational presence of others in remote work using an ambient display. In future work, we will explore other ways in which ambient displays can increase motivation.

Although tracking file activity allows us to gain some measure of effort, it does not encompass many important aspects of work (thinking, discussing, etc.). Future work can explore the use of different metrics for providing team awareness, such as the amount of progress on given tasks. Future work can also explore the long-term effects of systems like AmbiTeam to determine whether the immediate increase in productivity due to being watched decreases over long periods of time, and whether tensions arise due to the limited display of team members' contributions.

We evaluated AmbiTeam with collaborations of academic researchers who, while pursuing different research questions, had similar workflows. It is likely that all knowledge workers (workers who apply knowledge acquired through formal training to develop services and products [14]) can benefit from a system like AmbiTeam, given that they generally have high amounts of screen time. However, it is less clear whether ambient displays work for all types of workers, including those whose jobs are very different from that of a knowledge worker (e.g., service work). In organizations with a clear hierarchy, does the role of the user affect the usefulness of AmbiTeam? Are there types of ambient data from a CEO that would motivate workers? For these reasons, future work includes exploring the use of AmbiTeam in a variety of work contexts.

It is also unclear how well ambient displays work for providing activity information in large teams. Our assessment of AmbiTeam was with small teams of 2-4 people. How well would a system like AmbiTeam work for an entire organization? Given that organizations are frequently divided into smaller teams, is there even a need for systems like AmbiTeam to work with large collaborations?

Many collaborations are highly temporally dispersed, sometimes operating across extreme time zone differences. In these situations, such as with a 12-hour time zone difference, people aren't working at the same time. Can we still effectively summarize progress from their work? Is the provision of activity information about a coworker who is not working at the same time still motivating?

## 7 CONCLUSION

In this paper, we described and evaluated a system meant to assist researchers experiencing the problem of perceived prioritization. We found that, despite shortcomings with regard to activity tracking, AmbiTeam had some effect on users' perceptions of their collaborators' effort as well as on their motivation to work on their collaborative projects. This work has implications for creating effective awareness-based technology for supporting collaborative work, particularly the recommendation that future awareness systems consider (a) using file activity to measure effort and (b) implementing ambient displays that do not interrupt the user's workflow.

## REFERENCES

[1] H. S. Alavi and P. Dillenbourg. Flag: An ambient awareness tool to support informal collaborative learning. In Proceedings of the 16th ACM International Conference on Supporting Group Work, GROUP '10, p. 315-316. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1880071.1880127

[2] L. Ardissono and G. Bosio. Context-dependent awareness support in open collaboration environments. User Modeling and User-Adapted Interaction, 22(3):223-254, 2012.

[3] C. Auerbach and L. B. Silverstein. Qualitative data: An introduction to coding and analysis, vol. 21. NYU Press, 2003.

[4] L. Barrington, M. J. Lyons, D. Diegmann, and S. Abe. Ambient display using musical effects. In Proceedings of the 11th International Conference on Intelligent User Interfaces, IUI '06, p. 372-374. Association for Computing Machinery, New York, NY, USA, 2006. doi: 10.1145/1111449.1111541

[5] H. Beyer and K. Holtzblatt. Contextual Design: Defining Customer-centered Systems. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998.

[6] S. A. Bly, S. R. Harrison, and S. Irwin. Media spaces: bringing people together in a video, audio, and computing environment. Communications of the ACM, 36(1):28-46, 1993.

[7] D. Bodemer and J. Dehler. Group awareness in CSCL environments. Computers in Human Behavior, 27(3):1043-1045, 2011.

[8] J. Brewer, A. Williams, and P. Dourish. A handle on what's going on: Combining tangible interfaces and ambient displays for collaborative groups. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, TEI '07, p. 3-10. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1226969.1226971

[9] D. T. Campbell. Ethnocentrism of disciplines and the fish-scale model of omniscience. Interdisciplinary Relationships in the Social Sciences, 328:348, 1969.

[10] E. Chung, N. Kwon, and J. Lee. Understanding scientific collaboration in the research life cycle: Bio- and nanoscientists' motivations, information-sharing and communication practices, and barriers to collaboration. Journal of the Association for Information Science and Technology, 67(8):1836-1848, 2016.

[11] A. Dahley, C. Wisneski, and H. Ishii. Water lamp and pinwheels: Ambient projection of digital information into architectural space. In CHI 98 Conference Summary on Human Factors in Computing Systems, CHI '98, p. 269-270. Association for Computing Machinery, New York, NY, USA, 1998. doi: 10.1145/286498.286750

[12] M. De Marchi, J. Eriksson, and A. G. Forbes. TransitTrace: Route planning using ambient displays. In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, SIGSPATIAL '15. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2820783.2820857

[13] P. Dourish and V. Bellotti. Awareness and coordination in shared workspaces. In Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work, CSCW '92, p. 107-114. Association for Computing Machinery, New York, NY, USA, 1992. doi: 10.1145/143457.143468

[14] P. Drucker. Landmarks of Tomorrow: A Report on the New "Post-modern" World. Transaction Publishers, New Brunswick, USA, 1996.

[15] T. Erickson, D. N. Smith, W. A. Kellogg, M. Laff, J. T. Richards, and E. Bradner. Socially translucent systems: Social proxies, persistent conversation, and the design of "Babble". In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '99, p. 72-79. Association for Computing Machinery, New York, NY, USA, 1999. doi: 10.1145/302979.302997

[16] Esri. ArcGIS Online, 2020.

[17] W. W. Gaver, A. Sellen, C. Heath, and P. Luff. One is not enough: Multiple views in a media space. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, CHI '93, p. 335-341. Association for Computing Machinery, New York, NY, USA, 1993. doi: 10.1145/169059.169268

[18] E. Glikson, A. W. Woolley, P. Gupta, and Y. J. Kim. Visualized automatic feedback in virtual teams. Frontiers in Psychology, 10:814, 2019.

[19] J. M. Heiner, S. E. Hudson, and K. Tanaka. The information percolator: Ambient information display in a decorative object. In Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology, UIST '99, p. 141-148. Association for Computing Machinery, New York, NY, USA, 1999. doi: 10.1145/320719.322595

[20] F. Heller and J. Borchers. PowerSocket: Towards on-outlet power consumption visualization. In CHI '11 Extended Abstracts on Human Factors in Computing Systems, CHI EA '11, p. 1981-1986. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/1979742.1979901

[21] Zoom Video Communications, Inc. Video conferencing, web conferencing, webinars, screen sharing, 2020.

[22] H. Ishii, C. Wisneski, S. Brave, A. Dahley, M. Gorbet, B. Ullmer, and P. Yarin. ambientROOM: Integrating ambient media with architectural space. In CHI 98 Conference Summary on Human Factors in Computing Systems, CHI '98, p. 173-174. Association for Computing Machinery, New York, NY, USA, 1998. doi: 10.1145/286498.286652

[23] E. Jun, B. A. Jo, N. Oliveira, and K. Reinecke. Digestif: Promoting science communication in online experiments. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):1-26, 2018.

[24] K. Kappel and T. Grechenig. "Show-me": Water consumption at a glance to promote water conservation in the shower. In Proceedings of the 4th International Conference on Persuasive Technology, Persuasive '09. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1541948.1541984

[25] J. Lazar, J. H. Feng, and H. Hochheiser. Research Methods in Human-Computer Interaction. Wiley Publishing, Chichester, United Kingdom, 2010.

[26] G. López and L. A. Guerrero. Notifications for collaborative documents editing. In R. Hervás, S. Lee, C. Nugent, and J. Bravo, eds., Ubiquitous Computing and Ambient Intelligence. Personalisation and User Adapted Services, pp. 80-87. Springer International Publishing, Cham, 2014.

[27] G. López and L. A. Guerrero. Awareness supporting technologies used in collaborative systems: A systematic literature review. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW '17, pp. 808-820. ACM, New York, NY, USA, 2017. doi: 10.1145/2998181.2998281

[28] M. M. Mantei, R. M. Baecker, A. J. Sellen, W. A. S. Buxton, T. Milligan, and B. Wellman. Experiences in the use of a media space. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '91, p. 203-208. Association for Computing Machinery, New York, NY, USA, 1991. doi: 10.1145/108844.108888

[29] M. Minakuchi and S. Nakamura. Collaborative ambient systems by blow displays. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, TEI '07, p. 105-108. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1226969.1226992

[30] S. Morrison-Smith, C. Boucher, A. Bunt, and J. Ruiz. Elucidating the role and use of bioinformatics software in life science research. In Proceedings of the 2015 British HCI Conference, British HCI '15, p. 230-238. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2783446.2783581

[31] V. Occhialini, H. van Essen, and B. Eggen. Design and evaluation of an ambient display to support time management during meetings. In P. Campos, N. Graham, J. Jorge, N. Nunes, P. Palanque, and M. Winckler, eds., Human-Computer Interaction - INTERACT 2011, pp. 263-280. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.

[32] K. Olesen and M. D. Myers. Trying to improve communication and collaboration with information technology: An action research project which failed. Information Technology & People, 12(4):317-332, 1999.

[33] J. S. Olson and G. M. Olson. Bridging distance: Empirical studies of distributed teams. Human-Computer Interaction in Management Information Systems, 2:27-30, 2006.

[34] B. Otjacques, R. McCall, and F. Feltz. An ambient workplace for raising awareness of internet-based cooperation. In Y. Luo, ed., Cooperative Design, Visualization, and Engineering, pp. 275-286. Springer Berlin Heidelberg, Berlin, Heidelberg, 2006.

[35] M. D. Rodríguez, R. R. Roa, J. E. Ibarra, and C. M. Curlango. In-car ambient displays for safety driving gamification. In Proceedings of the 5th Mexican Conference on Human-Computer Interaction, MexIHC '14, p. 26-29. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2676690.2676701

[36] R. Scupin. The KJ method: A technique for analyzing data derived from Japanese ethnology. Human Organization, pp. 233-237, 1997.

[37] X. Shen and P. Eades. Using MoneyColor to represent financial data. In Proceedings of the 2005 Asia-Pacific Symposium on Information Visualisation - Volume 45, APVis '05, p. 125-129. Australian Computer Society, Inc., AUS, 2005.

[38] S. Streng, K. Stegmann, H. Hußmann, and F. Fischer. Metaphor or diagram? Comparing different representations for group mirrors. In Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group: Design: Open 24/7, OZCHI '09, p. 249-256. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1738826.1738866

[39] H. Subramonyam, S. M. Drucker, and E. Adar. Affinity Lens: Data-assisted affinity diagramming with augmented reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2019.

[40] Google Brain Team. TensorFlow, 2021.

[41] R. Wettach, C. Behrens, A. Danielsson, and T. Ness. A thermal information display for mobile applications. In Proceedings of the 9th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI '07, p. 182-185. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1377999.1378004

papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/Xh_BzLS_3p/Initial_manuscript_tex/Initial_manuscript.tex
| 1 |
+
§ AMBITEAM: PROVIDING TEAM AWARENESS THROUGH AMBIENT DISPLAYS

Category: Research

§ ABSTRACT

Due to the COVID-19 pandemic, research is increasingly conducted remotely, without the benefit of the informal interactions that help maintain awareness of each collaborator's work progress. We developed AmbiTeam, an ambient display that shows activity related to the files of a team project, to help collaborators preserve a sense of the team's involvement while working remotely. We found that using AmbiTeam did have a quantifiable effect on researchers' perceptions of their collaborators' project prioritization. We also found that use of the system motivated researchers to work on their collaborative projects. This effect is known as "the motivational presence of others," the absence of which is one of the key challenges that make distance work difficult. We discuss how ambient displays can support remote collaborative work by recreating the motivational presence of others.

Keywords: Collaboration; remote work; awareness; ambient display

Index Terms: Human-centered computing-Human-Computer Interaction-Empirical studies in HCI

§ 1 INTRODUCTION
With the advent of the COVID-19 pandemic, research is increasingly conducted remotely, without the affordances of informal interactions that enhance fluidity and interactivity in teams. Remote collaboration has always faced numerous challenges, such as decreased awareness of colleagues and their context [33] and a limited motivational sense of the presence of others [33]. Awareness of one's collaborators is necessary for ensuring that each teammate's contributions are compatible with the collaboration's collective activity [13]. It also plays an essential role in determining whether an individual's actions mesh with the group's goals and progress [13]. The motivational sense of the presence of others complements awareness by producing "social facilitation" effects, like driving people to work more when they are not alone [33].

Similarly, a researcher's perception of their collaborators' effort in a project can profoundly impact collaboration [10]. In particular, researchers tend to feel anxious about the success of their collaboration when they are concerned that competing priorities result in less commitment to the project [10]. The shift to remote work likely exacerbates this challenge, since remote researchers lack awareness of their collaborators' activities.

Together, these issues pose a significant challenge to collaboration. It is essential that we address them, given that the efficacy of science significantly improves when researchers from diverse backgrounds collaborate on a project [9]. We hypothesize that since a heightened awareness of a collaborator's research activities might reveal project prioritization, improved awareness could lessen the anxiety caused by uncertainty regarding a collaborator's investment. While various existing systems improve awareness in remote teams [6, 7, 17, 18, 26, 28, 34], no solution exists that solves the challenge of perceived prioritization.

To this end, we developed a system, AmbiTeam (shown in Figure 1), to improve a researcher's awareness of their collaborators' project-related activity. The system tracks and visualizes file changes in user-specified project directories to indicate how much effort or work a collaborator has put in on the project. We performed a user evaluation of the system with ten researchers in co-located and remote collaborations to investigate the effect of ambiently providing project-related activity information on researchers' work behavior and perceptions of effort. We found that AmbiTeam had some impact on researchers' motivation to work on the project as well as on perceptions of their collaborators' effort. The key contributions of this paper are:
Figure 1: Example visualization of a team's project-related activity, which was shown on a tablet functioning as an ambient display in each of our users' workplaces. The visualization shows activity from five fictional teammates using randomly generated data. Each team member has an area graph in which each point represents that member's activity for one day.

* Increased understanding of how to facilitate team awareness

* A deeper understanding of the motivating effect of awareness on work behavior

* New insights into the impact of increased awareness on perceptions of remote collaborators' effort
§ 2 PRIOR WORK

We examine studies on awareness-based systems for supporting collaboration as well as existing solutions for unobtrusively providing information via ambient displays.

§ 2.1 AWARENESS-BASED SYSTEMS

Several technologies have been developed to help remote workers become aware of their collaborators' research activities. For example, tools that inform members of remote teams about the timing of each other's activities and contributions have been shown to affect team coordination and learning [7]. Furthermore, systems that provide real-time, often visual, feedback about team behavior can mitigate "process loss" (e.g., effort) in teams [18]. Some early technology (e.g., [6, 17, 28]) featured permanently open audiovisual connections between locations, with the idea that providing unrestricted face-to-face communication would enable collaborative work as if the researchers were in the same room.

Recently, Glikson et al. [18] created a tool that visualizes effort, determined by measuring the number of keystrokes that members of a collaboration make in a task collaboration space. They found that this tool improved both team effort and performance [18]. A number of modern systems typically rely on notifications to provide awareness [27], which are generally considered disruptive [2]. Given the importance of avoiding "dramatic changes in work habits" [32], it is likely that an effective system needs to be as unobtrusive as possible.
§ 2.2 AMBIENT DISPLAYS

In contrast to the methods employed by existing awareness systems, ambient displays are information sources designed to communicate contextual or background information in the periphery of the user's awareness, requiring the user's attention only when it is appropriate or desired [19]. Methods for conveying information via ambient displays include the use of light levels [11, 22], wind [29], temperature [41], music [4], and art [19]. For example, one of the earliest ambient systems, "ambientRoom", used visual displays of water ripples to convey information about the activities of a laboratory hamster and light patches to indicate the amount of human movement in an atrium [22]. Ambient displays are not limited to immersive environments and can also take the form of standalone media displays that allow multiple people to simultaneously receive information [11]. Applications of ambient displays include educating users about resource (e.g., water [24, 26] and power [20]) consumption, improving driving [12, 35], monitoring finances [37], and assisting time management during meetings [31].

Some ambient systems have been developed to support collaboration by tackling the issue of determining availability [1, 8]. One system, "Nimio," used a series of physical toys to indicate the presence and availability of collaborators in separate offices [8]. Toys in one office would cause associated toys in other offices to light up with colored lights when they detected sound and movement, indicating that a collaborator was in their office and communicating whether the collaborator appeared to be busy. Alavi and Dillenbourg [1] placed colored light boxes on tables in a student space that allowed students to indicate their presence, availability, and the coursework they were currently working on, so that any given student could be aware of other students with whom they could collaborate.

Streng et al. [38] used ambient displays to convey information about the quality of collaboration between students working on a group task. In that work, collaboration performance was measured by evaluating student adherence to a collaboration script that specified different phases and tasks to be carried out by individual team members. Performance information was communicated to the student participants either via a diagram featuring charts and numbers or via an ambient art display showing a nature scene featuring trees, the sun or moon, and sometimes clouds and rain.

§ 2.3 RESEARCH QUESTIONS AND STUDY GOALS
We hypothesize that promoting awareness by providing up-to-date information about a collaborator's project activities will affect a researcher's perception of their collaborator's effort. To avoid dramatically changing work habits, we pursue an ambient-based approach where information is conveyed without requiring the user's attention. To pursue these goals, we sought to answer the following questions:

RQ1. Can tracking file activity give researchers a sense of their teammates' efforts?

RQ2. Will ambient information on team project activities affect perceptions of collaborators' effort?

RQ3. What effects will providing team project activity information have on work behavior?
§ 3 SYSTEM DESIGN

§ 3.1 PRIVACY AND SCOPE

Project effort is difficult to characterize, as it includes activities that are impossible to track (e.g., thinking about a project) or are potentially sensitive (e.g., emails, phone calls). In order to respect the privacy of users, we avoid monitoring activities such as phone calls and emails and instead focus on the activity of files in user-specified project directories. This allows AmbiTeam to observe project activities related to the various stages of the research life-cycle identified by prior work [30]. For example, during experimentation, the system will be able to detect changes in electronic lab notebooks and cheat sheets used by researchers [30] as well as in data. AmbiTeam will also observe data analysis by tracking changes in analysis code or scripts (also discussed in [30]) as well as in generated output. Furthermore, the system will be able to monitor publication preparation by detecting changes in writing-related materials.
§ 3.2 ACTIVITY TRACKING

Activity is detected using a desktop application that monitors specified directories for file creation, deletion, and change events. AmbiTeam first prompts the user to select a directory to be watched and, on the back end, monitors the metadata of the directory's files without viewing the files' contents. Once a file or directory in the watched directory is created, deleted, or changed, the user's ID and the time of the file event are encrypted and sent to a server.
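The paper does not give the monitor's implementation, but the metadata-only design described above can be sketched with a simple polling loop: take periodic snapshots of each file's modification time and size, diff consecutive snapshots into create/delete/change events, and report only who and when. All function names here are hypothetical, and a production version would likely use OS file-system notifications (e.g., inotify) plus real encryption and networking rather than this minimal sketch.

```python
import os
import time

def snapshot(root):
    """Map each file path under root to its metadata (mtime, size) without reading contents."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
                state[path] = (st.st_mtime, st.st_size)
            except OSError:
                pass  # file vanished between walk and stat
    return state

def diff_snapshots(before, after):
    """Return file events as (event, path) tuples: created, deleted, or changed."""
    events = []
    for path in after.keys() - before.keys():
        events.append(("created", path))
    for path in before.keys() - after.keys():
        events.append(("deleted", path))
    for path in after.keys() & before.keys():
        if after[path] != before[path]:
            events.append(("changed", path))
    return events

def watch(root, user_id, report, interval=5.0):
    """Poll `root`; call `report(user_id, timestamp)` once per detected event."""
    before = snapshot(root)
    while True:
        time.sleep(interval)
        after = snapshot(root)
        for _event, _path in diff_snapshots(before, after):
            report(user_id, time.time())  # only who and when leave the machine
        before = after
```

Note that `report` receives only the user ID and event time, mirroring the privacy stance above: file names and contents never leave the user's machine.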
§ 3.3 DISPLAYING ACTIVITY

The number of activities occurring each day for each user is visualized as a point on an area graph. An area graph for each collaborator is displayed on a tablet, showing each day's cumulative activity in real time. The height of the graph on each day indicates the total amount of activity at that time, and the area of the graph shows the total amount of activity over the course of a two-week window. Activity is normalized across the team to facilitate comparisons between team members. Figure 1 shows an example.
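The two data-preparation steps described above (binning events into daily counts, then normalizing across the team) could look like the following sketch. The exact normalization rule is an assumption: dividing every member's series by the single team-wide maximum makes graph heights directly comparable between teammates, which is what the comparison goal implies.

```python
from collections import Counter

def daily_counts(event_dates, window):
    """Bin activity events (a list of dates) into one count per day of `window`."""
    per_day = Counter(event_dates)
    return [per_day.get(day, 0) for day in window]

def normalize_team(series_by_member):
    """Scale every member's daily series by the team-wide maximum, so all
    values fall in [0, 1] and heights are comparable across members."""
    team_max = max((max(s) for s in series_by_member.values() if s), default=0)
    if team_max == 0:
        return {m: [0.0] * len(s) for m, s in series_by_member.items()}
    return {m: [v / team_max for v in s] for m, s in series_by_member.items()}
```

Each normalized series then maps directly onto one area graph in the visualization, with one point per day of the two-week window.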
§ 4 METHOD

§ 4.1 PARTICIPANTS

To determine whether AmbiTeam facilitates team awareness, we recruited 10 scientists, aged 21 to 33 ($\mu = 27.3$, $\sigma = 3.5$; three female), who are part of four existing collaborations across four institutions in the United States. The collaborations are labeled A-D. The research area, title, and group of each participant are presented in Table 1. Participants were recruited via inter-departmental email, and our methodology was approved by our institutional review board. The configuration of the teams participating in this study ranged from fully remote (team A) to fully co-located (teams C and D). Team B had a mixed composition: participants B2 and B3 were co-located, while B1 and B4 were each at different locations. All co-located teams worked in the same offices as their collaborators and reported working closely together.

Table 1: Participant backgrounds.
ID  Research Area                                   Title
A1  Biological Anthropology                         Post-Doc
A2  Vertebrate Paleontology                         Ph.D. Student
B1  Computer Vision and Machine Learning            Master's Student
B2  Computational Linguistics                       Post-Doc
B3  Computer Vision and Human-Computer Interaction  Master's Student
B4  Human-Computer Interaction                      Ph.D. Student
C1  Cybersecurity                                   Ph.D. Student
C2  Cybersecurity                                   Ph.D. Student
D1  Cybersecurity                                   Ph.D. Student
D2  Cybersecurity                                   Ph.D. Student
Our participants sought to answer a variety of scientific questions, which can be broadly summarized as:

* Understanding Faunal Change: identifying what happened to animals during the major climate event called the Paleocene-Eocene Thermal Maximum (Team A).

* Enabling Communicative Mechanisms Between Humans and Computers: bringing together humans' natural language capabilities and computers' data processing capabilities to allow peer-to-peer collaboration between humans and computers (Team B).

* Personalized Computer Security: using personal information to accomplish security tasks like authentication. This includes extracting nuanced personal information (e.g., vocal characteristics) from easily obtained information, such as pictures of people's faces (Teams C & D).
§ 4.2 PROCEDURE

Participants were each given a tablet with AmbiTeam's display, had the activity monitor installed on their work computers, and were instructed on how both the activity monitor and the visualization worked. Participants then completed a pre-test in which they estimated the amount of effort that each participating researcher, including themselves, was putting into the project on a scale from 1 to 9, with 1 being "very low" and 9 being "very high." Participants were also asked to explain the reasoning behind their rankings. Over the course of four weeks, on two randomly chosen days a week, participants were asked to repeat this assessment via email. During this time, AmbiTeam's visualization was turned off in order to prevent participants from consulting the visualization, since the goal was to determine whether the system's use affected their perception, not whether they could read the chart. To minimize visualization downtime, participants were given up to 24 hours to respond with their assessment.

At the end of the study, we conducted semi-structured interviews with the participants. By using the semi-structured interview technique, we were able to cover additional topics as they were encountered, reducing the likelihood that important issues were overlooked [25]. When possible, interviews took place at each participant's primary workspace (office or lab). Remotely located participants were interviewed over Zoom [21]. Interviews were approximately 30 minutes in duration and were recorded in audio format, then transcribed.

Participants were first asked to educate us about the collaborative research that they participated in during the study, including their roles on the project(s) and the goal(s) of the research. We then asked participants to discuss their experiences using AmbiTeam as well as any changes they would propose and their likelihood of using the system in the future.

§ 4.3 QUALITATIVE DATA ANALYSIS
We performed a bottom-up analysis of participants' responses by constructing an affinity diagram (a.k.a. the KJ method) [5, 39] to expose prevailing themes. This approach is similar to qualitative coding and follows the same steps for qualitative analysis via coding as outlined by Auerbach and Silverstein [3]. This is an appropriate method for semi-structured interviews, as qualitative coding allows the same code to be applied to different sections of the interview [23]. Moreover, affinity diagramming has seen widespread use for qualitative data analysis over the last 50 years [36].

§ 5 RESULTS

Participants' responses to interview questions and bi-weekly assessments provided insight into their experiences with AmbiTeam.
§ 5.1 INTERACTIONS WITH THE SYSTEM

Most participants reported briefly looking at the visualization multiple times a day, often because the visualization was placed within their general field of view (although care was taken to ensure that the visualization did not obstruct the view of the participant's workstation). However, participants did not intentionally check the visualization for updates, indicating that the information generally stayed in the background.

Figure 2: AmbiTeam's components shown in A1's workspace. The visualization was placed in a different location, in the periphery of A1's attention, during the study.

"It wasn't like I checked it intentionally several times a day. It was more of that I leaned back in the chair to think about something and while looking at other things on my desk, I would see it." C1
The information gleaned from the visualization was typically combined with information gathered during communications with collaborators. This information included knowledge about circumstances (e.g., job interviews, other papers and projects), project deadlines and updates, and each researcher's role in the project. In some instances, the fact that collaborators were communicating at all was enough of an indication that those researchers were prioritizing the project. Participant B3, however, based their ratings solely on their communications with their collaborators because they did not trust AmbiTeam.

"I couldn't place enough trust in the system yet to factor it in positively or negatively into my perception of prioritization." B3

Most participants explicitly stated that using the system did not interrupt their workflow. This was partly due to the placement of the visualization within the user's workspace. Furthermore, the file tracking software was passive in nature: once the user had selected their directories, no further action was needed. Participant C1 also remarked that the passive nature of the data collection yielded more information than their usual workflow, because their usual workflow (Git) relies on users to push information.
§ 5.2 DETERMINING ENGAGEMENT

To determine whether tracking file activity can give teammates a sense of their teammates' efforts (RQ1), we asked open-ended questions during each bi-weekly assessment and conducted a follow-up interview at the end of the study. We found that participants felt AmbiTeam's monitoring method gave a measure of user engagement.

"Tracking over time as you change it, it's simple so it does give you a measure of whether or not the person is engaged. Or not engaged. So I think it's a good measurement of that" C1

However, participants reported several activities integral to their work that were not tracked by the system. In general, these activities were related to collaboration, idea development, and management. Some of the suggested activities are likely fairly easy to take into account, such as tracking the number of files in a directory (e.g., a library of literature for a project), the size of files (e.g., as figures get made and manuscripts and code get written), written meeting minutes, and the number of times a program is run. Others could be tracked by the existing software if users changed their behavior, such as making handwritten notes in a digital notebook as opposed to on physical pieces of paper.

However, many of the suggested activities (e.g., tracking emails, phone calls, internet searches, time spent on the top window of a computer) are difficult to take into account without invading privacy. Several participants stated that they wouldn't want personal data to be tracked unless it was somehow necessary for the team. Even then, participants requested caution when setting up AmbiTeam in order to prevent project-sensitive data from being tracked. For example, during the setup of group D, participants deliberately chose directories that contained metadata and statistics about the participants in their studies but did not contain identifiable data.

Finally, participants believed that, for optimal use, the files and activities chosen for monitoring depend on the context of the user's work. They suggested that some metrics would be more suited to some roles than others. For example, since B4 was running user studies, the length of their files represents the amount of data collected and is more indicative of work than the number of files, which merely reflects the number of participants. Certain file types, such as those automatically created by ArcGIS [16] (a geographic information system mapping technology used by A1) and TensorFlow [40] models (a tool for building machine learning models used by B1), are generated in bulk and don't necessarily indicate massive amounts of effort.
§ 5.3 PERCEPTIONS OF EFFORT

We wanted to know whether AmbiTeam affected researchers' perceptions of their collaborators' effort on a project (RQ2). To do this, we tested whether there is a correlation between the average activity levels of a collaboration (as measured by our system) and the researchers' perceptions of how much effort their collaborators were putting in. We performed a Pearson's product-moment correlation test on each participant's average displayed activity (activity) and the change in their personal ratings (personal ratings). We found no correlation ($r = 0.09$, $p > 0.05$) between personal ratings and activity. We also performed a Pearson's product-moment correlation test on activity and the change in ratings assigned to participants by their collaborators (collaborator ratings). We found a weak positive correlation ($r = 0.22$, $p = 0.011$) between collaborator ratings and activity: as each participant's apparent activity increased, their collaborators' ratings of them increased. In summary, using AmbiTeam did not affect users' reported perceptions of their own effort; however, it did affect users' perceptions of their collaborators' effort.
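For readers unfamiliar with the statistic reported above, Pearson's product-moment correlation coefficient is the covariance of two samples divided by the product of their standard deviations. A minimal sketch (in practice one would use a statistics package such as scipy.stats.pearsonr, which also returns the p-value):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # covariance numerator and the two standard-deviation terms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 indicates that ratings rose with displayed activity, near -1 that they fell, and near 0 that the two were unrelated, which is how the r values above should be read.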
§ 5.4 USER BEHAVIORS

To determine what effects providing team project activity information had on work behavior (RQ3), we asked open-ended questions during each bi-weekly assessment and conducted a follow-up interview at the end of the study. We found that, on the whole, participants did not believe that using the system changed their collaborators' behaviors. However, many reported changing their own behaviors. In some cases, participants changed the way their work was conducted to boost visibility and ensure that their collaborators knew that they were involved. For example, participant A2 described a time when they were creating a wiki for their project online. Since AmbiTeam was unable to track the changes made to their online wiki, A2 wrote much of the text for the wiki in a text editor that saved changes to a file tracked by the system before uploading the text to the wiki. This ensured that their efforts to update the wiki appeared on the visualization. In addition, several participants mentioned saving their files more frequently so that their changes would register as activity and appear on the visualization.

Many participants reported that AmbiTeam made them feel more motivated to work on their projects. Sometimes this was due to participants noticing a lull in their own activity, which reminded them to work on the project. Motivation was also often attributed to seeing their collaborators' activity.

"Having a view of other people are working hard and then you don't want to be the last one. It's like a challenge." D2

Participant A2 noted that the system had a positive impact due to its effect on motivation and a desire to work effectively.

"Positive, because it helped motivate me to make the project a priority even though it's not the most fun thing to work on." A2
§ 5.5 FUTURE DIRECTIONS AND APPLICATIONS

All participants stated that they would be willing to use AmbiTeam, or a refined version of AmbiTeam, in the future for either professional or casual use. Several participants mentioned a desire to use the system in research collaborations to keep abreast of what their collaborators were up to. For example, participant C1 mentioned that the prior day's activity was something "I could glance at as sort of like a morning statistics for yesterday." Another use of the system would be for a project manager to balance the workload across researchers on a project, as described by participant B3: "I probably would want to use it just to see how much work each of my teammates is doing so that the load is balanced out evenly."

Other participants reported that they would use AmbiTeam in a classroom setting, both as students working with group-mates whom they don't know well or didn't pick, and as professors managing class groups.

"I've had problems in the past ... they didn't do anything until the last week and even then in the last week, you know. I may have built the vast majority of it. They still get the same amount of credit." C1

Several participants also stated that they would use AmbiTeam for personal use. Participant A1 described not being interested in worrying about their collaborators' productivity, but was interested in using the system to take a "long term perspective" and revisit their own project-related activity. The goal would be to have a better understanding of the work that they had done in the past. In a similar vein, participant B2, a self-proclaimed "data junky," expressed an interest in using AmbiTeam to gain a deeper insight into their workflow. A1 also disclosed a belief that AmbiTeam could be useful for recent Ph.D. graduates who have transitioned from working solely on their dissertation to managing multiple projects and needing to have a better grasp of their priorities. Finally, A2 expressed an interest in using the system with a friend to stay motivated to work.

"In the same way that it's better to go to the gym with a friend because it motivates you because even on that one day when you really don't feel like going they'll go and then they'll help you get over that hump." A2

Participants also expressed a desire to extend AmbiTeam to support additional tasks. For example, participants conveyed an interest in integrating AmbiTeam with task management systems, allowing users to connect the activity shown on the visualization with specific tasks and goals. Participant C2 also suggested incorporating a messaging system that would allow a user to contact a collaborator when they notice a lull in activity.

"[If] I made some changes that we needed to discuss that I could just look at my collaborator and just tap ... saying hey, there's something that needs to be discussed." C2
§ 6 DISCUSSION

§ 6.1 MOTIVATIONAL PRESENCE OF OTHERS

Many of the participants reported feeling more motivated and productive while using AmbiTeam. These feelings can likely be attributed to the motivational presence of others [33]. Our participants' responses indicated they were aware of being watched by their teammates, which changed their behavior, as described by B1:

"Because I know we are being tracked, I want to make use of time to work efficiently." B1

Researchers often use the presence of specific teammates in a shared space to guide their work [15]. Similarly, our participants also reported feeling motivated by seeing their collaborators work on the project, as stated by C2:

"Every single time that happened I was like, oh he's working, I should probably work on it too." C2

Unfortunately, these effects often dissipate once the participant no longer has a sense of the presence of their collaborators. Depending on the scientific questions that they seek to answer, researchers may spend time performing fieldwork away from the desks where AmbiTeam is set up. More investigation is necessary to determine whether the increased motivation facilitated by the system is sustained when researchers are unable to access AmbiTeam.
§ 6.2 REMOTE VS. CO-LOCATED PROJECTS
|
| 218 |
+
|
| 219 |
+
Given the difficulties that researchers have maintaining awareness of their collaborators' work progress at remote locations without the ability to casually "look over their shoulder" [33], we expected that AmbiTeam would have a smaller effect on co-located participants' perceptions of their collaborators. In fact, participants from the co-located teams reported having an easier time determining their co-located collaborators' effort and reported having a smaller effect on their perception of their collaborator's priorities.
|
| 220 |
+
|
| 221 |
+
However, we found that AmbiTeam sometimes provided co-located participants with benefits similar to those it provided remote participants. One co-located participant (C1) indicated that AmbiTeam gave them more information about their collaborator's effort than their frequent communication with that collaborator did, despite the two sitting next to each other. In this case, the information provided by AmbiTeam led this participant to adjust their expectations to account for their collaborator's conflicting priorities. Notably, neither participant on Team C reported experiencing any negative effects from AmbiTeam's use. This finding indicates that AmbiTeam can be an effective tool even in co-located projects.
|
| 222 |
+
|
| 223 |
+
§ 6.3 PRIVACY VS. ACCURATE ACTIVITY TRACKING
|
| 224 |
+
|
| 225 |
+
During the post-study interviews, participants mentioned several activities in their workflow that AmbiTeam did not track during the study. However, tracking several of these activities, namely in-person conversations, emails, and internet browsing history, would involve significant privacy violations. This raises the question of how to balance accurate activity tracking with maintaining users' privacy. It is possible that tracking additional, less-sensitive information (e.g., file length, degree to which a file has been changed), paired with customized tracking on a per-project and per-user basis, may provide enough information that monitoring more-sensitive information like communications between collaborators becomes unnecessary. Further research is necessary to determine whether this is the case.
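One less-sensitive metric of this kind, the degree to which a file has changed, can be reported as an aggregate number without ever exposing the file's content to teammates. The sketch below is illustrative only, not AmbiTeam's implementation; the function name and the ratio-based metric are assumptions:

```python
import difflib

def change_summary(old_text: str, new_text: str) -> dict:
    """Summarize how much a file changed without exposing its content.

    Only coarse, less-sensitive metrics leave this function: the new file
    length and the fraction of the file that changed since the last check.
    """
    similarity = difflib.SequenceMatcher(None, old_text, new_text).ratio()
    return {
        "length": len(new_text),
        "changed_fraction": round(1.0 - similarity, 3),
    }

summary = change_summary("abcd", "abXd")
```

An awareness display would then only ever see numbers like these, never the text that produced them.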
|
| 226 |
+
|
| 227 |
+
§ 6.4 FUTURE WORK
|
| 228 |
+
|
| 229 |
+
One of the many dangers of remote work is loss of motivation. In co-located work, the presence of others has a large and important impact on teammates' motivation [33]. We believe AmbiTeam was able to capture some of this motivational presence in remote work using an ambient display. In future work, we will explore other ways in which ambient displays can increase motivation.
|
| 230 |
+
|
| 231 |
+
Although tracking file activity allows us to gain some measure of effort, it does not encompass many important parts of work (thinking, discussing, etc.). Future work can explore the use of different metrics for providing team awareness, such as the amount of progress on given tasks. In addition, future work can explore the long-term effects of systems like AmbiTeam to determine whether the immediate increase in productivity due to being watched decreases over long periods of time and whether tensions arise due to the limited display of team members' contributions.
|
| 232 |
+
|
| 233 |
+
We evaluated AmbiTeam with collaborations of academic researchers who, while pursuing different research questions, had similar workflows. It is likely that all knowledge workers (workers who apply knowledge acquired through formal training to develop services and products [14]) can benefit from a system like AmbiTeam, given that they generally have high amounts of screen time. However, it is less clear whether ambient displays work for all types of workers, including those whose jobs are very different from that of a knowledge worker (e.g., service work). In organizations with a clear hierarchy, does the role of the user affect the usefulness of AmbiTeam? Are there types of ambient data from a CEO that would motivate workers? For this reason, future work includes exploring the use of AmbiTeam in a variety of work contexts.
|
| 234 |
+
|
| 235 |
+
It is also unclear how well ambient displays work for providing activity information in large teams. Our assessment of AmbiTeam was with small teams of 2-4 people. How well will a system like AmbiTeam work for an entire organization? Given that organizations are frequently divided into smaller teams, is there even a need for systems like AmbiTeam to work with large collaborations?
|
| 236 |
+
|
| 237 |
+
Many collaborations are highly temporally dispersed, sometimes operating across extreme time zone differences. In these situations, such as with a 12-hour time zone difference, people are not working at the same time. Can we still effectively summarize progress from their work? Is the provision of activity information about a coworker who is not working at the same time still motivating?
|
| 238 |
+
|
| 239 |
+
§ 7 CONCLUSION
|
| 240 |
+
|
| 241 |
+
In this paper, we described and evaluated a system meant to assist researchers experiencing the problem of perceived prioritization. We found that, despite shortcomings with regard to activity tracking, AmbiTeam had some effect on users' perceptions of their collaborators' effort as well as their motivation to work on their collaborative project. This work has implications for creating effective awareness-based technology for supporting collaborative work, particularly the recommendation that future awareness systems consider (a) using file activity to measure effort and (b) implementing ambient displays that do not interrupt the user's workflow.
|
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/e2xKwBi2RIq/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,227 @@
| 1 |
+
# Audiovisual AR concepts for laparoscopic subsurface structure navigation
|
| 2 |
+
|
| 3 |
+
Authors anonymized for peer review.
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
Background: The identification of subsurface structures during resection wound repair is a challenge during minimally invasive partial nephrectomy. Specifically, major blood vessels and branches of the urinary collecting system need to be localized under time pressure as target or risk structures during suture placement. Methods: This work presents concepts for AR visualization and auditory guidance based on tool position that support this task. We evaluated the concepts in a laboratory user study with a simplified, simulated task: the localization of subsurface target points in a healthy kidney phantom. We evaluated the task time, localization accuracy, and perceived workload for our concepts and a control condition without navigation support. Results: The AR visualization improved the accuracy and perceived workload over the control condition. We observed similar, non-significant trends for the auditory display. Conclusions: Further clinically realistic evaluation is pending. Our initial results indicate the potential benefits of our concepts in supporting laparoscopic resection wound repair.
|
| 8 |
+
|
| 9 |
+
Index Terms: Human-centered computing-Visualization; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Mixed / augmented reality; Human-centered computing-Human computer interaction (HCI)-Interaction devices-Sound-based input / output; Applied computing-Life and medical sciences
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
### 1.1 Motivation
|
| 14 |
+
|
| 15 |
+
The field of augmented reality (AR) for laparoscopic surgery has inspired broad research over the past decade [2]. This research aims to alleviate the challenges that are associated with the indirect access in such operations. One challenging operation that has attracted much attention from the research community is laparoscopic or robot-assisted partial nephrectomy (LPN/RPN) [17, 19]. LPN/RPN is the standard treatment for early-stage renal cancer. The operation's objective is to remove the intact (i.e., entire) tumor from the kidney while preserving as much healthy kidney tissue as possible. Three challenging phases in this operation can particularly benefit from image guidance or AR navigation support [19]: i) the management of renal blood vessels before the tumor resection, ii) the intraoperative resection planning and the resection, iii) the repair of the resection wound after the tumor removal. Although numerous solutions have been proposed to support urologists during the first two phases [19], no dedicated AR solutions exist for the third. Specifically, urologists need to identify major blood vessels or branches of the urinary collecting system that have been severed or that lie closely under the resection wound's surface and could be damaged during suturing. One additional challenging factor is that this surgical phase is performed under time pressure due to the risk of ischemic damage or increased blood loss (depending on the vascular clamping strategy). There are some technical challenges in providing correct AR registration and meaningful navigation support during this phase. One challenge that affects the visualization of AR information is the removal of renal tissue volume that leaves an undefined tissue surface that is inside the original organ borders. 
In this work, we present an AR visualization and an auditory display concept that rely on the position of a tracked surgical tool to support the urologist in identifying and locating subsurface structures. We also report a preliminary proof-of-concept evaluation through a user study with an abstracted task. AR registration and the clinical evaluation of our concepts lie outside the scope of this work.
|
| 16 |
+
|
| 17 |
+
### 1.2 Related work
|
| 18 |
+
|
| 19 |
+
Multiple reviews provide a comprehensive overview of navigation support approaches for LPN/RPN [3, 17, 19]. Although no dedicated solutions exist to support urologists during the resection wound repair phase, one application has been reported, in which the general AR model of intrarenal structures was used during renorrhaphy [24]. This approach, however, does not address the unknown resection wound surface geometry and potential occlusion issues. Moreover, multiple solutions have been proposed to visualize intrarenal vascular structures. These include solutions in which a preoperative model of the vascular structure is rendered in an AR overlay [23, 30]. This may be less informative after an unknown tissue volume has been resected. Other methods rely on real-time detection of subsurface vessels [1, 18, 29]. However, these are unlikely to perform well when the vessels are clamped (suppressing blood flow and pulsation) or when the organ surface is occluded by blood. Outside of LPN/RPN, such as in angiography exploration, visualization methods have been developed to communicate the spatial arrangement of vessels. These include the chromadepth [28] and pseudo-chromadepth methods [20, 26], which map vessel depth information to color hue gradients. Kersten-Oertel et al. [21] showed that color hue mapping, along with contrast grading, performs well in conveying depth information for vascular structures. The visualization of structures based on tool position has inspired work both inside and outside of the field of LPN/RPN: Singla et al. [27] proposed visualizing the tool position in relation to the tumor prior to resection in LPN/RPN. Multiple visualizations have been proposed for the spatial relationship between surgical needles and the surrounding vasculature [15]. However, these visualizations address minimally invasive needle interventions, where the instrument moves in between the structures of interest.
|
| 20 |
+
|
| 21 |
+
In addition to visual approaches to supporting LPN/RPN as well as other navigated applications, recent works have shown that using sound to augment or replace visual cues can be employed to aid task completion. By using so-called auditory display, changes in a set of navigation parameters can be mapped to changes in parameters of a real-time sound synthesizer. This can be found in common automobile parking assistance systems: the distance of the automobile to a surrounding object is mapped to the inter-onset-interval (i.e., the time between tones) of a simple synthesizer. Using auditory display has been motivated by the desire to increase clinician awareness, replacing the lost sense of touch when using teleoperated devices, or help clinicians correctly interpret and follow navigation paths. There have been, however, relatively few applications of auditory display in medical navigation. Evaluations have been performed for radiofrequency ablation [5], temporal bone drilling [8], skull base surgery [9], soft tissue resection [13], and telerobotic surgery [6, 22]. These have shown auditory display to improve recognition of structure distance and accuracy and diminish cognitive workload and rates of clinical complication. Disadvantages have included increased non-target tissue removal and more lengthy task completion times. For a thorough overview of auditory display in medical interventions, see [4].
|
| 22 |
+
|
| 23 |
+
## 2 NAVIGATION METHODS
|
| 24 |
+
|
| 25 |
+
We pursued two routes to provide navigation content to the urologist: The first approach is the AR visualization of preoperative anatomical information in a video see-through setting. The second approach is an auditory display.
|
| 26 |
+
|
| 27 |
+
### 2.1 AR visualization
|
| 28 |
+
|
| 29 |
+
Our AR concept aims to provide information about intrarenal risk structures to the urologist. We therefore based our visualization on preoperative three-dimensional (3D) image data of the intrarenal vasculature and collecting system. These were segmented and exported as surface models. We assumed that the resection volume and resulting wound geometry are unknown. Simply overlaying the preoperative models onto the laparoscopic video stream would include all risk structures that were removed with the resection volume. We therefore propose a tool-based visualization, in which only information about risk structures in front of a pointing tool is rendered and overlaid onto the video stream. To this end, the urologist can place a spatially tracked pointing tool on the newly created organ surface (i.e., the resection ground) and see the risk structures beneath. We placed a virtual circular plane with a diameter of 20 mm around the tooltip, perpendicular to the tool axis. The structures in front of this plane (following the tool direction) are projected orthogonally onto the plane and rendered accordingly. The two structure types are visualized with two different color scales (Figure 1a). The scales visualize the distance between a given structure and the plane. The scale ends are equivalent to a minimum and maximum probing depth that can be set for different applications. The scale hues were selected based on two criteria: Firstly, we investigated which hues provide good contrast visibility in front of laparoscopic videos. Secondly, the choice of yellow for urinary tracts and blue-magenta for blood vessels is consistent with conventions in anatomical illustrations and should be intuitive for medical professionals. For the urinary tract, color brightness and transparency are varied across the spectrum. For the blood vessels, color hue, brightness, and transparency are used. These color spectra aim to combine the color gradient and fog concepts that were identified as promising approaches by Kersten-Oertel et al. [21]. An example of the resulting visualization (using a printed kidney phantom) is provided in Figure 1b. The blue line marks the measured tool axis.
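As an illustration of such a depth-to-color scale, the sketch below linearly interpolates an RGBA value between two endpoint colors over the configurable probing-depth range. The endpoint colors and the 0-20 mm range are assumptions for illustration, not the prototype's exact gradient values:

```python
def depth_to_rgba(depth_mm, d_min, d_max, near_rgba, far_rgba):
    """Linearly interpolate an RGBA color along the probing-depth scale.

    Depths are clamped to [d_min, d_max]; colors are 0-255 RGBA tuples.
    """
    t = (min(max(depth_mm, d_min), d_max) - d_min) / (d_max - d_min)
    return tuple(round(a + t * (b - a)) for a, b in zip(near_rgba, far_rgba))

# Assumed blue-magenta scale for blood vessels (illustrative endpoint values).
VESSEL_NEAR = (255, 0, 255, 255)  # magenta, opaque at the probing plane
VESSEL_FAR = (0, 0, 102, 204)     # dark blue, more transparent at max depth

mid = depth_to_rgba(10.0, 0.0, 20.0, VESSEL_NEAR, VESSEL_FAR)
```

Structures right at the plane render fully saturated and opaque, while deeper structures fade toward the darker, more transparent end of the scale.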
|
| 30 |
+
|
| 31 |
+
### 2.2 Audio navigation
|
| 32 |
+
|
| 33 |
+
After iterative preliminary designs were evaluated informally with 12 participants, an auditory display consisting of two contrasting sounds was developed to represent the structures. The sound of running water was selected to represent the collecting system, and a synthesized tone was created to represent the vessels. The size and number of the vessels in the scanning area are encoded in a three-level density score. Density is then mapped to the water pressure for the collecting system and to the tone's pitch for the vessels, with higher pressure and pitch indicating a denser structure. Finally, the rhythm of each sound conveys the distance between the instrument tip and the closest point on the targeted structure, with a faster rhythm representing a smaller distance. To express the density of the collecting system, the water pressure is manipulated to produce three conditions (low, medium, and high pressure) representing low, medium, and high density. The water sound is triggered every 250, 500, and 2000 ms for the distance classes inside, close, and far, respectively.
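The distance and density mappings described above can be sketched as simple threshold functions. The text specifies only the classes and their intervals, so the distance-class boundaries and density cut-offs below are illustrative assumptions:

```python
def water_inter_onset_ms(distance_mm, close_mm=5.0, far_mm=20.0):
    """Map tool-to-structure distance to the water sound's trigger interval.

    The three distance classes (inside / close / far) and their intervals
    (250 / 500 / 2000 ms) follow the design above; the class boundary
    close_mm is an illustrative assumption.
    """
    if distance_mm <= 0.0:       # tool tip inside the structure
        return 250
    if distance_mm <= close_mm:  # close to the structure
        return 500
    return 2000                  # far away: near-continuous flow

def density_level(structure_pixels, low_max=50, medium_max=200):
    """Three-level density score from the number of structure pixels
    in the scanning area (cut-offs are illustrative assumptions)."""
    if structure_pixels <= low_max:
        return "low"
    if structure_pixels <= medium_max:
        return "medium"
    return "high"
```

The density level then selects the water pressure (collecting system) or the tone pitch (vessels), while the inter-onset interval drives the rhythm.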
|
| 34 |
+
|
| 35 |
+
|
| 36 |
+
|
| 37 |
+
(a) Color spectrum for blood vessels (top) and urinary tract (bottom). The color values are in RGBA format.
|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
|
| 41 |
+
(b) Laparoscopic view of a printed kidney phantom with the visual AR overlay.
|
| 42 |
+
|
| 43 |
+
Figure 1: AR visualization.
|
| 44 |
+
|
| 45 |
+
A distant structure resembles an uninterrupted flow of water, and a nearby structure is heard as rhythmic splashes. Inside the structure, a rapid splashing rhythm is accompanied by an alert sound.
|
| 46 |
+
|
| 47 |
+
### 2.3 Prototype implementation
|
| 48 |
+
|
| 49 |
+
We implemented our overall software prototype and its visualization in Unity 2018 (Unity Software, USA). The auditory display was implemented using Pure Data [25].
|
| 50 |
+
|
| 51 |
+
#### 2.3.1 Augmented reality infrastructure
|
| 52 |
+
|
| 53 |
+
The laparoscopic video stream was generated with an EinsteinVision® 3.0 laparoscope (B. Braun Melsungen AG, Germany) with a 30° optic in monoscopic mode. We used standard laparoscopic graspers as a pointing tool. The camera head and the tool were tracked with an NDI Polaris Spectra passive infrared tracking camera (Northern Digital Inc., Canada). We calibrated the laparoscopic camera based on a pinhole model [31] as implemented in the OpenCV library¹ [7]. We used a pattern of ChArUco markers [12] for the camera calibration. The external camera parameters (i.e., the spatial transformation between the laparoscope's tracking markers and the camera position) were determined with a spatially tracked calibration body. The spatial transformation between the tool's tracking markers and its tip was determined with pivot calibration using the NDI Toolbox software (Northern Digital Inc.). The rotational transformation between the tracking markers and the tool axis was measured with our calibration body. The resulting laparoscopic video stream, with or without AR overlay, was displayed on a 24-inch screen. AR registration for this surgical phase was outside the scope of this study. The kidney registration was based on the predefined spatial transformation between our kidney phantom and its tracking geometry (see Study setup).
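The pivot calibration mentioned here is a standard least-squares procedure: while the tool tip rests on a fixed pivot point, tracked marker poses (R_i, t_i) are recorded, and the constant tip offset is solved for. A minimal sketch of this procedure (independent of the NDI Toolbox implementation):

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Least-squares pivot calibration sketch.

    Given tracked marker poses (R_i, t_i) recorded while the tool tip stays
    on a fixed pivot point, every pose satisfies R_i @ p_tip + t_i = p_pivot.
    Stacking [R_i | -I] @ [p_tip; p_pivot] = -t_i over all poses and solving
    in the least-squares sense yields the tip offset p_tip (in marker
    coordinates) and the pivot position p_pivot (in tracker coordinates).
    """
    A = np.vstack([np.hstack([R, -np.eye(3)]) for R in rotations])
    b = -np.concatenate(translations)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # p_tip, p_pivot
```

At least two distinct orientations are needed for the system to be solvable; in practice, the tool is swiveled through many poses to average out tracking noise.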
|
| 54 |
+
|
| 55 |
+
---
|
| 56 |
+
|
| 57 |
+
¹ We used the commercially available OpenCV for Unity package (Enox Software, Japan).
|
| 58 |
+
|
| 59 |
+
---
|
| 60 |
+
|
| 61 |
+
#### 2.3.2 AR visualization implementation
|
| 62 |
+
|
| 63 |
+
The circular plane was placed at the tooltip, perpendicular to the tool's axis as provided by the real-time tracking data. The registration between the visualization and the camera was provided by the abovementioned tool and camera calibrations and the real-time tracking data. The plane was then overlaid with a mesh with a rectangular vertex arrangement. The vertices had a density of 64 pts/mm² and served as virtual pixels. We conducted a ray-casting request for each vertex. For each ray that hit the surface mesh of a structure in our virtual model, the respective vertex was colored according to the type and ray collision distance of that structure. The visualization was permanently activated in our study prototype.
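The per-vertex ray casting can be sketched as follows. For self-containment, a sphere stands in for the anatomical surface meshes (the prototype casts against triangle meshes), and the plane-spanning vectors are derived directly from the tool direction:

```python
import numpy as np

def probe_plane_depths(tip, direction, sphere_center, sphere_radius,
                       plane_radius_mm=10.0, pts_per_mm=8):
    """Sketch of the per-vertex ray casting on the probing plane.

    A square grid with pts_per_mm vertices per axis (i.e., 64 pts/mm^2) is
    laid over the circular plane at the tool tip; every vertex inside the
    circle casts a ray along the tool direction. Returns a dict mapping
    in-plane grid coordinates to hit distance (np.inf where the ray misses).
    """
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    # Two unit vectors spanning the plane perpendicular to the tool axis.
    u = np.cross(d, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:          # tool axis parallel to z: fall back
        u = np.cross(d, [0.0, 1.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(d, u)
    coords = np.arange(-plane_radius_mm, plane_radius_mm, 1.0 / pts_per_mm)
    depths = {}
    for gx in coords:
        for gy in coords:
            if gx * gx + gy * gy > plane_radius_mm ** 2:
                continue                   # vertex outside the circular plane
            origin = np.asarray(tip, float) + gx * u + gy * v
            # Ray-sphere intersection along d (nearest root of the quadratic).
            oc = origin - np.asarray(sphere_center, float)
            half_b = oc @ d
            disc = half_b ** 2 - (oc @ oc - sphere_radius ** 2)
            t = -half_b - np.sqrt(disc) if disc >= 0 else np.inf
            depths[(gx, gy)] = t if t >= 0 else np.inf
    return depths
```

Each finite entry would then be colored via the depth-to-color scale of the respective structure type, while misses leave the virtual pixel transparent.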
|
| 64 |
+
|
| 65 |
+
#### 2.3.3 Auditory display implementation
|
| 66 |
+
|
| 67 |
+
The synthesized tone contrasts with the water sound to ensure distinction between the sounds. The synthesized sound is created from the base frequencies of 65.4 Hz, 130.8 Hz, and 261.6 Hz (C2, C3, and C4 notes) and harmonized by each frequency's first to eighth harmonics, creating a complex tone. The density of the vessels is measured with ray-casting requests that are equivalent to the visual implementation. The number of virtual pixels that would depict a given structure type determines the density for that type. This density is then encoded in the pitch of the tone, meaning that 65.4 Hz, 130.8 Hz, and 261.6 Hz represent low, medium, and high density, respectively. The repetition time of the tones expresses the distance between the instrument tip and the closest point on the targeted vessel. Similar to the water sound, a continuous tone represents a far-away vessel, while a close vessel is heard as the tone being repeated every 500 ms with a duration of 400 ms. Being inside the vessel triggers an alert sound played every 125 ms, accompanied by the tone every 250 ms.
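A minimal sketch of such a complex tone, summing the first to eighth harmonics of one base frequency. The 1/n amplitude roll-off is an illustrative assumption, since the actual harmonic weighting of the Pure Data patch is not specified above:

```python
import numpy as np

def complex_tone(base_hz, duration_s=0.4, sample_rate=44100):
    """Synthesize the density tone: one base frequency (65.4, 130.8, or
    261.6 Hz for low/medium/high density) plus its harmonics 1-8.

    Each harmonic n is weighted 1/n (an assumed roll-off); the result is
    normalized to the range [-1, 1] for playback.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    tone = sum(np.sin(2 * np.pi * base_hz * n * t) / n for n in range(1, 9))
    return tone / np.max(np.abs(tone))
```

Selecting `base_hz` from {65.4, 130.8, 261.6} then realizes the pitch-based density encoding, while the 400 ms duration matches the repeated-tone case described above.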
|
| 68 |
+
|
| 69 |
+
## 3 EVALUATION METHODS
|
| 70 |
+
|
| 71 |
+
We conducted a simulated-use proof-of-concept evaluation study with N = 11 participants to investigate whether our concepts effectively support the urologists in locating subsurface structures in laparoscopic surgery.
|
| 72 |
+
|
| 73 |
+
### 3.1 Study task
|
| 74 |
+
|
| 75 |
+
The specific challenges of identifying relevant subsurface structures for suture placement in resection wound repair are difficult to replicate in a laboratory setting. We devised a study task that aimed to imitate the identification of specific structures beneath an organ surface: Participants were presented with a printed kidney phantom in a simulated laparoscopic environment. We also displayed a 3D model of the same kidney on a 24-inch screen. This virtual model included surface meshes of the vessel tree and collecting system inside that kidney (Figure 2). Participants could manipulate the view of that model by panning, rotating, and zooming. For each study trial, we marked a point on the internal structures (a blood or urine vessel) in the virtual model with a red dot (Figure 2). The target points were arranged into four clusters to prevent familiarization with the target structures throughout the experiment. The participants were then asked to point the surgical tool at the location of that subsurface point in the physical phantom as accurately and as quickly as possible by placing the tool on the surface and orienting it such that the tool's direction pointed towards the internal target point.
|
| 76 |
+
|
| 77 |
+
### 3.2 Study design
|
| 78 |
+
|
| 79 |
+
Our study investigated the impact of the visual and auditory support on the performance and perceived workload of the navigation task. We examined two independent variables with two levels each (2 × 2 design): the presence or absence of the visual support and the presence or absence of the auditory support. The condition in which neither support modality was present was the control condition. Three dependent variables were measured and analyzed: Firstly, we measured the task completion time. Time started counting when the target point was displayed and stopped when participants gave a verbal cue that they were confident they were pointing at the target as accurately as possible. Secondly, we measured how accurately they pointed the tool. Accuracy was measured as the closest distance between the tool's axis and the target point (point-to-ray distance). Finally, we used the NASA Task Load Index (NASA-TLX) [14] questionnaire as an indicator of the perceived workload. The NASA-TLX questionnaire is based on six contributing dimensions of subjectively perceived workload. The weighted ratings for each dimension are combined into an overall workload score.
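The accuracy metric, the point-to-ray distance, can be computed by projecting the target point onto the tool axis, treated as a ray starting at the tool tip. A short sketch:

```python
import numpy as np

def point_to_ray_distance(point, ray_origin, ray_direction):
    """Closest distance between a target point and a ray.

    The ray models the tool's axis: it starts at the tool tip (ray_origin)
    and extends along the pointing direction. The projection parameter is
    clamped to zero so that points behind the tip measure to the tip itself.
    """
    d = np.asarray(ray_direction, float)
    d = d / np.linalg.norm(d)
    v = np.asarray(point, float) - np.asarray(ray_origin, float)
    t = max(v @ d, 0.0)  # clamp: the axis starts at the tip
    return float(np.linalg.norm(v - t * d))
```

For example, a target 1 mm off a ray that otherwise passes it yields a distance of exactly that offset, independent of how far along the ray the closest point lies.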
|
| 80 |
+
|
| 81 |
+
|
| 82 |
+
|
| 83 |
+
Figure 2: Virtual kidney model with the target point clusters. The model is shown from a medial-anterior perspective, corresponding to the participant's position.
|
| 84 |
+
|
| 85 |
+
### 3.3 Study sample
|
| 86 |
+
|
| 87 |
+
Eleven participants took part in our study (six females, five males). All participants were medical students between their third and fifth year of training, aged between 24 and 33 years (median = 25 years). All participants were right-handed. Four participants reported between one and five hours of experience with laparoscopic interaction (median = 3 h), and seven participants reported between one and 15 hours of AR experience (median = 2 h). Finally, eight participants reported being trained in playing a musical instrument. No participants reported any untreated vision or hearing impairments.
|
| 88 |
+
|
| 89 |
+
### 3.4 Study setup
|
| 90 |
+
|
| 91 |
+
The virtual kidney model and its physical phantom were created from a public database of abdominal computed tomography imaging data [16]. We segmented a healthy left kidney using 3D Slicer [11] and exported the parenchymal surface, the vessel tree, and the urinary collecting system as separate surface models. The parenchymal surface model was printed with the fused deposition modeling method and equipped with an adapter for passive tracking markers (Figure 3a). The phantom was placed in a cardboard box to simulate a laparoscopic working environment (Figure 3b). The screen with the laparoscopic video stream was placed opposite the participant and the screen with the virtual model viewer was placed to the participant's right. A mouse was provided to interact with the model viewer and a standard commercial multimedia speaker was included for the auditory display. The overall study setup is shown in Figure 4.
|
| 92 |
+
|
| 93 |
+
(a) Kidney phantom with tracking marker adapter. (b) Cardboard box with tool holes.
|
| 94 |
+
|
| 95 |
+
Figure 3: Components of the simulated laparoscopic environment.
|
| 96 |
+
|
| 97 |
+
### 3.5 Study procedure
|
| 98 |
+
|
| 99 |
+
Participants' written consent and demographic data were collected upon arrival. The participants then received an introduction to the visualization and the auditory display. Participants conducted one trial block per navigation method. In each trial block, they were asked to locate the three points of one cluster, with one trial per point. After each trial block, a NASA-TLX questionnaire was completed for the respective navigation method. The order of the navigation methods and the assignment between the point clusters and the navigation methods were counterbalanced. The order in which the points had to be located within each trial block was permuted.
|
| 100 |
+
|
| 101 |
+
### 3.6 Data analysis
|
| 102 |
+
|
| 103 |
+
During initial data exploration, we noticed a trend that participants took more time to complete the task in the first trial they attempted with each method than in the second and third trials. Therefore, the first trial for each method and participant was regarded as a training trial and excluded from the analysis. The data (time and accuracy) from the remaining two trials from each block were averaged and a repeated-measures two-way analysis of variance (ANOVA) was conducted for each dependent variable.
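The trial exclusion and averaging step can be sketched as follows; the subsequent repeated-measures ANOVA is omitted, and the example values are hypothetical:

```python
from statistics import mean

def block_means(trials_by_block):
    """Preprocessing for the analysis described above: in every trial
    block, drop the first (training) trial and average the remaining
    trials. The per-block means then feed the repeated-measures
    two-way ANOVA (not shown here)."""
    return {block: mean(values[1:]) for block, values in trials_by_block.items()}

# Hypothetical task times (s) of one participant, three trials per block.
times = {
    "control": [40.2, 28.0, 30.0],
    "visual": [55.0, 35.0, 37.0],
}
means = block_means(times)
```

Note how the slow first trial of each block (the training trial) never enters the per-block mean.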
|
| 104 |
+
|
| 105 |
+
## 4 RESULTS
|
| 106 |
+
|
| 107 |
+
The descriptive results for the three dependent variables are listed in Table 1. We found significant main effects of the visual display on the accuracy (p < 0.001) and the NASA-TLX rating (p = 0.03). The ANOVA results are listed in Table 2. The significant effects are plotted in Figure 5.
|
| 108 |
+
|
| 109 |
+

|
| 110 |
+
|
| 111 |
+
Figure 4: Overall study setup.
|
| 112 |
+
|
| 113 |
+
## 5 DISCUSSION
|
| 114 |
+
|
| 115 |
+
### 5.1 Discussion of results
|
| 116 |
+
|
| 117 |
+
The most evident result from our evaluation is that the visual display increases the accuracy and reduces the perceived workload of identifying subsurface vascular and urinary structures in our simplified task. At the same time, the visual display did not reduce the task completion time. Generally, there were non-significant trends that all tested conditions with visual or auditory display performed more accurately and caused a lower perceived workload than the control condition. However, the navigation support conditions tended to perform less quickly than the control condition. This may be because the required mental spatial transformations are reduced, but a greater amount of information needs to be processed by the participants.
|
| 118 |
+
|
| 119 |
+
This explanation is also supported by the result that the combined auditory and visual display performed worse than the visual-only condition within our sample. While this trend is not statistically significant, it poses a question: Were the auditory display designs somewhat misleading or distracting, or is combining multimodal channels for the same information in itself potentially hindering in this task? The trend of auditory support performing slightly better than the control condition within our sample (without significance) may indicate that the latter explanation is more likely. Another aspect may be users' lower familiarity with auditory navigation than with visual cues. Further training and greater participant experience may also reduce the difference in performance between the visual and auditory navigation aids.
|
| 120 |
+
|
| 121 |
+
The AR visualization was well suited for the abstracted task in our proof-of-concept evaluation. In the clinical context, a semitransparent display of our visualization may be better suited to prevent occlusion of the relevant surgical area. This occlusion can further be reduced by providing a means to interactively activate or deactivate the visualization.
|
| 122 |
+
|
| 123 |
+
Finally, the absolute values we measured for our dependent variables are less meaningful than the comparative effects we found for our navigation conditions. Multiple design factors limit the clinical validity of our study, including the exclusion of a registration pipeline. This means that the absolute task time or pointing error may well deviate from the reported descriptive results.
### 5.2 General discussion
Our tests yielded successful preliminary proof-of-concept results for the audiovisual AR support for resection wound repair. The results indicate that audio guidance may be helpful, but this benefit could not be statistically confirmed within our sample. However, there are limitations to the clinical validity of our prototypes and our study setup.
Table 1: Descriptive results for all dependent variables. All entries are in the format <mean value (standard deviation)>.
| Navigation condition | Task completion time [s] | Accuracy [mm] | NASA-TLX |
| --- | --- | --- | --- |
| No support | 29.92 (18.59) | 12.54 (4.21) | 14.14 (2.18) |
| Auditory support | 41.44 (22.9) | 9.69 (5.14) | 12.93 (3.46) |
| Visual support | 36.02 (19.21) | 4.39 (3.49) | 10.87 (3.86) |
| Auditory and visual support | 38.12 (22.48) | 6.45 (5.08) | 11.93 (3.3) |
Table 2: ANOVA results for all variables. AD: Auditory display, VD: Visual display. All cells are in the format <F value (degrees of freedom); p value>.
| Dependent variable | Main effect AD | Main effect VD | Interaction AD:VD |
| --- | --- | --- | --- |
| Task completion time | 1.41 (1,10); 0.263 | 0.17 (1,10); 0.688 | 1.47 (1,10); 0.253 |
| Accuracy | 0.11 (1,10); 0.748 | 28.01 (1,10); <0.001* | 2.67 (1,10); 0.133 |
| NASA-TLX | 0.01 (1,10); 0.911 | 6.35 (1,10); 0.03* | 1.47 (1,10); 0.253 |
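For a two-level within-subject factor, the repeated-measures ANOVA main-effect F statistic with F(1, n-1) degrees of freedom equals the squared paired t statistic on the per-participant level means. A minimal sketch of this computation, using synthetic illustrative data (not the study's measurements; the actual analysis was presumably run in standard statistics software):

```python
import math

def rm_main_effect_F(cond_a, cond_b):
    """F(1, n-1) for a two-level within-subject factor.

    cond_a, cond_b: per-participant mean scores for the two factor
    levels (e.g., with vs. without visual display, averaged over the
    other factor). For a two-level within factor, the ANOVA F equals
    the squared paired t statistic on the difference scores.
    """
    n = len(cond_a)
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)
    t = mean_d / se
    return t * t, (1, n - 1)

# Synthetic pointing errors for 11 participants, averaged over the audio factor:
without_vd = [12.1, 10.4, 13.8, 9.9, 11.5, 12.7, 10.8, 13.1, 11.9, 12.3, 10.1]
with_vd = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 3.9, 6.2, 4.4, 5.0, 4.7]
F, df = rm_main_effect_F(without_vd, with_vd)
```

With 11 participants, this yields the F(1, 10) degrees of freedom reported in Table 2.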
(a) Visual display main effect on the pointing accuracy. (b) Visual display main effect on the NASA-TLX rating.
Figure 5: Significant ANOVA main effects. The error bars represent standard errors.
First and foremost, the study task is an abstraction of the actual surgical task: The surgical task requires not only the identification of major subsurface structures but also the judgment and selection of a suture path. This task limitation went along with an abstract laparoscopic environment and surgical site. Our kidney phantom imitated an in-vivo kidney only in its geometric properties. The color, biomechanical behavior, and surgical surroundings did not resemble their real clinical equivalents. Moreover, the phantom was simplified in that it was based on an intact kidney rather than containing a resection wound. While this simplification is an additional limitation to our study's clinical validity, we believe that introducing a phantom with a resection bed will only be meaningful in combination with a more complex simulated task. This is because, in a realistic setting, the urologist will be familiar with the wound and aware of potential landmarks (like intentionally severed vessels) to help navigate. This would not have been the case in our simplified task and for our participants. One further step toward a more realistic phantom may be the simulation of the intraoperative deformation. This may be achieved by producing one preoperative phantom and one intraoperative phantom based on simulated intraoperative deformation (e.g., using the Simulation Open Framework Architecture [10]).
Another aspect to improve the clinical validity of our evaluation could be a more realistic task. The most valid performance parameter, however, would be the frequency of suture setting errors. Because these are not very frequent, the study would require a large sample consisting of experienced urologists. This is logistically challenging. We, therefore, regard our preliminary evaluation as a good first indication for the aptitude of our navigation support methods.
Future evaluation with a more realistic phantom and task should include the overlay of AR structures on the simulated resection wound as an (additional) reference condition. Moreover, AR registration was excluded from our study's scope to focus the investigation on the tested information presentation methods. A dedicated registration method for post-resection AR has been previously proposed [reference anonymized for peer review]. This could be combined with the dedicated AR concepts reported in this article for future, high-fidelity evaluations.
The participants were medical students with limited laparoscopic experience: They were less trained in the spatial cognitive processes involved in laparoscopic navigation than the experienced urologists who would be the intended users of a support system like ours. Thus, the navigation methods presented in this article need to be further evaluated in clinically realistic settings. This may include testing on an in-vivo or ex-vivo porcine or human kidney. Such testing, however, requires effective AR registration, which is complicated by time pressure (for in-vivo settings) or postmortem deformation (for ex-vivo settings).
Beyond more clinically valid evaluation, some other research questions arise from our work: Firstly, some design iteration and comparison should be implemented to evaluate whether the limited success of our auditory display was due to the specific designs or due to a limited aptitude of the auditory modality for such information. Secondly, further visualizations should be developed and compared with our first proposal to identify an ideal information visualization. Finally, it should be investigated whether other procedures with soft tissue resection (e.g., liver or brain surgery) may benefit from similar navigation support systems for the resection wound repair.
## 6 CONCLUSION
This work introduces and tests an audiovisual AR concept to support urologists during the resection wound repair phase in LPN/RPN. To our knowledge, these are the first dedicated solutions that have been proposed for this particular challenge. These concepts have been preliminarily evaluated in a laboratory-based study with an abstracted task. Although the results only represent a proof-of-concept evaluation, we believe that they indicate the potential of our concepts. The next steps for this work include the integration of a targeted AR registration solution and the integrated prototype's evaluation in a clinically realistic setting. Pending this work, we believe that the concepts presented in this article sketch a promising path to a clinically meaningful AR navigation system for minimally invasive, oncological resection wound repair.
## ACKNOWLEDGMENTS
Funding information anonymized for peer review.
## REFERENCES
[1] A. Amir-Khalili, G. Hamarneh, J.-M. Peyrat, J. Abinahed, O. Al-Alao, A. Al-Ansari, and R. Abugharbieh. Automatic segmentation of occluded vasculature via pulsatile motion analysis in endoscopic robot-assisted partial nephrectomy video. Medical Image Analysis, 25(1):103-110, 2015. doi: 10.1016/j.media.2015.04.010

[2] S. Bernhardt, S. Nicolau, L. Soler, and C. Doignon. The status of augmented reality in laparoscopic surgery as of 2016. Medical Image Analysis, 37:66-90, 2017.

[3] R. Bertolo, A. Hung, F. Porpiglia, P. Bove, M. Schleicher, and P. Dasgupta. Systematic review of augmented reality in urological interventions: the evidences of an impact on surgical outcomes are yet to come. World Journal of Urology, 38(9):2167-2176, 2019. doi: 10.1007/s00345-019-02711-z

[4] D. Black, C. Hansen, A. Nabavi, R. Kikinis, and H. Hahn. A survey of auditory display in image-guided interventions. International Journal of Computer Assisted Radiology and Surgery, 12(10):1665-1676, 2017. doi: 10.1007/s11548-017-1547-z

[5] D. Black, J. Hettig, M. Luz, C. Hansen, R. Kikinis, and H. Hahn. Auditory feedback to support image-guided medical needle placement. International Journal of Computer Assisted Radiology and Surgery, 12(9):1655-1663, 2017. doi: 10.1007/s11548-017-1537-1

[6] D. Black, S. Lilge, C. Fellmann, A. V. Reinschluessel, L. Kreuer, A. Nabavi, H. K. Hahn, R. Kikinis, and J. Burgner-Kahrs. Auditory display for telerobotic transnasal surgery using a continuum robot. Journal of Medical Robotics Research, 4(2):1950004, 2019. doi: 10.1142/S2424905X19500041

[7] G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, (25):120-125, 2000.

[8] B. Cho, M. Oka, N. Matsumoto, R. Ouchida, J. Hong, and M. Hashizume. Warning navigation system using real-time safe region monitoring for otologic surgery. International Journal of Computer Assisted Radiology and Surgery, 8(3):395-405, 2013. doi: 10.1007/s11548-012-0797-z

[9] B. J. Dixon, M. J. Daly, H. Chan, A. Vescan, I. J. Witterick, and J. C. Irish. Augmented real-time navigation with critical structure proximity alerts for endoscopic skull base surgery. The Laryngoscope, 124(4):853-859, 2014. doi: 10.1002/lary.24385

[10] F. Faure, C. Duriez, H. Delingette, J. Allard, B. Gilles, S. Marchesseau, H. Talbot, H. Courtecuisse, G. Bousquet, I. Peterlik, and S. Cotin. SOFA: A multi-model framework for interactive physical simulation. In Y. Payan, ed., Soft Tissue Biomechanical Modeling for Computer Assisted Surgery, vol. 11 of Studies in Mechanobiology, Tissue Engineering and Biomaterials, pp. 283-321. Springer, 2012. doi: 10.1007/8415_2012_125

[11] A. Fedorov, R. Beichel, J. Kalpathy-Cramer, J. Finet, J.-C. Fillion-Robin, S. Pujol, C. Bauer, D. Jennings, F. Fennessy, M. Sonka, J. Buatti, S. Aylward, J. V. Miller, S. Pieper, and R. Kikinis. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magnetic Resonance Imaging, 30(9):1323-1341, 2012. doi: 10.1016/j.mri.2012.05.001

[12] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6):2280-2292, 2014. doi: 10.1016/j.patcog.2014.01.005

[13] C. Hansen, D. Black, C. Lange, F. Rieber, W. Lamadé, M. Donati, K. J. Oldhafer, and H. K. Hahn. Auditory support for resection guidance in navigated liver surgery. The International Journal of Medical Robotics and Computer Assisted Surgery, 9(1):36-43, 2013. doi: 10.1002/rcs.1466

[14] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock and N. Meshkati, eds., Human Mental Workload, vol. 52 of Advances in Psychology, pp. 139-183. North-Holland, Amsterdam, 1988. doi: 10.1016/S0166-4115(08)62386-9

[15] F. Heinrich, G. Schmidt, F. Jungmann, and C. Hansen. Augmented reality visualisation concepts to support intraoperative distance estimation. In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, VRST '19. ACM, New York, NY, USA, 2019. doi: 10.1145/3359996.3364818

[16] N. Heller, N. Sathianathen, A. Kalapara, E. Walczak, K. Moore, H. Kaluzniak, J. Rosenberg, P. Blake, Z. Rengel, M. Oestreich, J. Dean, M. Tradewell, A. Shah, R. Tejpaul, Z. Edgerton, M. Peterson, S. Raza, S. Regmi, N. Papanikolopoulos, and C. Weight. The KiTS19 Challenge Data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes.

[17] A. Hughes-Hallett, P. Pratt, E. Mayer, S. Martin, A. Darzi, and J. Vale. Image guidance for all-TilePro display of 3-dimensionally reconstructed images in robotic partial nephrectomy. Urology, 84(1):237-242, 2014. doi: 10.1016/j.urology.2014.02.051

[18] E. S. Hyams, M. Perlmutter, and M. D. Stifelman. A prospective evaluation of the utility of laparoscopic Doppler technology during minimally invasive partial nephrectomy. Urology, 77(3):617-620, 2011. doi: 10.1016/j.urology.2010.05.011

[19] F. Joeres, D. Schindele, M. Luz, S. Blaschke, N. Russwinkel, M. Schostak, and C. Hansen. How well do software assistants for minimally invasive partial nephrectomy meet surgeon information needs? A cognitive task analysis and literature review study. PLoS ONE, 14(7):e0219920, 2019. doi: 10.1371/journal.pone.0219920

[20] A. Joshi, X. Qian, D. P. Dione, K. R. Bulsara, C. K. Breuer, A. J. Sinusas, and X. Papademetris. Effective visualization of complex vascular structures using a non-parametric vessel detection method. IEEE Transactions on Visualization and Computer Graphics, 14(6):1603-1610, 2008. doi: 10.1109/TVCG.2008.123

[21] M. Kersten-Oertel, S. J.-S. Chen, and D. L. Collins. An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery. IEEE Transactions on Visualization and Computer Graphics, 20(3):391-403, 2014. doi: 10.1109/TVCG.2013.240

[22] M. Kitagawa, D. Dokko, A. M. Okamura, and D. D. Yuh. Effect of sensory substitution on suture-manipulation forces for robotic surgical systems. The Journal of Thoracic and Cardiovascular Surgery, 129(1):151-158, 2005. doi: 10.1016/j.jtcvs.2004.05.029

[23] K. Nakamura, Y. Naya, S. Zenbutsu, K. Araki, S. Cho, S. Ohta, N. Nihei, H. Suzuki, T. Ichikawa, and T. Igarashi. Surgical navigation using three-dimensional computed tomography images fused intraoperatively with live video. Journal of Endourology, 24(4):521-524, 2010. doi: 10.1089/end.2009.0365

[24] F. Porpiglia, E. Checcucci, D. Amparore, F. Piramide, G. Volpi, S. Granato, P. Verri, M. Manfredi, A. Bellin, P. Piazzolla, R. Autorino, I. Morra, C. Fiori, and A. Mottrie. Three-dimensional augmented reality robot-assisted partial nephrectomy in case of complex tumours (PADUA >= 10): A new intraoperative tool overcoming the ultrasound guidance. European Urology, 78(2):229-238, 2019. doi: 10.1016/j.eururo.2019.11.024

[25] M. Puckette. Pure Data: Another integrated computer music environment. Proceedings of the Second Intercollege Computer Music Concerts, (1):37-41, 1996.

[26] T. Ropinski, F. Steinicke, and K. Hinrichs. Visually supporting depth perception in angiography imaging. In A. Butz, ed., Smart Graphics, vol. 4073 of Lecture Notes in Computer Science, pp. 93-104. Springer, Berlin, 2006. doi: 10.1007/11795018_9

[27] R. Singla, P. Edgcumbe, P. Pratt, C. Nguan, and R. Rohling. Intra-operative ultrasound-based augmented reality guidance for laparoscopic surgery. Healthcare Technology Letters, 4(5):204-209, 2017. doi: 10.1049/htl.2017.0063

[28] R. A. Steenblik. The chromostereoscopic process: A novel single image stereoscopic process. In SPIE Proceedings, p. 27. SPIE, 1987. doi: 10.1117/12.940117

[29] S. Tobis, J. Knopf, C. Silvers, J. Yao, H. Rashid, G. Wu, and D. Golijanin. Near infrared fluorescence imaging with robotic assisted laparoscopic partial nephrectomy: initial clinical experience for renal cortical tumors. The Journal of Urology, 186(1):47-52, 2011. doi: 10.1016/j.juro.2011.02.2701

[30] D. Wang, B. Zhang, X. Yuan, X. Zhang, and C. Liu. Preoperative planning and real-time assisted navigation by three-dimensional individual digital model in partial nephrectomy with three-dimensional laparoscopic system. International Journal of Computer Assisted Radiology and Surgery, 10(9):1461-1468, 2015. doi: 10.1007/s11548-015-1148-7

[31] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22:1330-1334, 2000.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/e2xKwBi2RIq/Initial_manuscript_tex/Initial_manuscript.tex
§ AUDIOVISUAL AR CONCEPTS FOR LAPAROSCOPIC SUBSURFACE STRUCTURE NAVIGATION
Authors anonymized for peer review.
§ ABSTRACT
Background: The identification of subsurface structures during resection wound repair is a challenge during minimally invasive partial nephrectomy. Specifically, major blood vessels and branches of the urinary collecting system need to be localized under time pressure as target or risk structures during suture placement. Methods: This work presents concepts for AR visualization and auditory guidance based on tool position that support this task. We evaluated the concepts in a laboratory user study with a simplified, simulated task: the localization of subsurface target points in a healthy kidney phantom. We evaluated the task time, localization accuracy, and perceived workload for our concepts and a control condition without navigation support. Results: The AR visualization improved the accuracy and perceived workload over the control condition. We observed similar, non-significant trends for the auditory display. Conclusions: Further clinically realistic evaluation is pending. Our initial results indicate the potential benefits of our concepts in supporting laparoscopic resection wound repair.
Index Terms: Human-centered computing - Visualization; Human-centered computing - Human computer interaction (HCI) - Interaction paradigms - Mixed / augmented reality; Human-centered computing - Human computer interaction (HCI) - Interaction devices - Sound-based input / output; Applied computing - Life and medical sciences
§ 1 INTRODUCTION
§ 1.1 MOTIVATION
The field of augmented reality (AR) for laparoscopic surgery has inspired broad research over the past decade [2]. This research aims to alleviate the challenges that are associated with the indirect access in such operations. One challenging operation that has attracted much attention from the research community is laparoscopic or robot-assisted partial nephrectomy (LPN/RPN) [17, 19]. LPN/RPN is the standard treatment for early-stage renal cancer. The operation's objective is to remove the intact (i.e., entire) tumor from the kidney while preserving as much healthy kidney tissue as possible. Three challenging phases in this operation can particularly benefit from image guidance or AR navigation support [19]: i) the management of renal blood vessels before the tumor resection, ii) the intraoperative resection planning and the resection, iii) the repair of the resection wound after the tumor removal. Although numerous solutions have been proposed to support urologists during the first two phases [19], no dedicated AR solutions exist for the third. Specifically, urologists need to identify major blood vessels or branches of the urinary collecting system that have been severed or that lie closely under the resection wound's surface and could be damaged during suturing. One additional challenging factor is that this surgical phase is performed under time pressure due to the risk of ischemic damage or increased blood loss (depending on the vascular clamping strategy). There are some technical challenges in providing correct AR registration and meaningful navigation support during this phase. One challenge that affects the visualization of AR information is that the removal of renal tissue volume leaves an undefined tissue surface inside the original organ borders.
In this work, we present an AR visualization and an auditory display concept that rely on the position of a tracked surgical tool to support the urologist in identifying and locating subsurface structures. We also report a preliminary proof-of-concept evaluation through a user study with an abstracted task. AR registration and the clinical evaluation of our concepts lie outside the scope of this work.
§ 1.2 RELATED WORK
Multiple reviews provide a comprehensive overview of navigation support approaches for LPN/RPN [3, 17, 19]. Although no dedicated solutions exist to support urologists during the resection wound repair phase, one application has been reported in which the general AR model of intrarenal structures was used during renorrhaphy [24]. This approach, however, does not address the unknown resection wound surface geometry and potential occlusion issues. Moreover, multiple solutions have been proposed to visualize intrarenal vascular structures. These include solutions in which a preoperative model of the vascular structure is rendered in an AR overlay [23, 30]. This may be less informative after an unknown tissue volume has been resected. Other methods rely on real-time detection of subsurface vessels [1, 18, 29]. However, these are unlikely to perform well when the vessels are clamped (suppressing blood flow and pulsation) or when the organ surface is occluded by blood. Outside of LPN/RPN, such as in angiography exploration, visualization methods have been developed to communicate the spatial arrangement of vessels. These include the chromadepth [28] and pseudo-chromadepth methods [20, 26], which map vessel depth information to color hue gradients. Kersten-Oertel et al. [21] showed that color hue mapping, along with contrast grading, performs well in conveying depth information for vascular structures. The visualization of structures based on tool position has inspired work both inside and outside of the field of LPN/RPN: Singla et al. [27] proposed visualizing the tool position in relation to the tumor prior to resection in LPN/RPN. Multiple visualizations have been proposed for the spatial relationship between surgical needles and the surrounding vasculature [15]. However, these visualizations address minimally invasive needle interventions, where the instrument moves between the structures of interest.
In addition to visual approaches supporting LPN/RPN and other navigated applications, recent work has shown that sound can augment or replace visual cues to aid task completion. With so-called auditory display, changes in a set of navigation parameters are mapped to changes in the parameters of a real-time sound synthesizer. A familiar example is the automobile parking assistance system: the distance of the automobile to a surrounding object is mapped to the inter-onset interval (i.e., the time between tones) of a simple synthesizer. Auditory display has been motivated by the desire to increase clinician awareness, to replace the lost sense of touch when using teleoperated devices, or to help clinicians correctly interpret and follow navigation paths. There have been, however, relatively few applications of auditory display in medical navigation. Evaluations have been performed for radiofrequency ablation [5], temporal bone drilling [8], skull base surgery [9], soft tissue resection [13], and telerobotic surgery [6, 22]. These have shown auditory display to improve recognition of structure distance, increase accuracy, and diminish cognitive workload and rates of clinical complication. Disadvantages have included increased non-target tissue removal and longer task completion times. For a thorough overview of auditory display in medical interventions, see [4].
§ 2 NAVIGATION METHODS
We pursued two routes to provide navigation content to the urologist: The first approach is the AR visualization of preoperative anatomical information in a video see-through setting. The second approach is an auditory display.
§ 2.1 AR VISUALIZATION
Our AR concept aims to provide information about intrarenal risk structures to the urologists. We, therefore, based our visualization on preoperative three-dimensional (3D) image data of the intrarenal vasculature and collecting system. These were segmented and exported as surface models. We assumed that the resection volume and resulting wound geometry are unknown. Simply overlaying the preoperative models onto the laparoscopic video stream would include all risk structures that were resected with the resection volume. We, therefore, propose a tool-based visualization. In this concept, only information about risk structures in front of a pointing tool is rendered and overlaid onto the video stream. To this end, the urologist can place a spatially tracked pointing tool on the newly created organ surface (i.e., resection ground) and see the risk structures beneath. We placed a virtual circular plane perpendicular to the tool axis with a diameter of 20 mm around the tooltip. The structures in front of this plane (following the tool direction) are projected orthogonally onto the plane and rendered accordingly. The two different structure types are visualized with two different color scales (Figure 1a). The scales visualize the distance between a given structure and the plane. The scale ends are equivalent to a minimum and maximum probing depth that can be set for different applications. The scale hues were selected based on two criteria: Firstly, we investigated which hues provide good contrast visibility in front of laparoscopic videos. Secondly, the choice of yellow for urinary tracts and blue-magenta for blood vessels is consistent with conventions in anatomical illustrations and should be intuitive for medical professionals. For the urinary tract, color brightness and transparency are changed across the spectrum. For the blood vessels, color hue, brightness, and transparency are used. These color spectra aim to combine the color gradient and fog concepts that were identified as promising approaches by Kersten-Oertel et al. [21]. An example of the resulting visualization (using a printed kidney phantom) is provided in Figure 1b. The blue line marks the measured tool axis.
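The depth-to-color mapping described above can be sketched as a linear interpolation over the probing-depth range. The endpoint colors and the 0-20 mm range below are illustrative assumptions, not the exact scale values used in the prototype:

```python
def depth_to_rgba(depth_mm, near_color, far_color, d_min=0.0, d_max=20.0):
    """Linearly interpolate an RGBA tuple over the probing-depth range.

    depth_mm: distance from the virtual plane to the structure.
    near_color / far_color: RGBA endpoints of the scale (0-1 floats);
    the actual hues in the paper differ per structure type (yellow
    for the urinary tract, blue-magenta for blood vessels).
    Depths outside [d_min, d_max] are clamped to the scale ends.
    """
    t = max(0.0, min(1.0, (depth_mm - d_min) / (d_max - d_min)))
    return tuple(n + t * (f - n) for n, f in zip(near_color, far_color))

# Example: a vessel scale fading from opaque magenta toward translucent blue.
magenta = (1.0, 0.0, 1.0, 1.0)
blue = (0.0, 0.0, 1.0, 0.2)
shallow = depth_to_rgba(2.0, magenta, blue)   # near the surface: mostly magenta
deep = depth_to_rgba(18.0, magenta, blue)     # near max depth: mostly blue
```

Varying transparency together with hue reproduces the combined color-gradient and fog idea from [21] in a single interpolation.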
§ 2.2 AUDIO NAVIGATION
After iterative preliminary designs were evaluated informally with 12 participants, an auditory display consisting of two contrasting sounds was developed to represent the structures. The sound of running water was selected to represent the collecting system, and a synthesized tone was created to represent the vessels. The size and number of the vessels in the scanning area are encoded in a three-level density score. Density is then mapped to the water pressure for the collecting system and to the tone's pitch for vessels, with higher pressure and pitch indicating a denser structure. Finally, the rhythm of each tone is a translation of the distance between the instrument tip and the closest point on the targeted structure, with a faster rhythm representing lesser distance. To express the density of the collecting system, the water pressure is manipulated to produce three conditions, i.e., low, medium, and high pressure, representing low, medium, and high density. The water tone is triggered every 250, 500, and 2000 ms, depending on the distance category: inside, close, and far.
(a) Color spectrum for blood vessels (top) and urinary tract (bottom). The color values are in RGBA format.
(b) Laparoscopic view of a printed kidney phantom with the visual AR overlay.
Figure 1: AR visualization.
A distant structure resembles an uninterrupted flow of water, and a nearby structure is heard as rhythmic splashes. Inside the structure, a rapid splashing rhythm is accompanied by an alert sound.
§ 2.3 PROTOTYPE IMPLEMENTATION
We implemented our overall software prototype and its visualization in Unity 2018 (Unity Software, USA). The auditory display was implemented using Pure Data [25].
§ 2.3.1 AUGMENTED REALITY INFRASTRUCTURE
The laparoscopic video stream was generated with an EinsteinVision® 3.0 laparoscope (B. Braun Melsungen AG, Germany) with a 30° optic in monoscopic mode. We used standard laparoscopic graspers as a pointing tool. The camera head and the tool were tracked with an NDI Polaris Spectra passive infrared tracking camera (Northern Digital Inc., Canada). We calibrated the laparoscopic camera based on a pinhole model [31] as implemented in the OpenCV library¹ [7]. We used a pattern of ChArUco markers [12] for the camera calibration. The external camera parameters (i.e., the spatial transformation between the laparoscope's tracking markers and the camera position) were determined with a spatially tracked calibration body. The spatial transformation between the tool's tracking markers and its tip was determined with pivot calibration using the NDI Toolbox software (Northern Digital Inc.). The rotational transformation between the tracking markers and the tool axis was measured with our calibration body. The resulting laparoscopic video stream, with or without AR overlay, was displayed on a 24-inch screen. AR registration for this surgical phase was outside the scope of this study. The kidney registration was based on the predefined spatial transformation between our kidney phantom and its tracking geometry (see Study setup).
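The pivot calibration mentioned above amounts to a small linear least-squares problem: while the tool is rotated about a fixed pivot, each tracked pose (R_i, p_i) maps the unknown marker-to-tip offset to the same pivot point. A sketch of this solve, assuming poses are given as rotation matrices and translations in the tracking frame (the prototype used the NDI Toolbox rather than a custom solver):

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Least-squares pivot calibration.

    Each pose satisfies  R_i @ t_tip + p_i = p_pivot,  so stacking all
    poses yields the linear system  [R_i  -I] @ [t_tip; p_pivot] = -p_i
    in the six unknowns (tip offset in the marker frame, pivot point
    in the tracker frame).
    """
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, p) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, :3] = R
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)
        b[3 * i:3 * i + 3] = -np.asarray(p)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # (tip offset, pivot location)
```

At least two poses with sufficiently different orientations are required; in practice, many poses sampled while pivoting the tool average out tracking noise.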
¹ We used the commercially available OpenCV for Unity package (Enox Software, Japan).
§ 2.3.2 AR VISUALIZATION IMPLEMENTATION
The circular plane was placed at the tooltip, perpendicular to the tool's axis as provided by the real-time tracking data. The registration between the visualization and the camera was provided by the above-mentioned tool and camera calibrations and the real-time tracking data. The plane was then overlaid with a mesh with a rectangular vertex arrangement. The vertices had a density of 64 pts/mm² and served as virtual pixels. We conducted a ray-casting request for each vertex. For each ray that hit the surface mesh of the structures in our virtual model, the respective vertex was colored according to the type and ray collision distance of that structure. The visualization was permanently activated in our study prototype.
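The per-vertex ray-casting step can be sketched as follows. For illustration, a single sphere stands in for the segmented surface meshes, and the tool axis is assumed to point along +z (both hypothetical simplifications; the prototype ray-casts against the structure models in the tracked tool frame). A spacing of 0.125 mm between vertices reproduces the 64 pts/mm² density, and radius_mm=10 corresponds to the 20 mm plane diameter:

```python
import math

def probe_plane(tip, radius_mm=10.0, step_mm=0.125, sphere=((0.0, 0.0, 15.0), 4.0)):
    """Cast one ray per virtual pixel of the circular probing plane.

    tip: tooltip position; rays start on the plane through the tip and
    travel along +z. sphere: (center, radius) stand-in for a vessel
    surface mesh. Returns {grid index: hit depth in mm}; the depth is
    what the prototype feeds into the color scale for that vertex.
    """
    hits = {}
    steps = int(radius_mm / step_mm)
    (cx, cy, cz), r = sphere
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            x = tip[0] + i * step_mm
            y = tip[1] + j * step_mm
            if (x - tip[0]) ** 2 + (y - tip[1]) ** 2 > radius_mm ** 2:
                continue  # vertex lies outside the circular plane
            # Closed-form ray/sphere intersection for a +z ray:
            dx, dy = x - cx, y - cy
            disc = r * r - dx * dx - dy * dy
            if disc < 0:
                continue  # ray misses the sphere laterally
            t = (cz - tip[2]) - math.sqrt(disc)
            if t >= 0:
                hits[(i, j)] = t
    return hits
```

In the prototype, each returned depth is mapped to a color of the corresponding structure type, and vertices without a hit stay transparent.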

§ 2.3.3 AUDITORY DISPLAY IMPLEMENTATION

The synthesized tone contrasts with the water sound to ensure that the two sounds remain distinguishable. The synthesized sound is created from the base frequencies of 65.4 Hz, 130.8 Hz, and 261.6 Hz (the C2, C3, and C4 notes) and enriched with each frequency's first to eighth harmonics, creating a complex tone. The density of the vessels is measured with ray-casting requests equivalent to those of the visual implementation. The number of virtual pixels that would depict a given structure type determines the density for that type. This density is then encoded in the pitch of the tone, meaning that 65.4 Hz, 130.8 Hz, and 261.6 Hz represent low, medium, and high density, respectively. The repetition time of the tones expresses the distance between the instrument tip and the closest point on the targeted vessel. Similar to the water sound, a continuous tone represents a far-away vessel, while a close vessel is heard as the tone being repeated every 500 ms with a duration of 400 ms. Being inside the vessel triggers an alert sound played every 125 ms, accompanied by the tone every 250 ms.
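A minimal sketch of the described tone synthesis, assuming a 44.1 kHz sample rate (not stated in the text): harmonics are summed over the base frequency, and proximity is encoded by gating the tone into 400 ms bursts repeated every 500 ms:

```python
import numpy as np

SR = 44100  # sample rate in Hz; an assumption, not stated in the paper

def complex_tone(base_hz, duration_s, n_harmonics=8):
    """A tone built from a base frequency and its first n harmonics,
    with amplitudes falling off as 1/k, normalized to [-1, 1]."""
    t = np.arange(int(SR * duration_s)) / SR
    tone = sum(np.sin(2 * np.pi * base_hz * k * t) / k
               for k in range(1, n_harmonics + 1))
    return tone / np.max(np.abs(tone))

def repeated_tone(base_hz, period_s=0.5, tone_s=0.4, total_s=2.0):
    """Encode proximity by repetition: a tone_s-long burst every period_s."""
    out = np.zeros(int(SR * total_s))
    burst = complex_tone(base_hz, tone_s)
    step = int(SR * period_s)
    for start in range(0, len(out) - len(burst) + 1, step):
        out[start:start + len(burst)] = burst
    return out
```

For example, `repeated_tone(65.4)` produces the "close vessel" pattern for the low-density pitch; a far-away vessel would instead use one continuous `complex_tone`.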

§ 3 EVALUATION METHODS

We conducted a simulated-use proof-of-concept evaluation study with N = 11 participants to investigate whether our concepts effectively support urologists in locating subsurface structures in laparoscopic surgery.

§ 3.1 STUDY TASK

The specific challenges of identifying relevant subsurface structures for suture placement in resection wound repair are difficult to replicate in a laboratory setting. We devised a study task that aimed to imitate the identification of specific structures beneath an organ surface: Participants were presented with a printed kidney phantom in a simulated laparoscopic environment. We also displayed a 3D model of the same kidney on a 24-inch screen. This virtual model included surface meshes of the vessel tree and collecting system inside that kidney (Figure 2). Participants could manipulate the view of that model by panning, rotating, and zooming. For each study trial, we marked a point on the internal structures (a blood or urine vessel) in the virtual model with a red dot (Figure 2). The target points were arranged into four clusters to prevent familiarization with the target structures throughout the experiment. The participants were then asked to point the surgical tool at the location of that subsurface point in the physical phantom as accurately and as quickly as possible by placing the tool on the surface and orienting it such that the tool's direction pointed towards the internal target point.

§ 3.2 STUDY DESIGN

Our study investigated the impact of the visual and auditory support on the performance and perceived workload of the navigation task. We examined two independent variables with two levels each (2 × 2 design): the presence or absence of the visual support and the presence or absence of the auditory support. The condition in which neither support modality was present was the control condition. Three dependent variables were measured and analyzed: First, we measured the task completion time. Time started counting when the target point was displayed. It stopped when participants gave a verbal cue that they were confident they were pointing at the target as accurately as possible. Second, we measured how accurately they pointed the tool. Accuracy was measured as the closest distance between the tool's axis and the target point (point-to-ray distance). Finally, we used the NASA Task Load Index (NASA-TLX) [14] questionnaire as an indicator of the perceived workload. The NASA-TLX questionnaire is based on six contributing dimensions of subjectively perceived workload. The weighted ratings for each dimension are combined into an overall workload score.
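The accuracy metric can be computed directly from the tracking data. A small sketch of the point-to-ray distance, treating the tool axis as a ray starting at the tool tip:

```python
import numpy as np

def point_to_ray_distance(target, origin, direction):
    """Closest distance between a 3D point and a ray defined by an
    origin (the tool tip) and a direction vector (the tool axis)."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)                      # unit direction
    target = np.asarray(target, float)
    origin = np.asarray(origin, float)
    t = max(np.dot(target - origin, d), 0.0)       # ray, not infinite line
    closest = origin + t * d
    return np.linalg.norm(target - closest)
```

For a target 1 mm off a tool axis pointing straight at it, `point_to_ray_distance([0, 1, 5], [0, 0, 0], [0, 0, 1])` yields 1.0.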
Figure 2: Virtual kidney model with the target point clusters. The model is shown from a medial-anterior perspective, corresponding to the participant's position.

§ 3.3 STUDY SAMPLE

Eleven participants took part in our study (six female, five male). All participants were medical students between their third and fifth year of training. Participants were aged between 24 and 33 years (median = 25 years). All participants were right-handed. Four participants reported between one and five hours of experience with laparoscopic interaction (median = 3 h) and seven participants reported between one and 15 hours of AR experience (median = 2 h). Finally, eight participants reported being trained in playing a musical instrument. No participants reported any untreated vision or hearing impairments.

§ 3.4 STUDY SETUP

The virtual kidney model and its physical phantom were created from a public database of abdominal computed tomography imaging data [16]. We segmented a healthy left kidney using 3D Slicer [11] and exported the parenchymal surface, the vessel tree, and the urinary collecting system as separate surface models. The parenchymal surface model was printed with the fused deposition modeling method and equipped with an adapter for passive tracking markers (Figure 3a). The phantom was placed in a cardboard box to simulate a laparoscopic working environment (Figure 3b). The screen with the laparoscopic video stream was placed opposite the participant and the screen with the virtual model viewer was placed to the participant's right. A mouse was provided to interact with the model viewer and a standard commercial multimedia speaker was included for the auditory display. The overall study setup is shown in Figure 4.
Figure 3: Components of the simulated laparoscopic environment.

§ 3.5 STUDY PROCEDURE

Participants' written consent and demographic data were collected upon arrival. The participants then received an introduction to the visualization and auditory display of the data. Participants conducted one trial block per navigation method. In each trial block, they were asked to locate the three points of one cluster, with one trial per point. After each trial block, one NASA-TLX questionnaire was completed for the respective navigation method. The order of the navigation methods and the assignment between the point clusters and the navigation methods were counterbalanced. The order in which the points had to be located within each trial block was permuted.

§ 3.6 DATA ANALYSIS

During initial data exploration, we noticed a trend that participants took more time to complete the task in the first trial they attempted with each method than in the second and third trials. Therefore, the first trial for each method and participant was regarded as a training trial and excluded from the analysis. The data (time and accuracy) from the remaining two trials from each block were averaged and a repeated-measures two-way analysis of variance (ANOVA) was conducted for each dependent variable.
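For a 2 × 2 within-subject design, the main-effect F statistic for either factor can be obtained from per-subject contrasts (with one numerator degree of freedom, it equals the squared paired t statistic); in practice a statistics package such as statsmodels' `AnovaRM` would be used. A minimal NumPy sketch:

```python
import numpy as np

def rm_main_effect(cell_means):
    """Main effect of factor A in a 2x2 repeated-measures design.
    cell_means: array of shape (n_subjects, 2, 2) holding each
    subject's mean score per (A-level, B-level) cell. Returns (F, df)
    with df = (1, n - 1)."""
    cell_means = np.asarray(cell_means, float)
    n = cell_means.shape[0]
    # Per-subject contrast: level A1 minus level A0, averaged over B
    contrast = cell_means[:, 1, :].mean(axis=1) - cell_means[:, 0, :].mean(axis=1)
    t = contrast.mean() / (contrast.std(ddof=1) / np.sqrt(n))
    return t ** 2, (1, n - 1)
```

In the study's analysis, the cells would hold each participant's averaged time or accuracy per auditory/visual condition, and the same contrast logic (applied to the A×B interaction term) completes the two-way ANOVA.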

§ 4 RESULTS

The descriptive results for the three dependent variables are listed in Table 1. We found significant main effects of the presence of the visual display on accuracy (p < 0.001) and on the NASA-TLX rating (p = 0.03). The ANOVA results are listed in Table 2. The significant effects are plotted in Figure 5.
Figure 4: Overall study setup.

§ 5 DISCUSSION

§ 5.1 DISCUSSION OF RESULTS

The most evident result from our evaluation is that the visual display increases the accuracy and reduces the perceived workload of identifying subsurface vascular and urinary structures in our simplified task. At the same time, the visual display did not reduce the task completion time. Generally, all conditions with visual or auditory support showed non-significant trends toward higher accuracy and lower perceived workload. However, the navigation support conditions tended to be slower than the control condition. This may be because the support reduces the required mental spatial transformations but increases the amount of information participants need to process.
This explanation is also supported by the result that the combined auditory and visual display performed worse than the visual-only condition within our sample. While this trend is not statistically significant, it poses a question: were the auditory display designs somewhat misleading or distracting, or is the combination of multimodal channels for the same information in itself potentially hindering in this task? The trend of auditory support performing slightly better than the control condition within our sample (not significant) may indicate that the latter explanation is more likely. Another aspect may be users' lower familiarity with auditory navigation cues than with visual ones. Further training and greater participant experience may also reduce the difference in performance between the visual and auditory navigation aids.
The AR visualization was well suited for the abstracted task in our proof-of-concept evaluation. In the clinical context, a semitransparent display of our visualization may be better suited to prevent occlusion of the relevant surgical area. This occlusion can further be reduced by providing a means to interactively activate or deactivate the visualization.
Finally, the absolute values we measured for our dependent variables are less meaningful than the comparative effects we found for our navigation conditions. Multiple design factors limit the clinical validity of our study, including the exclusion of a registration pipeline. This means that the absolute task time or pointing error may well deviate from the reported descriptive results.

§ 5.2 GENERAL DISCUSSION

Our tests yielded preliminary, successful proof-of-concept results for the audiovisual AR support for resection wound repair. The results indicate that audio guidance may be helpful, but its benefit could not be shown to be statistically significant within our sample. However, there are limitations to the clinical validity of our prototypes and our study setup.
Table 1: Descriptive results for all dependent variables. All entries are given as mean (standard deviation).

| Navigation condition | Task completion time [s] | Accuracy [mm] | NASA-TLX |
| --- | --- | --- | --- |
| No support | 29.92 (18.59) | 12.54 (4.21) | 14.14 (2.18) |
| Auditory support | 41.44 (22.9) | 9.69 (5.14) | 12.93 (3.46) |
| Visual support | 36.02 (19.21) | 4.39 (3.49) | 10.87 (3.86) |
| Auditory and visual support | 38.12 (22.48) | 6.45 (5.08) | 11.93 (3.3) |

Table 2: ANOVA results for all variables. AD: auditory display, VD: visual display. All cells are given as F value (degrees of freedom); p value.

| Dependent variable | Main effect AD | Main effect VD | Interaction AD:VD |
| --- | --- | --- | --- |
| Task completion time | 1.41 (1, 10); 0.263 | 0.17 (1, 10); 0.688 | 1.47 (1, 10); 0.253 |
| Accuracy | 0.11 (1, 10); 0.748 | 28.01 (1, 10); <0.001* | 2.67 (1, 10); 0.133 |
| NASA TLX | 0.01 (1, 10); 0.911 | 6.35 (1, 10); 0.03* | 1.47 (1, 10); 0.253 |

Figure 5: Significant ANOVA main effects. The error bars represent standard errors.
First and foremost, the study task is an abstraction of the actual surgical task: the surgical task requires not only the identification of major subsurface structures but also the judgment and selection of a suture path. This task limitation went along with an abstracted laparoscopic environment and surgical site. Our kidney phantom imitated an in-vivo kidney only in its geometric properties. The color, biomechanical behavior, and surgical surroundings did not resemble their real clinical equivalents. Moreover, the phantom was simplified in that it was based on an intact kidney rather than containing a resection wound. While this simplification is an additional limitation to our study's clinical validity, we believe that introducing a phantom with a resection bed will only be meaningful in combination with a more complex simulated task. This is because, in a realistic setting, the urologist will be familiar with the wound and aware of potential landmarks (like intentionally severed vessels) to help navigate. This would not have been the case in our simplified task and for our participants. One further step in improving the phantom for increased realism may be the simulation of the deformation that occurs during surgery. This may be achieved by producing one preoperative phantom and one intraoperative phantom based on simulated intraoperative deformation (e.g., using the Simulation Open Framework Architecture [10]).
Another aspect to improve the clinical validity of our evaluation could be a more realistic task. The most valid performance parameter, however, would be the frequency of suture setting errors. Because these are not very frequent, the study would require a large sample consisting of experienced urologists. This is logistically challenging. We, therefore, regard our preliminary evaluation as a good first indication for the aptitude of our navigation support methods.
Future evaluation with a more realistic phantom and task should include the overlay of AR structures on the simulated resection wound as an (additional) reference condition. Moreover, AR registration was excluded from our study's scope to focus the investigation on the tested information presentation methods. A dedicated registration method for post-resection AR has been previously proposed [reference anonymized for peer review]. This could be combined with the dedicated AR concepts reported in this article for future, high-fidelity evaluations.
The participants were medical students with limited laparoscopic experience: they were less trained in the spatial cognitive processes involved in laparoscopic navigation than the experienced urologists who would be the intended users of a support system like ours. Thus, the navigation methods presented in this article will need to be further evaluated in clinically realistic settings. This may include testing on an in-vivo or ex-vivo human or porcine kidney. This, however, requires an effective AR registration, which is complicated by time pressure (in vivo) or postmortem deformation (ex vivo).
Beyond more clinically valid evaluation, some other research questions arise from our work: Firstly, some design iteration and comparison should be implemented to evaluate whether the limited success of our auditory display was due to the specific designs or due to a limited aptitude of the auditory modality for such information. Secondly, further visualizations should be developed and compared with our first proposal to identify an ideal information visualization. Finally, it should be investigated whether other procedures with soft tissue resection (e.g., liver or brain surgery) may benefit from similar navigation support systems for the resection wound repair.

§ 6 CONCLUSION

This work introduces and tests an audiovisual AR concept to support urologists during the resection wound repair phase in LPN/RPN. To our knowledge, these are the first dedicated solutions that have been proposed for this particular challenge. These concepts have been preliminarily evaluated in a laboratory-based study with an abstracted task. Although the results only represent a proof-of-concept evaluation, we believe that they indicate the potential of our concepts. The next steps for this work include the integration of a targeted AR registration solution and the integrated prototype's evaluation in a clinically realistic setting. Pending this work, we believe that the concepts presented in this article sketch a promising path to a clinically meaningful AR navigation system for minimally invasive, oncological resection wound repair.

§ ACKNOWLEDGMENTS

Funding information anonymized for peer review.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/iEoccQSFFsM/Initial_manuscript_md/Initial_manuscript.md
# Exploring Sketch-based Character Design Guided by Automatic Colorization
Category: Research
Figure 1: Our character exploration tool facilitates the character design process by allowing artists to explore characters using colored thumbnails synthesized from sketches. These colored thumbnails, which are traditionally rough grey-scale sketches, better visualize the character for creating the turnaround sheet.

## Abstract

Character design is a lengthy process, requiring artists to iteratively alter their characters' features and colorization schemes according to feedback from creative directors or peers. Artists experiment with multiple colorization schemes before deciding on the right color palette. This process may necessitate several tedious manual re-colorizations of the character. Any substantial changes to the character's appearance may also require manual re-colorization. Such complications motivate a computational approach for visualizing characters and drafting solutions.
We propose a character exploration tool that automatically colors a sketch based on a selected style. The tool employs a Generative Adversarial Network trained to automatically color sketches. The tool also allows a selection of faces to be used as a baseline for the character's design. We validated our tool by comparing it with using Photoshop for character exploration in our pilot study. Finally, we conducted a study to evaluate our tool's efficacy within the design pipeline.
Index Terms: Human-centered computing-Human computer interaction (HCI)—Interaction paradigms—Graphical user interfaces

## 1 INTRODUCTION

Fig. 1 illustrates a typical character design process. At the very beginning of the process, the designer is furnished with a character description that outlines a combination of personality (e.g., courageous, melancholic) and physical traits (e.g., long hair, small frame) [7, 31]. Their first task is then to sketch out the character's distinguishing expressions and physical features into a thumbnail, which is often a rough, low-resolution gray-scale sketch. From the thumbnail the designer then develops a character turnaround sheet, a reference for later drawing the character in context. The turnaround sheet is then presented to the creative director for feedback, and the entire process iterates. Because the ideation and creation of a turnaround sheet are manual processes, the artist often has to restart from scratch.
We devised a tool driven by a sketching interaction that automatically colors the character thumbnail, enabling artists and their creative directors to do more early exploration with less investment of effort. In practice, these thumbnails may also be used as references for producing the turnaround sheet in higher resolution using Photoshop. Fig. 1 shows an example of a colored character thumbnail synthesized using our tool.
Our novel tool can generate these colored character thumbnails based on artists' sketches, color, and character face selections. Specifically, we achieve this by training a Generative Adversarial Network (GAN) on an anime dataset. We used the GAN to generate the colored thumbnails as the artists sketch, while also allowing them to place characters' faces and select their colorization style. This selection-based automatic colorization framework was able to significantly speed up the character exploration process compared to using Photoshop for participants in our user study, without sacrificing quality. The major contributions of our work include:
- Proposing a novel generative character exploration tool by training a GAN to automatically color sketches.
- Allowing artists using our tool to select character faces and colorization style, as well as to edit the character by directly sketching on the canvas.
- Validating the effectiveness of our tool in facilitating the character exploration process compared to Photoshop via a number of design tasks.

## 2 RELATED WORK

### 2.1 Sketch-based Interactions

Similar to our approach, several works have explored using sketching as an interaction technique in different contexts.
We draw inspiration from several works that utilized sketching as an animation interaction. Kazi et al. [25, 26] created an interface to allow users to animate their 2D sketches, while Guay et al. [11] presented a novel technique to animate 3D characters' motion using a single stroke. Storeoboard [15] allows filmmakers to sketch stereoscopic storyboards to visualize the depth of their scenes.
Our approach aims to incorporate sketching into a 2D design process, while several works aim to examine sketch interactions in 3D design. Saul et al. [37] created a design system for chair fabrication. Xu et al. [49] introduced a model-guided 3D sketching tool which allows designers to redesign existing 3D models. Huang et al. [16] created a sketch-based user interface design system. ILoveSketch [3], a curve sketching system, allows designers to iterate directly on their 3D designs. Sketch-based interaction techniques in Augmented Reality were explored by the HCI community as well [2, 29, 44].
Several works explored using sketching to design cartoon characters specifically. Sketch2Manga [33] creates characters from sketches. Unlike our approach, which uses a generative method to output a character from a sketch, it uses image retrieval to match the query with a character from the database. Han et al. [12] introduced a deep learning method to create 3D caricatures from an input 2D sketch. Because the generated caricatures take the form of a texture-less 3D model, we opted to use a network architecture that enabled the generation of 2D images and control of their colorization style.
With our tool, we aim to improve the traditional design process for artists. Similarly, Jacobs et al. [20] introduced a tool which allows artists to create dynamic procedural brushes by varying the rotation, reflection and style of their strokes. Moreover, Vignette [27] is an interactive tool which allows artists to create custom textures, and automatically fill selected regions of their illustrations with these textures.

### 2.2 Image Generation

Recently, generative modeling approaches have emerged as a powerful, data-driven approach for directly mapping sketches into images. Isola et al. [18] show that conditional GANs are an effective general-purpose tool for image-to-image translation problems and can be applied to mapping sketches to images. The sketch-to-image problem is also inherently ambiguous, as different colors and "styles" can be used for multiple plausible completions. Follow-up works [17, 50] introduce extensions to enable multiple predictions. We find that for our task BicycleGAN [50] is able to effectively generate colored character illustrations from edge maps due to its multimodality. One challenge is the difficulty in obtaining real sketches. Methods such as [5, 9, 40] use generative models to generate sketches themselves. We find that "synthesized" sketches based on edge maps, with some carefully selected preprocessing choices, are adequate for our application.
Using the style selector in our tool, artists can choose the colorization scheme of their characters. Similarly, Color Sails [39] is a tool which allows coloring designs from a discrete-continuous color palette defined by the user. Tan et al. [42] developed a tool to allow real-time image palette editing. Zou et al. [51] introduced a language-based scene colorization tool. Xiang et al. [47] explored the style space of anime characters by training a style encoder which effectively encodes images into the style space, such that the distance of their codes in the space corresponds to the similarity of their artists' styles.
In later work, Xiang et al. [48] developed a Generative Adversarial Disentanglement Network which can incorporate independent style and content codes. This allows separate control over the style and content of the image, enabling faithful image generation with proper style-specific facial features (e.g., eyes, mouth, chin, hair, blushes, highlights, contours) as well as overall color saturation and contrast. The neural transfer methods used by Xiang et al. [47, 48] do not transfer facial features consistently (e.g., they may transfer the mouth from some images but not all). Therefore, we allow artists to control facial features only via the sketch canvas and/or face selector instead of using the neural transfer methods of Xiang et al. Nonetheless, due to its effectiveness, we still used neural transfer to control the sketch's colorization.

### 2.3 Character Design

EmoG [38] is a character design tool introduced to facilitate storyboarding. EmoG generates facial expressions according to the user's emotion selection and sketch. Akin to our approach, users can drag and drop a facial expression onto the canvas in addition to drawing directly on the canvas. Unlike our approach, EmoG renders no colorization suggestions to the user and focuses on facilitating the drafting of characters' emotional expressions rather than their overall appearance.
Figure 2: Overview of our approach. To begin, an artist may place a face from the face selector onto the design canvas. The artist then directly sketches on the design canvas. If a style is selected, the sketch will be automatically colored by the GAN with the selected style. Otherwise, the tool suggests a random colorization scheme.
MakeGirlsMoe [21] is a tool that helps artists brainstorm by allowing them to select facial features to automatically generate a character illustration. However, it has an unnatural, discrete selection-based interaction compared to interfaces that allow the user to illustrate by sketching. MakeGirlsMoe was updated to create the crypto-currency generator Crypko [6]. Neither framework was available to us during the user evaluation, so they were not compared to our tool. PaintsChainer [35] automatically colors sketches based on the artist's color hints in the form of brush strokes on top of the sketch. It colors a completed line art that a user uploads, though it does not allow the user to modify the character by placing or editing expressions and features onto the canvas, nor does it allow the user to start from a blank canvas and iteratively sketch a character. Consequently, these tools neglect the need for a sketch-based iterative tool that combines both a feature selection-based interaction and automatic colorization. Hence, we developed an interactive character design tool equipped with a face selector, colorization style selector, and sketching canvas to fulfill that need.
Auto-colorization features were introduced in commercial software like Adobe Illustrator and Clip Studio Paint. However, Adobe Illustrator is limited to coloring black-and-white photographs. On the other hand, Clip Studio Paint can color cartoons, but like PaintsChainer, it can only color completed line art.
## 3 OVERVIEW
Fig. 2 shows an overview of our approach. We trained a GAN on an edges-to-character dataset obtained by extracting the edgemaps of colored anime characters. The GAN learned to produce a colored anime character illustration from a sketch. Using the GAN, we built a framework that enables character exploration by letting a user select and place facial features as well as sketch on a canvas. As the user edits the canvas, the GAN automatically colors their illustrations according to the styles they select. Finally, we demonstrated the effectiveness of our tool by conducting a user study comparing our tool with Adobe Photoshop.
## 4 DATA PROCESSING
We obtained our training and validation image pairs from the anime-face character dataset [34]. We used an automated process described below to extract edge maps from the face images, creating our edges-to-character dataset.
Figure 3: To synthesize artist sketches we used an edge operator on our dataset images. (a) A character image sampled from our training set. The edgemaps were created by applying the DoG filter with (b) $\sigma = {0.3}$ and (c) $\sigma = {0.5}$ .
Animeface Dataset. The animeface character dataset [34] contains a total of 12,213 face images. We randomly selected 10,992 of these images (roughly 90%) for the training set. The remaining 10% (1,221 images) served as a validation set to monitor the progress of training the GAN.
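
The split above can be reproduced with a seeded random permutation (the seed and index-based bookkeeping are our own illustration, not taken from the paper):

```python
import numpy as np

# Shuffle the 12,213 image indices and split 10,992 / 1,221 (train / val).
rng = np.random.default_rng(0)  # seed is an arbitrary choice for illustration
indices = rng.permutation(12213)
train_idx, val_idx = indices[:10992], indices[10992:]
```
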
Edgemaps. Because character datasets that pair sketches with their colored counterparts are costly to obtain, we used an edge detector on the dataset images to simulate sketches. The standard Difference of Gaussians (DoG) filter has been used successfully in several works to synthesize line drawings [10, 22, 30, 46], and unlike the eXtended difference-of-Gaussians (xDoG) filter [45] it does not tend to fill dark regions. We created the edgemaps of our training and validation images by converting them to grayscale and then applying the DoG filter with $\gamma = 10^9$ and $k = 4.5$ (see Fig. 3). The value of $\sigma$ was randomly selected from $\{0.3, 0.4, 0.5\}$ for each image to allow for variation in the amount of noise in the edgemaps.
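
A minimal sketch of this edge-extraction step, assuming a plain DoG response with a hard threshold (the very large $\gamma$ in the text is consistent with a tanh-style soft threshold that is effectively binary, so a hard threshold is used here; the threshold `eps` is our own assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edgemap(gray, sigma, k=4.5, eps=0.01):
    """Binary line drawing from a grayscale image in [0, 1].

    DoG response d = G_sigma - G_(k*sigma); pixels on the dark side of an
    edge (d < -eps) become black lines and everything else stays white,
    so uniform dark regions are not filled in.
    """
    g1 = gaussian_filter(gray, sigma)
    g2 = gaussian_filter(gray, k * sigma)
    d = g1 - g2
    return np.where(d < -eps, 0.0, 1.0)

# sigma would be drawn per image from {0.3, 0.4, 0.5} to vary edgemap noise.
```
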
Image Processing. The images in the animeface dataset have a maximum size of 160px in either dimension and various aspect ratios, while our GAN training process expects images sized exactly $256 \times 256$px. To match these requirements, we uniformly scaled the animeface faces to fit using bilinear interpolation. For non-square aspect ratios, we filled the rest of the square canvas by repeating edge pixels as shown in Fig. 4. We chose this repetition fill rather than a solid background color so that the network would not learn to reproduce a solid border.
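
The repetition fill can be expressed with `np.pad` in `edge` mode (whether the fill is one-sided, as below, or split across both sides is our assumption):

```python
import numpy as np

def pad_to_square(img, size=256):
    """Fill a size x size canvas by replicating border pixels (cf. Fig. 4).

    Assumes img was already uniformly scaled so that max(h, w) == size.
    """
    h, w = img.shape[:2]
    pads = ((0, size - h), (0, size - w)) + ((0, 0),) * (img.ndim - 2)
    return np.pad(img, pads, mode="edge")  # repeats the outermost row/column
```
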
Figure 4: The arrow depicts the direction of replicating the border pixels when the (a) height or (b) width is the smallest dimension.
## 5 CHARACTER COLORIZATION MODEL TRAINING
To generate each colored character image from its paired edgemap image in our dataset, we used BicycleGAN from Zhu et al. [50].
Architecture. We train the network on the $256 \times 256$ paired images from our training edges-to-character dataset. For our encoder, we found that the ResNet [14] encoder explored by Zhu et al. helped reduce the number of generated images containing artifacts. We use a U-Net [36] generator and PatchGAN [19] discriminators.
In preliminary experiments, we found that changing the dimension of the latent code $\left| \mathbf{z}\right|$ produces different results. A latent code with too many dimensions leads to variation in the background style instead of the character colorization style, while too few dimensions lead to inadequate variation in the character colorization style. Ultimately, we found a latent code of size 8 to work well empirically. GANs are known to "collapse" when training lasts too long [4]. We also noticed that after 71 epochs, colorization fidelity improves at the expense of style variation as the GAN starts over-fitting the training data. Because our tool is meant to explore character designs and produce low-resolution sample thumbnails, we opted to halt training at 71 epochs to maintain variation in style and avoid overfitting.
Training. We inherit many of the default parameters and practices of BicycleGAN: $\lambda_{\text{image}} = 10$, $\lambda_{\text{latent}} = 0.5$ and $\lambda_{\mathrm{KL}} = 0.01$. We trained for 71 epochs using Adam [28] with batch size 1 and learning rate 0.0002. We updated the generator once for each discriminator update, while the encoder and generator were updated simultaneously. We used the TensorFlow library [1]. Training took approximately 48 hours on an Nvidia GeForce GTX 1070 GPU.
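
For reference, the reported hyperparameters can be gathered in one place (this is a plain summary of the values above, not code from the paper):

```python
# Hyperparameters reported in the text, collected for reference.
train_config = {
    "lambda_image": 10.0,
    "lambda_latent": 0.5,
    "lambda_kl": 0.01,
    "latent_dim": 8,
    "image_size": (256, 256),
    "epochs": 71,
    "batch_size": 1,
    "optimizer": "adam",
    "learning_rate": 2e-4,
}
```
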
We found these parameters empirically. Note that our goal is not to produce a state-of-the-art generative model for this task per se, but rather to explore how a reasonable implementation of a powerful generative model can be leveraged for downstream character exploration by an artist.
## 6 CHARACTER EXPLORATION TOOL
Thanks to its multi-modality, our trained neural network can color each edgemap in various styles. Below we demonstrate the colorization methods incorporated in our character design tool (shown in Fig. 6). The colorization results presented in this section were generated using the same apparatus used for training.
Suggested Colorization. We can color the edgemaps by randomly sampling the latent code $\mathbf{z}$ from a Gaussian distribution and injecting it into the network using the add_to_input method explored by Zhu et al. [50], which spatially replicates and concatenates $\mathbf{z}$ into only the first layer of the generator.
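
The injection can be sketched as follows (a NumPy sketch of the spatial replicate-and-concatenate operation; BicycleGAN's actual implementation operates on the generator's first-layer tensor inside the network):

```python
import numpy as np

def add_to_input(x, z):
    """Spatially replicate latent code z and concatenate it onto input x.

    x: (N, C, H, W) edgemap batch; z: (N, d) latent codes.
    Returns an (N, C + d, H, W) tensor fed to the generator's first layer.
    """
    n, d = z.shape
    z_map = np.broadcast_to(z[:, :, None, None], (n, d, x.shape[2], x.shape[3]))
    return np.concatenate([x, z_map], axis=1)

# Suggested colorization samples z from a standard Gaussian, e.g.:
# z = np.random.default_rng().standard_normal((1, 8))
```
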
Fig. 5 shows colorization results of images in our validation set by randomly sampling the latent code. By varying the latent code the network was able to vary the character's hair color. Because the majority of anime characters in the dataset have matching hair and eye colors, the network jointly varies the hair and eye color. Darker hair colors can be generated by increasing the amount of shading as can be seen in the second row of Fig. 5.
Style-based Colorization. We can also inject the latent code $\mathbf{z}$ of other images (i.e. style images) into the network, which enables us to color the input edgemap according to the style images. We first encode the style image to its latent code $\mathbf{z}$ . We then generate the character image from the edgemap by injecting the style image's latent code $\mathbf{z}$ using the add_to_input method.
Fig. 7 shows the results of coloring input edge images from our validation sets using a set of style images. Due to the inclusion of multi-faced images within the training set, the network is able to color multiple faces in one image (as illustrated by the final row of Fig. 7), giving artists the ability to sketch multiple faces on the same canvas. These faces are generated with the same colorization scheme. Because we did not remove the backgrounds from the training images, the GAN generates the backgrounds as part of the image's style.
Implementation Details. We designed the character exploration tool (shown in Fig. 6) to allow artists to sketch on the design canvas using the brush and eraser provided. The brush is circular and its diameter can be adjusted using the brush slider from 1 to 10 pixels. The eraser is likewise circular and its diameter can be adjusted in the range of 1 to 20 pixels.
Artists can also place facial expressions at any location on the design canvas from our face selector by clicking the facial expression and then the canvas. The facial expressions were created by extracting the faces detected with an anime face detector [41]. We selected 60 of the faces detected in the validation set for use in the face selector.
The style selector provides a set of style images from our validation set. These style images were selected by embedding the 8-dimensional latent codes of images in our validation set into two dimensions using t-SNE [32]. The embeddings are visualized as a $10 \times 10$ grid by snapping the two-dimensional embeddings to the grid: each grid position contains the style image whose latent code has the smallest Euclidean distance to that position. Twelve of the 100 style images in the t-SNE grid were discarded because they produced artifacts when used as style images in colorization, leaving 88 images in our style selector. For consistency, and to prevent the varying background, resolution, and artistry of the style images from biasing artists' selections in the user study, we display preview images colored with the style images in the t-SNE grid using the style-based colorization method. Fig. 6 shows some of these preview images in the style selector. Please refer to the supplementary material for the t-SNE grid visualization.
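
The grid-snapping step can be sketched as below, assuming the 2-D t-SNE embeddings are already computed (duplicate assignments across cells are not resolved in this simplification):

```python
import numpy as np

def snap_to_grid(emb2d, grid=10):
    """For each cell of a grid x grid layout, pick the style image whose
    2-D embedding is closest (Euclidean) to the cell's position."""
    lo, hi = emb2d.min(axis=0), emb2d.max(axis=0)
    pts = (emb2d - lo) / (hi - lo)          # normalize into the unit square
    ys, xs = np.meshgrid(np.linspace(0, 1, grid),
                         np.linspace(0, 1, grid), indexing="ij")
    cells = np.stack([ys, xs], axis=-1).reshape(-1, 2)
    dists = np.linalg.norm(cells[:, None, :] - pts[None, :, :], axis=-1)
    return dists.argmin(axis=1).reshape(grid, grid)  # style-image indices
```
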
Figure 5: Sample suggested colorizations from our model. The first column shows the input edgemap. The second column shows the original image. The last four columns show the colorization results of our network with a latent code $\mathbf{z}$ randomly sampled from a Gaussian distribution for each generated sample.
The colored sketch is shown on the display canvas. If the artist has not selected a style image in the style selector, the image is colored using the suggested colorization method. Otherwise, the sketch is colored according to the style-based colorization method, using the artist's selection in the style selector as the style image. The display canvas is automatically updated every 20 seconds, and the artist can also trigger an update by pressing the run button. If a style image is not selected, pressing the run button applies a random colorization with a newly sampled latent vector, giving the artist an additional way, other than the style selector, to explore the colorization space. The sketching canvas can be cleared by pressing the clear button.
## 7 Pilot User Study
Participants. We recruited 27 artists aged 19 to 30 to participate in our IRB-approved study. Fig. 8 shows participants' average years of experience with sketching (M = 5.52, SD = 4.65), character design (M = 2.26, SD = 2.96), and using Adobe Photoshop (M = 3.26, SD = 3.04). Participants' experience is listed in more detail in the supplemental material.
Setup. Participants sketched on a Wacom Cintiq Pro 13 tablet with a 13-inch display. Our tool was loaded on the tablet, and the participants sketched directly on the screen. We used the same apparatus employed in training the GAN to generate the images of the display canvas.
Tasks. Following the completion of a training task, participants were given 6 tasks. Each design task is a combination of a design request, a time condition, and a tool condition. Participants completed each of the 6 design requests shown in Table 1, which were created in consultation with a professional character designer. The time conditions, Limited and Unlimited, determined whether participants completed the request within a 15-minute limit or under no time constraint, respectively. The Limited condition was used to compare the quality of designs under tight time constraints. Our tool conditions are: our character design tool, our character design tool supplemented with pencil/paper, and Adobe Photoshop. To allow within-subject comparisons between tool conditions under each time condition, participants completed all 3 tool conditions under each time condition. The ordering and combinations of design requests, tool conditions, and time conditions were randomized for each participant to avoid carryover effects. For example, one participant may be given the first design request from Table 1 under the Adobe Photoshop and Unlimited conditions as their first task, while another completes the third design request under the character design tool and Limited conditions first. On average, participants completed the study, including the training task, in approximately 90 minutes.
## Training Task
We allowed participants to freely explore the tool before receiving any design requests. To facilitate the learning process, we provided participants with a tutorial explaining the various functionalities of our tool along with the study's structure.
## Character Design Tool
Participants completed two design requests using our tool, without a pencil or paper. One request was completed within 15 minutes and the other without a time constraint.
Figure 6: The UI of the character exploration tool in our user study. The canvases show a thumbnail designed using our tool.
Figure 7: Style-based colorization using our model. The left column shows the input edgemap while the second column shows the original image. The six rightmost columns show the results of style-based colorization on the edgemap with various styles.
Figure 8: The participants' average years of experience with sketching, character design, and using Adobe Photoshop.
| ID | Design Request |
| --- | --- |
| D1 | Cheerful female character with long hair. She has a cool and flowing appearance. |
| D2 | She's a cold, lone wolf with a sense of humor. |
| D3 | A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge. |
| D4 | She's a determined and courageous healer, with a dark and eerie appearance. |
| D5 | She's a dedicated and knowledgeable scholar with a bright and sunny aesthetic. |
| D6 | She's a charming and fun-loving socialite with a vintage and classic look. |

Table 1: The 6 character design requests given to participants in our user study.
## Character Design Tool and Pencil/Paper
Participants were allowed to use a pencil and paper for two design requests completed using our tool. They completed one request within 15 minutes, and one without a time constraint. Some artists typically plan their designs on paper prior to using editing software. Thus, we added this tool condition to inspect whether allowing artists to use pencil/paper affects their workflow.
## Adobe Photoshop
Similar to the previous tool conditions, participants completed two requests using Adobe Photoshop, once without a time constraint and once with a 15-minute limit. To mimic participants' typical design process as closely as possible, we provided them with a pencil and paper in this tool condition as well.
User Survey. After completing the 2 tasks under each tool condition (i.e., one with a 15-minute time limit and one without), participants were asked to complete a survey evaluating the tool used. We opted for a 5-point Likert scale, akin to [25]. Participants rated the following statements from 1 (strongly disagree) to 5 (strongly agree):
- The tool was easy to use and learn.
- I find the tool overall to be useful.
Finally, we surveyed participants once more after they completed the user study in its entirety. Participants were asked to rate each of their colored designs from 1 (Poor) to 5 (Excellent). Because participants were aware of the study's time constraints, they were more likely to judge their artworks' quality fairly; we therefore opted to rely on the participants' evaluation of their own work instead of using external evaluators. We were also interested in which designs participants favored overall, so for each time condition the participants were asked to vote for the design created under the Character Design Tool, Character Design Tool and Pencil/Paper, or Photoshop condition.
Figure 9: Average time of completing the design requests.
### 7.1 Time of Completion
Fig. 9 shows the average time taken by participants to complete the character design requests under each tool condition. Mauchly's test did not show a violation of sphericity for tool condition (W(2) = 0.84, p = 0.11). With a one-way repeated-measures ANOVA, we found a significant effect of the tool used on the time of design completion (F(2, 52) = 14.53, partial $\eta^2$ = 0.36, p < 0.001). We performed Bonferroni-corrected paired t-tests for our post-hoc pairwise comparisons.
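
The post-hoc procedure can be illustrated with SciPy (the data below are synthetic, generated to resemble the reported group means, and are not the study's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-participant completion times (seconds) for the three tools.
photoshop = rng.normal(1205, 135, size=27)
tool_paper = rng.normal(802, 91, size=27)
tool_only = rng.normal(606, 61, size=27)

pairs = [(photoshop, tool_only), (photoshop, tool_paper), (tool_only, tool_paper)]
pvals = [stats.ttest_rel(a, b).pvalue for a, b in pairs]
# Bonferroni correction: scale each p-value by the number of comparisons.
corrected = [min(p * len(pairs), 1.0) for p in pvals]
```
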
Participants completed the designs faster using our tool than Photoshop. A post-hoc test showed that the average time participants took to complete the designs using Photoshop (1205 ± 135.16 seconds) was longer than using our tool with pencil/paper (801.7 ± 91.28 seconds) (p = 0.01). A post-hoc test also showed that participants completed the design requests more quickly using our tool without pencil/paper (605.93 ± 60.73 seconds) than Photoshop (p < 0.01). The post-hoc test showed no significant difference in completion time between using our character design tool with or without pencil/paper (p = 0.12). This suggests that while using our tool sped up the design process compared to Photoshop, including pencil/paper did not yield any observable significant improvement in our setting.
Participants remarked that our tool expedites the design process (P1, P3, P10, P20). P3 specifically noted that our tool "makes producing a character design much faster and easier than doing it on paper."
### 7.2 Evaluation of Experience Survey
Fig. 10 shows participants' responses to "The tool was easy to use and learn." for each tool condition. A Friedman test showed a significant difference in participants' responses to the statement ($\chi^2(2) = 13.73$, p = 0.01). We conducted post-hoc analysis using Wilcoxon signed-rank tests with Bonferroni correction. As with Adobe Photoshop, the median participant found our tool easy to use (Md = 4, agree). However, the post-hoc tests showed a significant difference between the ease of use of our tool and Adobe Photoshop: we found a significant difference when comparing participants' responses after using our tool without pencil/paper and Adobe Photoshop (W = 199, Z = −3.04, r = 0.41, p = 0.007), and likewise when comparing our tool with pencil/paper and Adobe Photoshop (W = 144, Z = −3.25, r = 0.44, p = 0.003). These results can be observed in Fig. 10 in the broader variation of responses given to our tool conditions compared to the Photoshop condition.
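
The corresponding non-parametric tests are also available in SciPy (again with synthetic ratings, invented purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic 5-point Likert ratings for the three tool conditions (n = 27).
a = rng.integers(3, 6, size=27)   # our tool
b = rng.integers(3, 6, size=27)   # our tool + pencil/paper
c = rng.integers(2, 5, size=27)   # Photoshop

chi2, p = stats.friedmanchisquare(a, b, c)
# Post-hoc Wilcoxon signed-rank tests, Bonferroni-corrected (3 comparisons).
posthoc = [min(stats.wilcoxon(x, y).pvalue * 3, 1.0)
           for x, y in [(a, b), (a, c), (b, c)]]
```
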
Figure 10: Participants answered the questions in the experience survey with a rating of 1 (strongly disagree) to 5 (strongly agree).
Our participants' familiarity with photo editing software may have contributed to the consensus on Adobe Photoshop's ease of use compared to our tool. Although some participants, like P26, appreciated the simplicity of our application, stating that "it's modestly easy to use for character designers of any experience level. It's perfect as it is.", the absence of common features found in modern editing software might have contributed to our tool's wider range of ease-of-use ratings. The post-hoc test showed no significant difference between the ease of using our tool with or without pencil/paper (W = 33, Z = 0.92, p = 0.35).
Our Friedman test also found a significant difference in participants' responses to the statement "I find the tool overall to be useful." ($\chi^2(2) = 9.86$, p = 0.007). The post-hoc test (W = 63.5, Z = −1.33, p = 0.56) showed no significant difference between the usefulness rating of Adobe Photoshop (Md = 4, agree) and our tool without pencil/paper (Md = 5, strongly agree), despite our tool's higher median rating. Conversely, the post-hoc test (W = 135, Z = −2.86, r = 0.39, p = 0.013) showed a significant difference between the rating of Adobe Photoshop and our tool with pencil/paper despite their identical median rating (Md = 4, agree). Furthermore, we found no significant difference between responses under the Character Design Tool (Md = 5, strongly agree) and Character Design Tool and Pencil/Paper (Md = 4, agree) conditions (W = 4, Z = −2.49, r = 0.34, p = 0.038). The inclusion of pencil and paper as an additional step in the participants' pipeline might have made the design process more cumbersome, leading participants to view the usefulness of our tool under the Character Design Tool and Pencil/Paper condition as lower than under the other two conditions, as shown in Fig. 10.
### 7.3 Evaluation of Designs
Fig. 11 shows how participants rated the designs produced under the various tool conditions we studied. The designs produced under the Limited constraint were rated similarly (Md = 3) under all tool conditions. A Friedman test also indicated no significant difference in the ratings of designs produced under that time constraint ($\chi^2(2) = 1.98$, p = 0.37).
Figure 11: Participants were asked to rate their designs with a rating of 1 (poor) to 5 (excellent). Limited and Unlimited refer to whether the design was created with a 15-minute time limit or with unlimited time.
Although the median rating was higher for images designed under the Character Design Tool and Photoshop conditions (Md = 4) than under the Character Design Tool and Pencil/Paper condition (Md = 3), a Friedman test found no significant difference in the ratings of designs produced without time constraints ($\chi^2(2) = 4.13$, p = 0.13).
Some participants (P4, P12) noted that the artwork they produced during the user study does not reflect their abilities. This may suggest that participants rated the designs against their previous body of work, giving all the designs an overall neutral rating and consequently resulting in no significant difference in ratings across tool conditions. Nevertheless, the designs created using our tool received the majority of participants' votes, as can be seen in Fig. 14.
Fig. 12 shows selected participants' thumbnails created using our tool, while Fig. 13 shows their designs created using Photoshop. The examples created under the Character Design Tool condition appear to be of better quality than their Photoshop counterparts, and the participant also created the design faster using our tool (386 seconds) than Photoshop (620 seconds) under the Unlimited time condition. Although the designs created using Photoshop are comparable to those created under the Character Design Tool and Pencil/Paper condition, completing the design with our tool (388 seconds) took the participant much less time than with Photoshop (652 seconds) under the Unlimited time condition. The remaining thumbnails are included in the supplemental material.
## 8 EVALUATION OF THE TOOL'S USAGE IN THE WILD
To evaluate the effectiveness of our tool in the design workflow, we conducted a user study simulating directors' and artists' workflow in the character design process. Due to the pandemic, we were unable to recruit a large number of participants and thus could not conduct a large-scale user study; the study was also conducted remotely.
Participants. We recruited 5 of the artists (ages 19 to 30) from our initial user study to participate in our second IRB-approved study. We also recruited 5 participants aged 19 to 25 to act as art directors.
Setup. To use our tool, the artists connected via TeamViewer to the same device utilized in our initial user study. For comparison, the artists were asked to use their preferred drawing tools; we placed no constraints on the software and instead encouraged artists to employ whichever tool would most facilitate their brainstorming process. Some artists used tools with auto-colorization capabilities, like Adobe Illustrator and Clip Studio Paint, while others selected tools without auto-colorization, like FireAlpaca and PaintTool SAI. The artists, directors, and researcher communicated over Zoom.
Figure 12: Participants' sketches and their corresponding colored thumbnails created using our tool under the (a) Limited time condition and the (b) Unlimited time condition. The images were labeled with the participant number and design request completed (D3: "A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D4: "She's a determined and courageous healer, with a dark and eerie appearance."; D6: "She's a charming and fun-loving socialite with a vintage and classic look."). P10 used the face selector to design the character according to D3 and D4, while P7 used the face selector to design the character according to D4.
Figure 13: Characters created by the participants from Fig. 12 using Photoshop. The two leftmost illustrations were created under the (a) Limited time condition, while the two rightmost were created under the (b) Unlimited time condition. (D1: "Cheerful female character with long hair. She has a cool and flowing appearance."; D3: "A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D5: "She's a dedicated and knowledgeable scholar with a bright and sunny aesthetic.").
Tasks. Before the study, each director submitted two character designs, and two different artists were randomly assigned to each director to realize them. The directors introduced their designs to each artist in a brainstorming session, during which the artists shared their screens to show their sketches to the directors. The artists used our tool in one brainstorming session and their selected drawing tool in the other. Each session was terminated when the director was satisfied with the rough design the artist produced, and its completion time was recorded to compare our tool with the other drawing tools.
After the brainstorming session ended, the artists submitted the turnaround sheets to their respective directors via e-mail. The artists iterated on the designs based on the directors' feedback. The study concluded when the directors approved the turnaround sheet that each of the two assigned artists submitted.
After the directors approved an artist's turnaround sheet, they were asked to complete a survey. The directors were asked to evaluate the following statements with a rating of 1 (strongly disagree) to 5 (strongly agree):
- I am satisfied with the quality of design the artist produced.
- The design matches my description.
Figure 14: The number of participants (out of 27) who selected the designs created under each tool condition.
- Communication with the artist in the brainstorming session was easy.
The artists were asked to report the number of hours they spent working on the design, and to rate the following statement from 1 (strongly disagree) to 5 (strongly agree):
- Communication with the director in the brainstorming session was easy.
Results. The time spent in the brainstorming sessions is shown in Table 2 as Session time. On average, the artists spent less time in the brainstorming session using our tool (855 ± 95.02 seconds) than using other drawing tools (1584.8 ± 165.29 seconds). Despite spending less time in the brainstorming sessions, the artists spent a similar amount of time working on the designs after using our tool (3.1 ± 0.9 hours) as after using other drawing tools (3.2 ± 0.86 hours).
Moreover, directors overall were satisfied with the quality of the turnaround sheets produced by the artists after using our tool (Md = 5, strongly agree), akin to after using other drawing tools (Md = 5, strongly agree). Directors also felt that the turnaround sheets produced after using our tool matched their descriptions (Md = 4, agree). The turnaround sheets created in our user study are included in the supplementary material.
Only one director was unsatisfied with the turnaround sheet the artist produced after using our tool in the brainstorming session. The director worked with Artist 1, giving a score of 2 (disagree) to both the quality of the design and its match to the director's description. In the brainstorming session, the artist produced the thumbnail shown in Fig. 15 which the director approved. In the e-mail correspondence after the brainstorming session, the director was indecisive about the character's specifications. These miscommunications resulted in both the artist and director rating the communication in the brainstorming session lower than in any other session, with the artist giving the director a score of 1 (strongly disagree) and the director giving the artist a score of 3 (neutral) as can be seen in Table 2.
Overall, both artists and directors believed that communication with their counterparts went smoothly. The directors rated communication with the artists using our tool (Md=5 strongly agree) akin to using other drawing tools (Md=5 strongly agree) during the brainstorming session. Artists reported slightly better communication with the directors after using our tool (Md=5 strongly agree) compared to other drawing tools (Md=4 agree).

Figure 15: The turnaround sheet created by Artist 1 after using our tool in the brainstorming session. The lower right corner shows the colored thumbnail produced by our tool and its input sketch.
## 9 DISCUSSION
Limitations. Although participants believed that our tool allows them to draft a character much faster than Photoshop, they encountered some limitations in the framework. For example, although our tool was able to color multiple faces sketched by an artist within the same canvas, our GAN tends to style all faces with the same color scheme, limiting designs to only one character per canvas. Moreover, P3 suggested to "simply use the facial expressions, as opposed to the expressions plus some of the hair" in the face selector. Due to the face detection method we utilized, the selections we provided in the face selector included portions of the character's hair. With a more sophisticated feature segmentation and classification model, the different facial features (e.g., hair, eyes) could be segmented and displayed separately in the selector. Some participants requested interface improvements such as a larger sketch canvas (P7), an undo button (P3, P4, P7, P11, P13, P14, P19, P27), and stroke sensitivity/customization (P3, P5, P9, P12, P14).
Our style selector is not fully customizable. For example, P3 wanted the ability "to have a way to set up a custom hair and skin color." Using an architecture similar to the one proposed by Karras et al. [24] to transfer the style could allow a more fine-grained customization of the colorization scheme.
While some participants were content with the variety of faces (P9, P11, P26) and styles (P14, P18, P27) our tool provides, due to limitations in our dataset, our tool provides artists with neither large variations in skin tone nor a substantial number of non-female characters in the face selector. Our tool is able to color male characters, as illustrated in Figure 7, which we also provide in the face selector. Moreover, participants like P7 created non-female characters using our tool despite the usage of female pronouns in the design description (as shown in Fig. 12). However, they identified the need for further inclusivity, especially in the face selector (P7, P21). Nevertheless, 20 out of the 27 participants used the face selector in at least one of their final designs.
Participants overall praised the face selector for expediting the design process by providing a baseline for the character (P1, P2, P3, P6, P10, P26, P27). P27 found the face selector "made the app more useful in comparison to photoshop because you could start out with a template".
<table><tr><td rowspan="2"/><td colspan="2">Artist 1</td><td colspan="2">Artist 2</td><td colspan="2">Artist 3</td><td colspan="2">Artist 4</td><td colspan="2">Artist 5</td><td colspan="2">Average</td></tr><tr><td>Ours</td><td>Other</td><td>Ours</td><td>Other</td><td>Ours</td><td>Other</td><td>Ours</td><td>Other</td><td>Ours</td><td>Other</td><td>Ours</td><td>Other</td></tr><tr><td>Session time (seconds)</td><td>1,200</td><td>1,800</td><td>900</td><td>1,860</td><td>666</td><td>1,113</td><td>797</td><td>1,893</td><td>712</td><td>1,258</td><td>855</td><td>1,584.8</td></tr><tr><td>Creation time (hours)</td><td>3</td><td>3</td><td>1</td><td>1</td><td>1.5</td><td>2</td><td>6</td><td>6</td><td>4</td><td>4</td><td>3.1</td><td>3.2</td></tr><tr><td>Quality</td><td>2</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td></tr><tr><td>Description matching</td><td>2</td><td>5</td><td>5</td><td>5</td><td>4</td><td>5</td><td>5</td><td>3</td><td>4</td><td>5</td><td>4</td><td>5</td></tr><tr><td>Communication (director)</td><td>3</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td><td>5</td></tr><tr><td>Communication (artist)</td><td>1</td><td>5</td><td>5</td><td>2</td><td>5</td><td>4</td><td>5</td><td>3</td><td>5</td><td>5</td><td>5</td><td>4</td></tr></table>
Table 2: Results of comparing our tool to other drawing tools in our second user study. Our tool's results are shown in the Ours columns while other tools' are shown in the Other columns. The Session time indicates the duration of the brainstorming session in seconds. The Creation time indicates the number of hours the artists spent working on the design after the brainstorming session. The Quality indicates how the directors rated their satisfaction with the quality of the turnaround sheet. Description matching indicates the director's rating of how well the turnaround sheet matched their description. Communication (director) indicates the rating of communication during the brainstorming session that the director reported, while Communication (artist) indicates the rating the artist reported.
We received positive feedback from user study participants regarding our tool's applicability within the design pipeline (P1, P3, P6, P9, P10). Incorporating aspects of the NASA TLX (Task Load Index) [13] could also aid in further investigating our tool's usability.
Future Work. The current focus of our tool is to expedite the exploration of character faces. By expanding our dataset to include full-body character images, we may be able to train a network with an architecture similar to Esser et al.'s [8] to generate characters in various poses, body types, and clothing. This may expand the capabilities of our tool to aid artists in creating the entire turnaround sheet in addition to exploring the character's head-shot. PaintsChainer [35] allows artists to provide color hints for the tool. We may be able to achieve a similar interaction by adding hint channels to our GAN's input layer. The GAN can be trained by randomly sampling colored strokes from the character images in the edges-to-character dataset and using them as the inputs to the hint channels.
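The stroke-sampling step could be sketched as follows. This is a hypothetical preprocessing routine, not our implemented pipeline: `make_hint_channels` and its parameters are illustrative names, and it assumes ground-truth character images as float RGB arrays.

```python
import numpy as np

def make_hint_channels(color_img, num_strokes=8, stroke_size=5, rng=None):
    """Build 4-channel hints by sampling small colored patches ("strokes")
    from the ground-truth character image; the 4th channel marks where
    hints exist so the generator can ignore unhinted regions."""
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = color_img.shape
    hints = np.zeros((h, w, 4), dtype=np.float32)  # RGB hint + validity mask
    for _ in range(num_strokes):
        y = rng.integers(0, h - stroke_size)
        x = rng.integers(0, w - stroke_size)
        patch = color_img[y:y + stroke_size, x:x + stroke_size]
        # Fill the stroke region with the patch's average color
        hints[y:y + stroke_size, x:x + stroke_size, :3] = patch.mean(axis=(0, 1))
        hints[y:y + stroke_size, x:x + stroke_size, 3] = 1.0
    return hints

# During training, the GAN input would become the edge map concatenated
# with the hint channels, e.g.:
# gan_input = np.concatenate([edge_map[..., None], make_hint_channels(img)], axis=-1)
```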
Expanding our dataset to more styles of characters may also allow us to cater to designers whose style is dissimilar to anime, as requested by P10 and P21. Fig. 16 shows a participant’s design using Photoshop compared to our tool. Although our GAN could detect and generate portions of the character's hair and skin, the result is less than optimal compared to the participant's design with Photoshop. The GAN in this case fell short of differentiating some portions of the skin (e.g., the character's neck) from shading, and of determining the borders of the character's hair precisely.
Participants believed that our tool was effective in creating images which can be used in the character exploration phase of the design process (i.e. thumbnails) but not as finished pieces (P1, P12). Expanding its capabilities to generate high-resolution images (as suggested by P12), textures, lighting, and shading may broaden its applicability from a simple brainstorming tool to a standalone design tool. We may be able to achieve a higher-resolution output by modifying our GAN's architecture and training it with high-resolution images, or by using the method proposed by Karras et al. [23] to train our GAN with low-resolution images. Finally, removing the generated background may produce more polished finished pieces. We may be able to remove the generated backgrounds in post-processing by training a semantic segmentation network [43] to label backgrounds which can be subtracted thereafter. Alternatively, we may be able to suppress the GAN from generating backgrounds by subtracting the backgrounds from our dataset, and training the GAN with the background-removed images.
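The post-processing route for background removal could be as simple as masking generated pixels with the predicted background label. A minimal sketch, assuming a binary background mask produced by a semantic segmentation network (the function name and fill value are illustrative):

```python
import numpy as np

def remove_background(generated, bg_mask, fill=255):
    """Replace pixels labeled as background with a flat fill color.

    generated: (H, W, 3) uint8 image from the GAN.
    bg_mask:   (H, W) boolean array, True where the segmentation
               network labels the pixel as background.
    """
    out = generated.copy()
    out[bg_mask] = fill  # e.g., white for a clean turnaround sheet
    return out
```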
## 10 CONCLUSION
In this paper, we trained a Generative Adversarial Network (GAN) to automatically color anime character sketches. Using the GAN, we created a tool that aids artists in the early stages of the character design process. We evaluated the efficacy of our tool in comparison to Photoshop by conducting a user study, which showed our tool's potential to speed up the character exploration process while maintaining quality. Finally, we conducted a user study that simulates the director and artist interaction in the design pipeline. We concluded that our tool facilitated character design brainstorming without sacrificing the quality of the designs.

Figure 16: A character design which strayed from our dataset's anime style. (a) The participant created the design under the Character Design Tool and Limited time conditions without using the face selector. (b) The participant created the design under the Photoshop and Unlimited time conditions. (D3:"A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D2:"She's a cold, lone wolf with a sense of humor")
## REFERENCES
[1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283, 2016.
[2] R. Arora, R. Habib Kazi, T. Grossman, G. Fitzmaurice, and K. Singh. Symbiosissketch: Combining 2d & 3d sketching for designing detailed 3d objects in situ. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 185:1-185:15. ACM, New York, NY, USA, 2018.
[3] S.-H. Bae, R. Balakrishnan, and K. Singh. Ilovesketch: as-natural-as-possible sketching system for creating 3D curve models. In Proceedings of the 21st annual ACM symposium on User interface software and technology, pp. 151-160. ACM, 2008.
[4] A. Brock, J. Donahue, and K. Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
[5] W. Chen and J. Hays. Sketchygan: Towards diverse and realistic sketch to image synthesis. CoRR, abs/1801.02753, 2018.
[6] Crypko. Crypko, 2018. Accessed March 16, 2020.
[7] H. Ekström. How can a character's personality be conveyed visually, through shape, 2013.
[8] P. Esser, E. Sutter, and B. Ommer. A variational u-net for conditional appearance and shape generation. CoRR, abs/1804.04694, 2018.
[9] A. Ghosh, R. Zhang, P. K. Dokania, O. Wang, A. A. Efros, P. H. S. Torr, and E. Shechtman. Interactive sketch & fill: Multiclass sketch-to-image translation. 2019.
[10] B. Gooch, E. Reinhard, and A. Gooch. Human facial illustrations: Creation and psychophysical evaluation. ACM Transactions on Graphics (TOG), 23(1):27-44, 2004.
[11] M. Guay, R. Ronfard, M. Gleicher, and M.-P. Cani. Space-time sketching of character animation. ACM Trans. Graph., 34(4):118:1-118:10, July 2015.
[12] X. Han, C. Gao, and Y. Yu. Deepsketch2face. ACM Transactions on Graphics, 36(4):1-12, Jul 2017.
[13] S. G. Hart and L. E. Staveland. Development of nasa-tlx (task load index): Results of empirical and theoretical research. In Advances in psychology, vol. 52, pp. 139-183. Elsevier, 1988.
[14] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2016.
[15] R. Henrikson, B. De Araujo, F. Chevalier, K. Singh, and R. Balakrishnan. Storeoboard: Sketching stereoscopic storyboards. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 4587-4598. ACM, 2016.
[16] F. Huang, J. F. Canny, and J. Nichols. Swire: Sketch-based user interface retrieval. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 104:1-104:10. ACM, New York, NY, USA, 2019.
[17] X. Huang, M. Liu, S. J. Belongie, and J. Kautz. Multimodal unsupervised image-to-image translation. CoRR, abs/1804.04732, 2018.
[18] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134, 2017.
[19] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017.
[20] J. Jacobs, J. Brandt, R. Mech, and M. Resnick. Extending manual drawing practices with artist-centric programming tools. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 590. ACM, 2018.
[21] Y. Jin, J. Zhang, M. Li, Y. Tian, H. Zhu, and Z. Fang. Towards the automatic anime characters creation with generative adversarial networks, 2017.
[22] H. Kang, S. Lee, and C. K. Chui. Coherent line drawing. In Proceedings of the 5th international symposium on Non-photorealistic animation and rendering, pp. 43-50. ACM, 2007.
[23] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
[24] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. CoRR, abs/1812.04948, 2018.
[25] R. H. Kazi, F. Chevalier, T. Grossman, and G. Fitzmaurice. Kitty: sketching dynamic and interactive illustrations. In Proceedings of the 27th annual ACM symposium on User interface software and technology, pp. 395-405. ACM, 2014.
[26] R. H. Kazi, F. Chevalier, T. Grossman, S. Zhao, and G. Fitzmaurice. Draco: bringing life to illustrations with kinetic textures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 351-360. ACM, 2014.
[27] R. H. Kazi, T. Igarashi, S. Zhao, and R. Davis. Vignette: Interactive texture design and manipulation with freeform gestures for pen-and-ink illustration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 1727-1736. ACM, New York, NY, USA, 2012.
[28] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization, 2014.
[29] K. C. Kwan and H. Fu. Mobi3dsketch: 3d sketching in mobile ar. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 176:1-176:11. ACM, New York, NY, USA, 2019.
[30] J. E. Kyprianidis and J. Döllner. Image abstraction by structure adaptive filtering. 2008.
[31] C. Lundwall. Creating guidelines for game character designs. 2017.
[32] L. v. d. Maaten and G. Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605, 2008.
[33] Y. Matsui, K. Aizawa, and Y. Jing. Sketch2manga: Sketch-based manga retrieval. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 3097-3101, Oct 2014.
[34] Nagadomi. Animeface-character-dataset, 2019. Accessed July 5, 2019.
[35] P. Paint. Paintschainer. http://paintschainer.preferred.tech, 2017. Accessed March 16, 2020.
[36] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234-241. Springer, 2015.
[37] G. Saul, M. Lau, J. Mitani, and T. Igarashi. Sketchchair: An all-in-one chair design system for end users. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '11, pp. 73-80. ACM, New York, NY, USA, 2011.
[38] Y. Shi, N. Cao, X. Ma, S. Chen, and P. Liu. Emog: Supporting the sketching of emotional expressions for storyboarding. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-12. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376520
[39] M. Shugrina, A. Kar, K. Singh, and S. Fidler. Color sails: Discrete-continuous palettes for deep color exploration. CoRR, abs/1806.02918, 2018.
[40] J. Song, K. Pang, Y. Song, T. Xiang, and T. M. Hospedales. Learning to sketch with shortcut cycle consistency. CoRR, abs/1805.00247, 2018.
[41] S. Takahashi. python-animeface, 2013. Accessed July 7, 2019.
[42] J. Tan, J. Echevarria, and Y. Gingold. Efficient palette-based decomposition and recoloring of images via rgbxy-space geometry. ACM Transactions on Graphics (TOG), 37(6):262:1-262:10, Dec 2018.
[43] Y.-H. Tsai, W.-C. Hung, S. Schulter, K. Sohn, M.-H. Yang, and M. Chandraker. Learning to adapt structured output space for semantic segmentation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2018. doi: 10.1109/cvpr.2018.00780
[44] P. Wacker, O. Nowak, S. Voelker, and J. Borchers. Arpen: Mid-air object manipulation techniques for a bimanual ar system with pen & smartphone. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, pp. 619:1-619:12. ACM, New York, NY, USA, 2019.
[45] H. Winnemöller. Xdog: advanced image stylization with extended difference-of-gaussians. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering, pp. 147-156. ACM, 2011.
[46] H. Winnemöller, S. C. Olsen, and B. Gooch. Real-time video abstraction. In ACM Transactions On Graphics (TOG), vol. 25, pp. 1221-1226. ACM, 2006.
[47] S. Xiang and H. Li. Anime style space exploration using metric learning and generative adversarial networks, 2018.
[48] S. Xiang and H. Li. Disentangling style and content in anime illustrations, 2019.
[49] P. Xu, H. Fu, Y. Zheng, K. Singh, H. Huang, and C. Tai. Model-guided 3d sketching. IEEE Transactions on Visualization and Computer Graphics, pp. 1-1, 2018.
[50] J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems, 2017.
[51] C. Zou, H. Mo, R. Du, X. Wu, C. Gao, and H. Fu. LUCSS: Language-based user-customized colourization of scene sketches. Aug. 2018.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/iEoccQSFFsM/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ EXPLORING SKETCH-BASED CHARACTER DESIGN GUIDED BY AUTOMATIC COLORIZATION
Category: Research
Figure 1: Our character exploration tool facilitates the character design process by allowing artists to explore characters using colored thumbnails synthesized from sketches. These colored thumbnails, which are traditionally rough grey-scale sketches, better visualize the character for creating the turnaround sheet.
§ ABSTRACT
Character design is a lengthy process, requiring artists to iteratively alter their characters' features and colorization schemes according to feedback from creative directors or peers. Artists experiment with multiple colorization schemes before deciding on the right color palette. This process may necessitate several tedious manual re-colorizations of the character. Any substantial changes to the character's appearance may also require manual re-colorization. Such complications motivate a computational approach for visualizing characters and drafting solutions.
We propose a character exploration tool that automatically colors a sketch based on a selected style. The tool employs a Generative Adversarial Network trained to automatically color sketches. The tool also allows a selection of faces to be used as a baseline for the character's design. We validated our tool by comparing it with using Photoshop for character exploration in our pilot study. Finally, we conducted a study to evaluate our tool's efficacy within the design pipeline.
Index Terms: Human-centered computing-Human computer interaction (HCI)—Interaction paradigms—Graphical user interfaces
§ 1 INTRODUCTION
Fig. 1 illustrates a typical character design process. At the very beginning of the process, the designer is furnished with a character description that outlines a combination of personality (e.g., courageous, melancholic) and physical traits (e.g., long hair, small frame) [7, 31]. Their first task is then to sketch out the character's distinguishing expressions and physical features into a thumbnail, which is often a rough low-resolution gray-scale sketch. From the thumbnail the designer then develops a character turnaround sheet, a reference for later drawing the character in context. The turnaround sheet is then presented to the creative director for feedback, and the entire process iterates. Because the ideation and creation of a turnaround sheet are manual processes, the artist often has to restart from scratch.
We devised a tool driven by a sketching interaction that automatically colors the character thumbnail, enabling artists and their creative directors to do more early exploration with less investment of effort. In practice, these thumbnails may also be used as references for producing the turnaround sheet in higher resolution using Photoshop. Fig. 1 shows an example of a colored character thumbnail synthesized using our tool.
Our novel tool can generate these colored character thumbnails based on artists' sketches, color selections, and character face selections. Specifically, we achieve this by training a Generative Adversarial Network (GAN) on an anime dataset. We used the GAN to generate the colored thumbnails as the artists sketch, while also allowing them to place characters' faces and select their colorization style. For participants in our user study, this selection-based automatic colorization framework was able to significantly speed up the character exploration process compared to using Photoshop, without sacrificing quality. The major contributions of our work include:
* Proposing a novel generative character exploration tool by training a GAN to automatically color sketches.
* Allowing artists using our tool to select character faces and colorization style, as well as to edit the character by directly sketching on the canvas.
* Validating the effectiveness of our tool in facilitating the character exploration process compared to Photoshop via a number of design tasks.
§ 2 RELATED WORK
§ 2.1 SKETCH-BASED INTERACTIONS
Similar to our approach, several works have explored using sketching as an interaction technique in different contexts.
We draw inspiration from several works that utilized sketching as an animation interaction. Kazi et al. [25, 26] created an interface to allow users to animate their 2D sketches, while Guay et al. [11] presented a novel technique to animate 3D characters' motion using a single stroke. Storeoboard [15] allows filmmakers to sketch stereoscopic storyboards to visualize the depth of their scenes.
Our approach aims to incorporate sketching into a 2D design process, while several works aim to examine sketch interactions in 3D design. Saul et al. [37] created a design system for chair fabrication. Xu et al. [49] introduced a model-guided 3D sketching tool which allows designers to redesign existing 3D models. Huang et al. [16] created a sketch-based user interface design system. ILoveSketch [3], a curve sketching system, allows designers to iterate directly on their 3D designs. Sketch-based interaction techniques in Augmented Reality were explored by the HCI community as well [2, 29, 44].
Several works explored using sketching to design cartoon characters specifically. Sketch2Manga [33] creates characters from sketches. Unlike our approach, which uses a generative method to output a character from a sketch, it uses image retrieval to match the query with a character from the database. Han et al. [12] introduced a deep learning method to create 3D caricatures from an input 2D sketch. Because the generated caricatures take the form of a texture-less 3D model, we opted to use a network architecture that enabled the generation of 2D images and control of their colorization style.
With our tool, we aim to improve the traditional design process for artists. Similarly, Jacobs et al. [20] introduced a tool which allows artists to create dynamic procedural brushes by varying the rotation, reflection and style of their strokes. Moreover, Vignette [27] is an interactive tool which allows artists to create custom textures, and automatically fill selected regions of their illustrations with these textures.
§ 2.2 IMAGE GENERATION
Recently, generative modeling approaches have emerged as a powerful, data-driven approach for directly mapping sketches into images. Isola et al. [18] show that conditional GANs are an effective general purpose tool for image-to-image translation problems and can be applied to mapping sketches to images. The sketch-to-image problem is also inherently ambiguous, as different colors and "styles" can be used for multiple plausible completions. Follow-up works [17, 50] introduce extensions to enable multiple predictions. We find that for our task BicycleGAN [50] is able to effectively generate colored character illustrations from edge maps due to its multimodality. One challenge is the difficulty in obtaining real sketches. Methods such as [5, 9, 40] use generative models to generate sketches themselves. We find that "synthesized" sketches based on edge maps, with some carefully selected preprocessing choices, are adequate for our application.
Using the style selector in our tool, artists can choose the colorization scheme of their characters. Similarly, Color Sails [39] is a tool which allows coloring designs from a discrete-continuous color palette defined by the user. Tan et al. [42] developed a tool to allow real-time image palette editing. Zou et al. [51] introduced a language-based scene colorization tool. Xiang et al. [47] explored the style space of anime characters by training a style encoder which effectively encodes images into the style space, such that the distance of their codes in the space corresponds to the similarity of their artists' styles.
In later work, Xiang et al. [48] developed a Generative Adversarial Disentanglement Network which learns independent style and content codes. This allows separate control over the style and content of the image, enabling faithful image generation with proper style-specific facial features (e.g., eyes, mouth, chin, hair, blushes, highlights, contours) as well as overall color saturation and contrast. The neural transfer methods used by Xiang et al. [47, 48] do not transfer facial features consistently (e.g., they may transfer the mouth from some images but not all). Therefore we allow artists to control facial features only via the sketch canvas and/or face selector instead of using the neural transfer methods of Xiang et al. Nonetheless, due to its effectiveness, we still used neural transfer to control the sketch's colorization.
§ 2.3 CHARACTER DESIGN
EmoG [38] is a character design tool introduced to facilitate storyboarding. EmoG generates facial expressions according to the user's emotion selection and sketch. Akin to our approach, users can drag and drop a facial expression onto the canvas in addition to drawing directly on the canvas. Unlike our approach, EmoG renders no colorization suggestions to the user and is focused on facilitating drafting characters' emotional expressions rather than their overall appearance.
|
| 56 |
+
|
| 57 |
+
< g r a p h i c s >
|
| 58 |
+
|
| 59 |
+
Figure 2: Overview of our approach. To begin, an artist may place a face from the face selector onto the design canvas. The artist then directly sketches on the design canvas. If a style is selected, the sketch will be automatically colored by the GAN with the selected style. Otherwise, the tool suggests a random colorization scheme.
MakeGirlsMoe [21] helps artists brainstorm by letting them select facial features to automatically generate a character illustration. However, its discrete selection-based interaction is less natural than interfaces that let the user illustrate by sketching. MakeGirlsMoe was later updated into the crypto-collectible character generator Crypko [6]. Neither framework was available to us during the user evaluation, so they were not compared to our tool. PaintsChainer [35] automatically colors sketches based on the artist's color hints, given as brush strokes on top of the sketch. It colors a completed line art that a user uploads, but it neither lets the user modify the character by placing or editing expressions and features on the canvas, nor lets the user start from a blank canvas and iteratively sketch a character. Consequently, these tools leave unmet the need for a sketch-based iterative tool that combines feature selection-based interaction with automatic colorization. Hence, we developed an interactive character design tool equipped with a face selector, a colorization style selector, and a sketching canvas to fulfill that need.
Auto-colorization features were introduced in commercial software like Adobe Illustrator and Clip Studio Paint. However, Adobe Illustrator is limited to coloring black-and-white photographs. On the other hand, Clip Studio Paint can color cartoons, but like PaintsChainer, it can only color completed line art.
§ 3 OVERVIEW
Fig. 2 shows an overview of our approach. We trained a GAN on an edges-to-character dataset obtained by extracting the edgemaps of colored anime characters. The GAN learned to produce a colored anime character illustration from a sketch. Using the GAN, we built a framework that enables character exploration by letting a user select and place facial features as well as sketch onto a canvas. As the user edits the canvas, the GAN automatically colors the illustration according to the selected style. Finally, we demonstrated the effectiveness of our tool through a user study comparing it with Adobe Photoshop.
§ 4 DATA PROCESSING
We obtained our training and validation image pairs from the anime-face character dataset [34]. We used an automated process described below to extract edge maps from the face images, creating our edges-to-character dataset.
Figure 3: To synthesize artist sketches we used an edge operator on our dataset images. (a) A character image sampled from our training set. The edgemaps were created by applying the DoG filter with (b) $\sigma = 0.3$ and (c) $\sigma = 0.5$.
Animeface Dataset. The animeface character dataset [34] contains a total of 12,213 face images. We randomly selected 10,992 of them (roughly 90%) for the training set. The remaining 10% (1,221 images) were used as a validation set to monitor the progress of training the GAN.
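The 90/10 split above can be sketched as follows. This is a minimal stand-in using only the standard library; the `seed` parameter and `split_dataset` name are ours, not the paper's (the paper does not specify how the random split was seeded):

```python
import random

def split_dataset(image_paths, train_fraction=0.9, seed=0):
    """Shuffle image paths and split them into training/validation lists.

    With the 12,213-image animeface dataset and a 0.9 fraction, this
    yields 10,992 training and 1,221 validation images, matching the
    counts reported in the paper.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    n_train = round(len(paths) * train_fraction)
    return paths[:n_train], paths[n_train:]

train, val = split_dataset([f"img_{i}.png" for i in range(12213)])
```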
Edgemaps. Because character datasets that pair sketches with their colored counterparts are costly to obtain, we applied an edge detector to the dataset images to simulate sketches. The standard Difference of Gaussians (DoG) filter has been used successfully in several works to synthesize line drawings [10, 22, 30, 46], and unlike the eXtended difference-of-Gaussians (xDoG) filter [45] it does not tend to fill dark regions. We created the edgemaps of our training and validation images by converting them to grayscale and applying the DoG filter with $\gamma = 10^9$ and $k = 4.5$ (see Fig. 3). The value of $\sigma$ was randomly selected from $\{0.3, 0.4, 0.5\}$ for each image to allow for variations in the amount of noise in the edgemaps.
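A minimal sketch of this edge-extraction step is shown below, using `scipy.ndimage.gaussian_filter` for the two blurs. The paper does not define the role of $\gamma$; here we assume it is the steepness of a tanh soft-threshold applied to the DoG response (as in the thresholded DoG family of line-drawing filters), which with $\gamma = 10^9$ makes the output essentially binary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(gray, sigma=0.3, k=4.5, gamma=1e9):
    """DoG edge map for a grayscale image with values in [0, 1].

    Subtracts a wide Gaussian blur (k * sigma) from a narrow one (sigma),
    then soft-thresholds the response with a tanh ramp of steepness gamma:
    non-negative responses map to white (1.0), negative ones to dark lines.
    """
    g1 = gaussian_filter(gray, sigma)
    g2 = gaussian_filter(gray, k * sigma)
    response = g1 - g2
    # With gamma = 1e9 the tanh ramp is effectively a hard threshold.
    return np.where(response >= 0, 1.0, 1.0 + np.tanh(gamma * response))
```

Randomizing `sigma` per image over {0.3, 0.4, 0.5}, as the paper does, then varies how much fine detail survives in each synthetic sketch.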
Image Processing. The images in the animeface dataset have a maximum size of 160 px in either dimension and various aspect ratios, while our GAN training process expects images sized exactly $256 \times 256$ px. To match these requirements, we uniformly scaled the animeface faces to fit using bilinear interpolation. For non-square aspect ratios, we filled the rest of the square canvas by repeating edge pixels as shown in Fig. 4. We selected this repetition fill rather than a solid background color to avoid the network learning to reproduce such a solid border.
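This resize-then-replicate step can be sketched as below. The paper does not name a library; we assume `scipy.ndimage.zoom` with `order=1` for the bilinear scaling and `np.pad` in `'edge'` mode for the repetition fill (which is exactly edge-pixel replication):

```python
import numpy as np
from scipy.ndimage import zoom

def fit_to_square(img, size=256):
    """Uniformly scale img (H x W or H x W x C) so its longest side equals
    `size` (bilinear), then fill the remaining rows/columns by replicating
    edge pixels, matching the repetition fill shown in Fig. 4.
    """
    h, w = img.shape[:2]
    scale = size / max(h, w)
    factors = (scale, scale) + (1,) * (img.ndim - 2)  # never scale channels
    scaled = zoom(img, factors, order=1)              # order=1 -> bilinear
    pad_h = size - scaled.shape[0]
    pad_w = size - scaled.shape[1]
    pad = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (img.ndim - 2)
    return np.pad(scaled, pad, mode="edge")           # repeat border pixels
```

Padding on the bottom/right is one concrete choice; the paper's Fig. 4 only fixes the replication direction per dimension.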
Figure 4: The arrow depicts the direction of replicating the border pixels when the (a) height or (b) width is the smallest dimension.
§ 5 CHARACTER COLORIZATION MODEL TRAINING
To generate each colored character image from its paired edgemap image in our dataset, we used BicycleGAN from Zhu et al. [50].
Architecture. We train the network on the $256 \times 256$ paired images from our edges-to-character training dataset. For our encoder, we found that using the ResNet [14] encoder explored by Zhu et al. helped decrease the number of generated images containing artifacts. We use a U-Net [36] generator and PatchGAN [19] discriminators.
In preliminary experiments, we found that changing the dimension of the latent code $\left| \mathbf{z}\right|$ produces different results. Too high a dimension leads to variation in the background style instead of the character colorization style, while too low a dimension leads to inadequate variation in the character colorization style. Ultimately, we found a latent code of size 8 to work well empirically. GANs are known to "collapse" when training lasts too long [4]. We noticed that after 71 epochs colorization resolution improves at the expense of style variation as the GAN starts over-fitting on the training data. Because our tool is intended to explore character designs and produce low-resolution sample thumbnails, we opted to halt training at 71 epochs to maintain variation in style and to avoid overfitting.
Training. We inherit many of the default parameters and practices of BicycleGAN: $\lambda_{\text{image}} = 10$, $\lambda_{\text{latent}} = 0.5$ and $\lambda_{\text{KL}} = 0.01$. We trained for 71 epochs using Adam [28] with batch size 1 and learning rate 0.0002. We updated the generator once for each discriminator update, while the encoder and generator were updated simultaneously. We used the TensorFlow library [1]. Training took approximately 48 hours on an Nvidia GeForce GTX 1070 GPU.
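The role of the three weights can be illustrated by how they combine the generator-side loss terms. This is only a schematic of BicycleGAN's composite objective with the quoted values; the individual terms (adversarial loss, image reconstruction, latent recovery, KL penalty) follow Zhu et al. [50] and are passed in as precomputed scalars here:

```python
def bicyclegan_generator_loss(gan_loss, l1_image, l1_latent, kl_div,
                              lam_image=10.0, lam_latent=0.5, lam_kl=0.01):
    """Weighted sum of BicycleGAN generator-side loss terms.

    gan_loss  -- adversarial loss for the generator
    l1_image  -- |G(edge, z) - ground truth| reconstruction term
    l1_latent -- |E(G(edge, z)) - z| latent recovery term
    kl_div    -- KL divergence of the encoder posterior vs. N(0, I)
    """
    return (gan_loss
            + lam_image * l1_image
            + lam_latent * l1_latent
            + lam_kl * kl_div)
```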
We found these parameters empirically. Note that our goal is not to produce the state-of-the-art generative model for this task per se, but rather to explore how a reasonable implementation of a powerful generative model can be leveraged for downstream character exploration by an artist.
§ 6 CHARACTER EXPLORATION TOOL
Due to its multi-modality, our trained neural network is able to color each edgemap in various styles. Below we describe the methods our character design tool (shown in Fig. 6) incorporates to color character sketches. The colorization results presented in this section were generated using the same apparatus used for training.
Suggested Colorization. We can color the edgemaps by randomly sampling the latent code $\mathbf{z}$ from a Gaussian distribution and injecting it into the network using the add_to_input method explored by Zhu et al. [50], which spatially replicates and concatenates $\mathbf{z}$ into only the first layer of the generator.
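The spatial replicate-and-concatenate step of add_to_input can be sketched with plain arrays. This is an illustration of the tensor manipulation only, not the trained network; shapes follow the paper ($|\mathbf{z}| = 8$, $256 \times 256$ inputs):

```python
import numpy as np

def add_to_input(x, z):
    """Spatially replicate the latent code z and concatenate it to the
    channels of the generator's first-layer input.

    x -- input feature map of shape (C, H, W), e.g. a 1-channel edgemap
    z -- latent code of shape (|z|,), here |z| = 8
    """
    c, h, w = x.shape
    z_tiled = np.broadcast_to(z[:, None, None], (z.shape[0], h, w))
    return np.concatenate([x, z_tiled], axis=0)

rng = np.random.default_rng(0)
edge = rng.random((1, 256, 256))   # single-channel edgemap
z = rng.standard_normal(8)         # z ~ N(0, I): the suggested-colorization case
inp = add_to_input(edge, z)        # shape (9, 256, 256)
```

Sampling a fresh `z` for each run is what produces a new random colorization suggestion.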
Fig. 5 shows colorization results of images in our validation set by randomly sampling the latent code. By varying the latent code the network was able to vary the character's hair color. Because the majority of anime characters in the dataset have matching hair and eye colors, the network jointly varies the hair and eye color. Darker hair colors can be generated by increasing the amount of shading as can be seen in the second row of Fig. 5.
Style-based Colorization. We can also inject the latent code $\mathbf{z}$ of other images (i.e. style images) into the network, which enables us to color the input edgemap according to the style images. We first encode the style image to its latent code $\mathbf{z}$ . We then generate the character image from the edgemap by injecting the style image's latent code $\mathbf{z}$ using the add_to_input method.
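The style-based pipeline is just "encode, then generate". The sketch below wires the two stages together; `generator` and `encoder` are hypothetical callables standing in for the trained BicycleGAN networks, and the toy implementations exist only so the pipeline runs end to end:

```python
import numpy as np

def colorize_with_style(generator, encoder, edgemap, style_image):
    """Encode the style image to its latent code z, then condition the
    generator on the edgemap and that z (via add_to_input)."""
    z = encoder(style_image)
    return generator(edgemap, z)

# Toy stand-ins: the "encoder" averages the style image into an 8-d code,
# and the "generator" tints the edgemap with the first three code entries.
toy_encoder = lambda img: np.full(8, img.mean())
toy_generator = lambda edge, z: edge[..., None] * z[:3]

edge = np.ones((4, 4))
style = np.full((4, 4), 0.5)
out = colorize_with_style(toy_generator, toy_encoder, edge, style)
```

Swapping only `style_image` while keeping the edgemap fixed reproduces the rows of Fig. 7: one sketch, many colorization styles.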
Fig. 7 shows the results of coloring input edge images from our validation set using a set of style images. Due to the inclusion of multi-faced images in the training set, the network is able to color multiple faces in one image (as illustrated by the final row of Fig. 7), giving artists the ability to sketch multiple faces on the same canvas. These faces are generated with the same colorization scheme. Because we did not remove the backgrounds from the training images, the GAN generates the backgrounds as part of the image's style.
Implementation Details. We designed the character exploration tool (shown in Fig. 6) to allow artists to sketch on the design canvas using the brush and eraser provided. The brush is circular and its diameter can be adjusted using the brush slider from 1 to 10 pixels. The eraser is likewise circular and its diameter can be adjusted in the range of 1 to 20 pixels.
Artists are also able to place facial expressions into any location on the design canvas from our face selector by clicking the facial expression and the canvas respectively. The facial expressions were created by extracting the faces detected by applying an anime face detector [41]. We selected 60 of the faces detected in the validation set to be used in the face selector.
The style selector provides a set of style images from our validation set. These style images were selected by embedding the 8-dimensional latent codes of the validation images into two dimensions using t-SNE [32]. The embeddings are visualized as a $10 \times 10$ grid by snapping the two-dimensional embeddings to the grid: every grid position is assigned the style image whose embedding has the smallest Euclidean distance to that position. Twelve of the 100 style images in the t-SNE grid were discarded because they produced artifacts when used as style images in colorization, leaving 88 images in our style selector. For consistency, and to avoid the varying background, resolution and artistry of the style images biasing artists' selections in the user study, the grid displays preview images colored with the corresponding style images using the style-based colorization method. Fig. 6 shows some of these preview images in the style selector. Please refer to the supplementary material for the t-SNE grid visualization.
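The grid-snapping step can be sketched as below. The 2-D t-SNE coordinates are assumed precomputed (e.g. with `sklearn.manifold.TSNE` on the 8-d latent codes); normalizing the embeddings into the unit square and using cell centers as grid positions are our concrete choices, since the paper does not specify them:

```python
import numpy as np

def snap_to_grid(embeddings_2d, grid_size=10):
    """For each cell of a grid_size x grid_size lattice, return the index
    of the style image whose 2-D embedding is nearest (Euclidean) to the
    cell center."""
    emb = np.asarray(embeddings_2d, dtype=float)
    # Normalize embeddings into [0, 1]^2 so they share the grid's frame.
    emb = (emb - emb.min(axis=0)) / (np.ptp(emb, axis=0) + 1e-12)
    # Cell centers of the lattice in [0, 1]^2.
    ticks = (np.arange(grid_size) + 0.5) / grid_size
    gy, gx = np.meshgrid(ticks, ticks, indexing="ij")
    centers = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Nearest embedded style image for every cell.
    d = np.linalg.norm(centers[:, None, :] - emb[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(grid_size, grid_size)
```

Note that nearby cells can select the same image under this rule; deduplicating or enforcing a one-to-one assignment would need an extra matching step.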
Figure 5: Sample suggested colorizations from our model. The first column shows the input edgemap. The second column shows the original image. The last four columns show the colorization results of our network with a latent code $\mathbf{z}$ randomly sampled from a Gaussian distribution for each generated sample.
The colored sketch is shown on the display canvas. If the artist has not selected a style image in the style selector, the image is colored using the suggested colorization method. Otherwise, the sketch is colored according to the style-based colorization method, using the artist's selection in the style selector as the style image. The display canvas is automatically updated every 20 seconds. The artist can also trigger an update by pressing the run button. If no style image is selected, pressing the run button applies random colorization with a newly sampled latent vector, giving the artist an additional way, beyond the style selector, to explore the colorization space. The sketching canvas can be cleared by pressing the clear button.
§ 7 PILOT USER STUDY
Participants. We recruited 27 artists with ages ranging from 19 to 30 to participate in our IRB-approved study. Fig. 8 shows participants' average years of experience with sketching (M = 5.52, SD = 4.65), character design (M = 2.26, SD = 2.96) and using Adobe Photoshop (M = 3.26, SD = 3.04). Participants' experience is listed in more detail in the supplemental material.
Setup. Participants sketched on a Wacom Cintiq Pro 13 tablet with a 13-inch display. Our tool was loaded on the tablet, and the participants sketched directly on the screen. We used the same apparatus employed in training the GAN to generate the images of the display canvas.
Tasks. Following the completion of a training task, participants were given 6 design tasks, each a combination of a design request, a time condition, and a tool condition. Participants completed each of the 6 design requests shown in Table 1, which were created in consultation with a professional character designer. The time conditions Limited and Unlimited determined whether participants completed the request within a 15-minute limit or under no time constraint, respectively. The Limited time condition was used to compare the quality of designs under tight time constraints. Our tool conditions were: our character design tool, our character design tool supplemented with pencil/paper, and Adobe Photoshop. To allow for within-subject comparisons between tool conditions under each time condition, participants completed all 3 tool conditions under each time condition. The ordering and combinations of design requests, tool conditions and time conditions were randomized for each participant to avoid carryover effects. For example, one participant may be given the first design request from Table 1 under the Adobe Photoshop and Unlimited conditions as their first task, while another participant first completes the third design request under the character design tool and Limited conditions. On average, participants completed the study, including the training task, in approximately 90 minutes.
§ TRAINING TASK
We allowed participants to freely explore the tool before receiving any design requests. To facilitate the learning process, we provided participants with a tutorial explaining the various functionalities of our tool along with the study's structure.
§ CHARACTER DESIGN TOOL
Participants completed two design requests using our tool, without a pencil or paper. One request was completed within 15 minutes, while the other was completed without a time constraint.
Figure 6: The UI of the character exploration tool in our user study. The canvases show a thumbnail designed using our tool.
Figure 7: Style-based colorization using our model. The left column shows the input edgemap while the second column shows the original image. The six rightmost columns show the results of style-based colorization on the edgemap with various styles.
Figure 8: The participants' average years of experience with sketching, character design, and using Adobe Photoshop.
| | Design Request |
|---|---|
| D1 | Cheerful female character with long hair. She has a cool and flowing appearance. |
| D2 | She's a cold, lone wolf with a sense of humor. |
| D3 | A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge. |
| D4 | She's a determined and courageous healer, with a dark and eerie appearance. |
| D5 | She's a dedicated and knowledgeable scholar with a bright and sunny aesthetic. |
| D6 | She's a charming and fun-loving socialite with a vintage and classic look. |

Table 1: The 6 character design requests given to participants in our user study.
§ CHARACTER DESIGN TOOL AND PENCIL/PAPER
Participants were allowed to use a pencil and paper for two design requests completed using our tool. They completed one request within 15 minutes, and one without a time constraint. Some artists typically plan their designs on paper prior to using editing software. Thus, we added this tool condition to inspect whether allowing artists to use pencil/paper affects their workflow.
§ ADOBE PHOTOSHOP
Similar to the previous tool conditions, participants completed two requests using Adobe Photoshop, once without a time constraint and once with a 15-minute limit. To mimic participants' typical design process as closely as possible, we provided them with a pencil and paper in this tool condition as well.
User Survey. After completing the 2 tasks under each tool condition (i.e. once with a 15-minute time limit and once without), participants were asked to complete a survey evaluating the tool used. We opted for a 5-point Likert scale to evaluate the tools, akin to [25]. Participants were asked to rate the following statements from 1 (strongly disagree) to 5 (strongly agree):
* The tool was easy to use and learn.
* I find the tool overall to be useful.
Finally, we surveyed participants once more after they completed the user study in its entirety. Participants were asked to rate each of their colored designs from 1 (Poor) to 5 (Excellent). Because participants were aware of the study's time constraints, they were likely to judge their artworks' quality fairly; we therefore relied on the participants' evaluation of their own work instead of external evaluators. We were also interested in which designs participants favored overall, so for each time condition the participants were asked to vote for the design created under either the Character Design Tool, the Character Design Tool and Pencil/Paper, or the Photoshop condition.
Figure 9: Average time of completing the design requests.
§ 7.1 TIME OF COMPLETION
Fig. 9 shows the average time taken by participants to complete the character design requests under each tool condition. Mauchly's test did not show a violation of sphericity against tool condition (W(2) = 0.84, p = 0.11). With a one-way repeated-measures ANOVA, we found a significant effect of the tool used on the time of design completion (F(2, 52) = 14.53, partial $\eta^2$ = 0.36, p < 0.001). We performed Bonferroni-corrected paired t-tests for our post-hoc pairwise comparisons.
Participants completed the designs faster using our tool than using Photoshop. A post-hoc test showed that the average time participants took to complete the designs using Photoshop (1205 ± 135.16 seconds) was longer than using our tool with pencil/paper (801.7 ± 91.28 seconds) (p = 0.01). A post-hoc test also showed that participants completed the design requests in a shorter amount of time using our tool without pencil/paper (605.93 ± 60.73 seconds) than using Photoshop (p < 0.01). The post-hoc test showed no significant difference in completion time between using our character design tool with or without pencil/paper (p = 0.12). This suggests that while using our tool sped up the design process compared to Photoshop, including the pencil/paper did not yield any observable significant improvement in our setting.
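The Bonferroni-corrected pairwise comparisons above can be sketched with `scipy.stats.ttest_rel`. The function name and the synthetic data below are ours; the sketch only illustrates the procedure (paired t-test per condition pair, p-values multiplied by the number of tests):

```python
import numpy as np
from scipy.stats import ttest_rel

def pairwise_paired_ttests(times_by_tool):
    """Bonferroni-corrected paired t-tests over all tool-condition pairs.

    times_by_tool: dict mapping condition name -> array of per-participant
    completion times (seconds). Returns {(a, b): (t, corrected_p)}.
    """
    names = sorted(times_by_tool)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    m = len(pairs)  # Bonferroni: scale each p-value by the number of tests
    results = {}
    for a, b in pairs:
        t, p = ttest_rel(times_by_tool[a], times_by_tool[b])
        results[(a, b)] = (t, min(p * m, 1.0))
    return results

# Synthetic data shaped like the study (27 participants, 3 conditions).
rng = np.random.default_rng(1)
base = rng.normal(600, 50, 27)
times = {"tool": base,
         "tool+paper": base + rng.normal(200, 30, 27),
         "photoshop": base + rng.normal(600, 40, 27)}
res = pairwise_paired_ttests(times)
```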
Participants remarked that our tool expedites the design process (P1, P3, P10, P20). P3 specifically noted that our tool "makes producing a character design much faster and easier than doing it on paper."
§ 7.2 EVALUATION OF EXPERIENCE SURVEY
Fig. 10 shows participants' responses to "The tool was easy to use and learn." for each tool condition. A Friedman test showed a significant difference in participants' responses to the statement ($\chi^2(2) = 13.73$, $p = 0.01$). We conducted post-hoc analysis using Wilcoxon signed-rank tests with Bonferroni correction. As with Adobe Photoshop, the median participant found our tool easy to use (Md = 4, agree). However, the post-hoc tests showed a significant difference between the ease of use of our tool and Adobe Photoshop: participants' responses after using our tool without pencil/paper differed significantly from their responses after using Adobe Photoshop ($W = 199$, $Z = -3.04$, $r = 0.41$, $p = 0.007$), and likewise for our tool with pencil/paper versus Adobe Photoshop ($W = 144$, $Z = -3.25$, $r = 0.44$, $p = 0.003$). These results may be observed in Fig. 10 in the broader variation of responses given to our tool conditions compared to the Photoshop condition.
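This Friedman-then-Wilcoxon procedure can be sketched with `scipy.stats`. The helper name and synthetic ratings are ours; the sketch illustrates the analysis pipeline (omnibus Friedman test, then Bonferroni-corrected Wilcoxon signed-rank post-hocs):

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

def likert_analysis(ratings):
    """Friedman test across conditions, then Bonferroni-corrected Wilcoxon
    signed-rank post-hoc comparisons.

    ratings: dict mapping condition name -> array of per-participant
    Likert responses (1-5). Returns ((chi2, p), {(a, b): (W, corrected_p)}).
    """
    names = sorted(ratings)
    chi2, p = friedmanchisquare(*(ratings[n] for n in names))
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    posthoc = {}
    for a, b in pairs:
        try:
            w, pw = wilcoxon(ratings[a], ratings[b])
        except ValueError:  # all paired differences are zero
            w, pw = 0.0, 1.0
        posthoc[(a, b)] = (w, min(pw * len(pairs), 1.0))
    return (chi2, p), posthoc

# Synthetic 5-point responses for 27 participants in 3 conditions.
rng = np.random.default_rng(2)
resp = {"tool": rng.integers(3, 6, 27),
        "tool+paper": rng.integers(3, 6, 27),
        "photoshop": rng.integers(4, 6, 27)}
(chi2, p), post = likert_analysis(resp)
```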
Figure 10: Participants answered the questions in the experience survey with a rating of 1 (strongly disagree) to 5 (strongly agree).
Our participants' familiarity with photo editing software may have contributed to the consensus on Adobe Photoshop's ease of use compared to our tool. Although some participants, like P26, appreciated the simplicity of our application, stating that "it's modestly easy to use for character designers of any experience level. It's perfect as it is.", the absence of the exhaustive feature set found in modern editing software might have contributed to the wider range of ease-of-use ratings for our tool. The post-hoc test showed no significant difference between the ease of using our tool with or without pencil/paper ($W = 33$, $Z = 0.92$, $p = 0.35$).
Our Friedman test also found a significant difference in participants' responses to the statement "I find the tool overall to be useful." ($\chi^2(2) = 9.86$, $p = 0.007$). The post-hoc test ($W = 63.5$, $Z = -1.33$, $p = 0.56$) showed no significant difference between the usefulness rating of Adobe Photoshop (Md = 4, agree) and our tool without pencil/paper (Md = 5, strongly agree), despite our tool having the higher median rating. Conversely, the post-hoc test ($W = 135$, $Z = -2.86$, $r = 0.39$, $p = 0.013$) showed a significant difference between the rating of Adobe Photoshop and our tool with pencil/paper despite their identical median rating (Md = 4, agree). Furthermore, we found no significant difference between responses under the Character Design Tool (Md = 5, strongly agree) and Character Design Tool and Pencil/Paper (Md = 4, agree) conditions ($W = 4$, $Z = -2.49$, $r = 0.34$, $p = 0.038$). The inclusion of pencil and paper as an additional step in the participants' pipeline might have made the design process more cumbersome, resulting in the tendency to view the usefulness of our tool under the Character Design Tool and Pencil/Paper condition as lower than under the other two conditions, as shown in Fig. 10.
§ 7.3 EVALUATION OF DESIGNS
Fig. 11 shows how participants rated the designs produced under the various tool conditions we studied. The designs produced under the Limited constraint were rated similarly (Md = 3) under all the tool conditions. A Friedman test also indicated no significant difference in the rating of designs produced under that time constraint ($\chi^2(2) = 1.98$, $p = 0.37$).
Figure 11: Participants were asked to rate their designs with a rating of 1 (poor) to 5 (excellent). Limited and Unlimited refer to whether the design was created with a 15-minute time limit or with unlimited time.
Although the median rating was higher for images designed under the Character Design Tool and Photoshop conditions (Md = 4) than under the Character Design Tool and Pencil/Paper condition (Md = 3), a Friedman test found no significant difference in the rating of designs produced without any time constraint ($\chi^2(2) = 4.13$, $p = 0.13$).
Some participants (P4, P12) noted that the artwork they produced during the user study does not reflect their abilities. This may suggest that participants rated the designs relative to their previous body of work, giving all the designs an overall neutral rating and consequently yielding no significant difference in the ratings under different tool conditions. Nevertheless, the designs created using our tool received the majority of participants' votes, as can be seen in Fig. 14.
Fig. 12 shows selected participants' thumbnails created using our tool, while Fig. 13 shows their designs created using Photoshop. The examples created under the Character Design Tool condition seem to be of better quality than their Photoshop counterparts, and the participant also created the design faster using our tool (386 seconds) than using Photoshop (620 seconds) under the Unlimited time condition. Although the designs created using Photoshop are comparable to those created under the Character Design Tool and Pencil/Paper condition, completing the design with our tool (388 seconds) took the participant much less time than with Photoshop (652 seconds) under the Unlimited time condition. The remaining thumbnails are included in the supplemental material.
§ 8 EVALUATION OF THE TOOL'S USAGE IN THE WILD
To evaluate the effectiveness of our tool in the design workflow, we conducted a user study that simulates directors' and artists' workflow in the character design process. Due to the pandemic, we were unable to recruit enough participants to conduct a large-scale user study, and the study was conducted remotely.
Participants. We recruited 5 of the artists (ages 19 to 30) from our initial user study to participate in our second IRB-approved study. We also recruited 5 participants aged 19-25 to act as art directors.
Setup. To use our tool, the artists connected via TeamViewer to the same device utilized in our initial user study. For comparison, the artists were asked to use their preferred drawing tools; we placed no constraints on the software they used and instead encouraged them to employ whichever tool would most facilitate their brainstorming process. Some artists used tools with auto-colorization capabilities, like Adobe Illustrator and Clip Studio Paint, while others selected tools without auto-colorization, like FireAlpaca and PaintTool SAI. The artists, directors, and researcher communicated over Zoom.
Figure 12: Participant's sketches and their corresponding colored thumbnails created using our tool under the (a) Limited time condition and the (b) Unlimited time condition. The images were labeled with the participant number and design request completed (D3: "A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D4: "She's a determined and courageous healer, with a dark and eerie appearance."; D6:"She's a charming and fun-loving socialite with a vintage and classic look."). P10 used the face selector to design the character according to D3 and D4, while P7 used the face selector to design the character according to D4.
Figure 13: Characters created by the participants from Fig. 12 using Photoshop. The two leftmost illustrations were created under the (a) Limited time condition, while the two rightmost were created under the (b) Unlimited time condition. (D1: "Cheerful female character with long hair. She has a cool and flowing appearance."; D3: "A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D5: "She's a dedicated and knowledgeable scholar with a bright and sunny aesthetic.").
Tasks. Before the study, each director submitted two character designs. Two different artists were randomly assigned to each director to complete their designs. The directors introduced their designs to each artist in a brainstorming session, during which the artists shared their screens to show their sketches to the directors. The artists used our tool in one brainstorming session and their selected drawing tool in the other. A session ended when the director was satisfied with the rough design the artist produced. The time to complete these sessions was recorded to compare our tool with the other drawing tools.
After the brainstorming session ended, the artists submitted the turnaround sheets to their respective directors via e-mail. The artists iterated on the designs based on the directors' feedback. The study concluded when the directors approved the turnaround sheet that each of the two assigned artists submitted.
After the directors approved an artist's turnaround sheet, they were asked to complete a survey. The directors were asked to evaluate the following statements with a rating of 1 (strongly disagree) to 5 (strongly agree):
* I am satisfied with the quality of design the artist produced.
* The design matches my description.
Figure 14: The number of participants (out of 27) who selected the designs created under each tool condition.
* Communication with the artist in the brainstorming session was easy.
The artists were asked to report the number of hours they spent working on the design, as well as to evaluate the following statements with a rating of 1 (strongly disagree) to 5 (strongly agree):
* Communication with the director in the brainstorming session was easy.
Results. The time spent in the brainstorming sessions is shown in Table 2 as Session time. On average, the artists spent less time in the brainstorming session using our tool ($855 \pm 95.02$ seconds) than using other drawing tools ($1{,}584.8 \pm 165.29$ seconds). Despite spending less time in the brainstorming sessions, the artists spent a similar amount of time working on the designs afterwards, whether they had used our tool ($3.1 \pm 0.9$ hours) or other drawing tools ($3.2 \pm 0.86$ hours) in the brainstorming session.
Moreover, directors overall were satisfied with the quality of the turnaround sheets produced by the artists after using our tool (Md=5, strongly agree), similar to after using other drawing tools (Md=5, strongly agree). Directors overall also felt that the turnaround sheets produced after using our tool matched their descriptions (Md=4, agree). The turnaround sheets created in our user study are included in the supplementary material.
Only one director was unsatisfied with the turnaround sheet the artist produced after using our tool in the brainstorming session. The director worked with Artist 1, giving a score of 2 (disagree) to both the quality of the design and its match to the director's description. In the brainstorming session, the artist produced the thumbnail shown in Fig. 15 which the director approved. In the e-mail correspondence after the brainstorming session, the director was indecisive about the character's specifications. These miscommunications resulted in both the artist and director rating the communication in the brainstorming session lower than in any other session, with the artist giving the director a score of 1 (strongly disagree) and the director giving the artist a score of 3 (neutral) as can be seen in Table 2.
Overall, both artists and directors believed that communication with their counterparts went smoothly. The directors rated communication with the artists using our tool (Md=5, strongly agree) similarly to using other drawing tools (Md=5, strongly agree) during the brainstorming session. Artists reported slightly better communication with the directors after using our tool (Md=5, strongly agree) compared to other drawing tools (Md=4, agree).
Figure 15: The turnaround sheet created by Artist 1 after using our tool in the brainstorming session. The lower right corner shows the colored thumbnail produced by our tool and its input sketch.
§ 9 DISCUSSION
Limitations. Although participants believed that our tool allows them to draft a character much faster than Photoshop, they encountered some limitations to the framework. For example, although our tool was able to color multiple faces sketched by an artist within the same canvas, our GAN tends to style all faces with the same color scheme, limiting designs to only one character per canvas. Moreover, P3 suggested to "simply use the facial expressions, as opposed to the expressions plus some of the hair" in the face selector. Due to the face detection method we utilized, the selections we provided in the face selector included some portions of the character's hair. With a more sophisticated feature segmentation and classification model, the different facial features (e.g., hair, eyes) could be segmented and displayed separately in the selector. Some participants expressed the need for improvements to the interface, such as a larger sketch canvas (P7), an undo button (P3, P4, P7, P11, P13, P14, P19, P27), and stroke sensitivity/customization (P3, P5, P9, P12, P14).
Our style selector is not fully customizable. For example, P3 wanted the ability "to have a way to set up a custom hair and skin color." Using an architecture similar to the one proposed by Karras et al. [24] to transfer the style could allow a more finely-grained customization of the colorization scheme.
While some participants were content with the variety of faces (P9, P11, P26) and styles (P14, P18, P27) our tool provides, due to limitations in our dataset, our tool does not provide artists with large variations in skin tone, nor does it provide a substantial number of non-female characters in the face selector. Our tool is able to color male characters, as illustrated in Figure 7, which we also provide in the face selector. Moreover, participants like P7, despite the usage of female pronouns in the design description, created non-female characters using our tool (as shown in Fig. 12). However, they identified the need for further inclusivity, especially in the face selector (P7, P21). Nevertheless, 20 out of the 27 participants used the face selector in at least one of their final designs.
Participants overall praised the face selector for expediting the design process by providing a baseline for the character (P1, P2, P3, P6, P10, P26, P27). P27 found the face selector "made the app more useful in comparison to photoshop because you could start out with a template".
|  | Artist 1 Ours | Artist 1 Other | Artist 2 Ours | Artist 2 Other | Artist 3 Ours | Artist 3 Other | Artist 4 Ours | Artist 4 Other | Artist 5 Ours | Artist 5 Other | Average Ours | Average Other |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Session time (seconds) | 1,200 | 1,800 | 900 | 1,860 | 666 | 1,113 | 797 | 1,893 | 712 | 1,258 | 855 | 1,584.8 |
| Creation time (hours) | 3 | 3 | 1 | 1 | 1.5 | 2 | 6 | 6 | 4 | 4 | 3.1 | 3.2 |
| Quality | 2 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Description matching | 2 | 5 | 5 | 5 | 4 | 5 | 5 | 3 | 4 | 5 | 4 | 5 |
| Communication (director) | 3 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Communication (artist) | 1 | 5 | 5 | 2 | 5 | 4 | 5 | 3 | 5 | 5 | 5 | 4 |
Table 2: Results of comparing our tool to other drawing tools in our second user study. Our tool's results are shown in the Ours columns while other tools' are shown in the Other columns. The Session time indicates the duration of the brainstorming session in seconds. The Creation time indicates the number of hours the artists spent working on the design after the brainstorming session. The Quality indicates how the directors rated their satisfaction with the quality of the turnaround sheet. Description matching indicates the director's rating of how well the turnaround sheet matched their description. Communication (director) indicates the rating of communication during the brainstorming session that the director reported, while Communication (artist) indicates the rating the artist reported.
We received positive feedback from user study participants regarding our tool's applicability within the design pipeline (P1, P3, P6, P9, P10). Moreover, incorporating aspects of the NASA TLX (Task Load Index) [13] could also aid with further investigating our tool's usability.
Future Work. The current focus of our tool was to expedite the exploration of character faces. By expanding our dataset to include full-body character images we may be able to train a network with an architecture similar to Esser et al.'s [8] to generate characters in various poses, body types and clothing. This may expand the capabilities of our tool to aid artists in creating the entire turnaround sheet in addition to exploring the character's head-shot. PaintsChainer [35] allows artists to provide color hints for the tool. We may be able to achieve a similar interaction by including hint channels into our GAN's input layer. The GAN can be trained by randomly sampling colored strokes from the character images in the edges-to-character dataset and using them as the inputs to the hint channels.
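The stroke-sampling scheme described above could look roughly like the following minimal NumPy sketch. The function name and all parameters here are our own illustration, not part of the implementation: it samples small patches from a ground-truth colored character image and writes their mean colors into a sparse RGB-plus-mask hint map.

```python
import numpy as np

def sample_hint_channels(color_image, num_strokes=8, stroke_size=5, rng=None):
    """Build a sparse 4-channel hint map (RGB + presence mask) by sampling
    small colored patches from the ground-truth character image.

    color_image: (H, W, 3) float array in [0, 1].
    Returns an (H, W, 4) array: RGB hint values where a stroke was sampled,
    plus a binary mask channel marking where hints are present.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = color_image.shape
    hints = np.zeros((h, w, 4), dtype=color_image.dtype)
    for _ in range(num_strokes):
        y = int(rng.integers(0, h - stroke_size))
        x = int(rng.integers(0, w - stroke_size))
        patch = color_image[y:y + stroke_size, x:x + stroke_size]
        # Use the patch's mean color as a flat "stroke" hint.
        hints[y:y + stroke_size, x:x + stroke_size, :3] = patch.mean(axis=(0, 1))
        hints[y:y + stroke_size, x:x + stroke_size, 3] = 1.0
    return hints
```

During training, these four channels would be concatenated with the input sketch before being fed to the generator, so that at inference time the artist's own colored strokes can occupy the same channels.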
Expanding our dataset to more styles of characters may also allow us to cater to designers who have a style dissimilar to anime, as requested by P10 and P21. Fig. 16 shows a participant’s design using Photoshop compared to our tool. Although our GAN could detect and generate portions of the character's hair and skin, the result is less than optimal when compared to the participant's design with Photoshop. The GAN in this case fell short of differentiating some portions of the skin (e.g., character's neck) from shading or determining the borders of the character's hair precisely.
Participants believed that our tool was effective in creating images which can be used in the character exploration phase of the design process (i.e. thumbnails) but not as finished pieces (P1, P12). Expanding its capabilities to generate high-resolution images (as suggested by P12), textures, lighting, and shading may broaden its applicability from a simple brainstorming tool to a standalone design tool. We may be able to achieve a higher-resolution output by modifying our GAN's architecture and training it with high-resolution images, or by using the method proposed by Karras et al. [23] to train our GAN with low-resolution images. Finally, removing the generated background may produce more polished finished pieces. We may be able to remove the generated backgrounds in post-processing by training a semantic segmentation network [43] to label backgrounds which can be subtracted thereafter. Alternatively, we may be able to suppress the GAN from generating backgrounds by subtracting the backgrounds from our dataset, and training the GAN with the background-removed images.
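As a rough sketch of the mask-subtraction post-processing route, assuming the segmentation network yields a per-pixel background mask (the helper below is a hypothetical illustration, not part of our system):

```python
import numpy as np

def remove_background(image, background_mask):
    """Composite a generated image onto transparency using a predicted
    background mask (assumed output of a segmentation network).

    image: (H, W, 3) uint8 array from the GAN.
    background_mask: (H, W) boolean array, True where background.
    Returns an (H, W, 4) RGBA array with the background fully transparent.
    """
    h, w, _ = image.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = image
    # Alpha channel: 0 = transparent (background), 255 = opaque (character).
    rgba[..., 3] = np.where(background_mask, 0, 255)
    return rgba
```

The same mask could also be applied to the training images themselves, which is the alternative route of suppressing backgrounds at training time.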
§ 10 CONCLUSION
In this paper, we trained a Generative Adversarial Network (GAN) to automatically color anime character sketches. Using the GAN, we created a tool that aids artists in the early stages of the character design process. We evaluated the efficacy of our tool in comparison to Photoshop by conducting a user study, which showed our tool's potential in speeding up the character exploration process while maintaining quality. Finally, we conducted a user study that simulates the director and artist interaction in the design pipeline. We concluded that our tool facilitated character design brainstorming without sacrificing the quality of the designs.
Figure 16: A character design which strayed from our dataset's anime style. (a) The participant created the design under the Character Design Tool and Limited time conditions without using the face selector. (b) The participant created the design under the Photoshop and Unlimited time conditions. (D3:"A determined and patient girl with a simple and practical look. Her greatest desire is ultimate knowledge."; D2:"She's a cold, lone wolf with a sense of humor")
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ibaCpFUWVb9/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,301 @@
# MeetingMate: an Ambient Interface for Improved Meeting Effectiveness and Corporate Knowledge Sharing

Figure 1. The MeetingMate system. The content being presented (A) is captured and interpreted, then relevant corporate knowledge information is displayed on the devices of meeting attendees.
## ABSTRACT
We present MeetingMate, a system for improving meeting effectiveness and knowledge transfer within an organization. The system utilizes already existing content produced within the organization (slide decks, meeting information, HR databases, etc.) from which it generates and presents contextually relevant information in real-time to meeting participants through an ambient interface. Besides providing details about projects and content within the company, an employee relationship graph is created which supports increasing a user's "metaknowledge" about who knows what and who knows whom within the organization.
## INTRODUCTION
The institutional knowledge of a corporation is an important resource [29], and for a corporation to be successful it is necessary for this knowledge to be shared and transferred from those who have it to those who need it [9]. However, large workforces, distributed locations, and demanding schedules act as barriers to successful knowledge transfer. Companies often employ specific activities designed to improve knowledge sharing, such as email newsletters, wiki pages, and all-hands presentations; however, these require employees to do additional work beyond their normal job functions, for an unknown and uncertain future benefit.

*Submitted to GI 2021*
Besides improving knowledge and awareness of what is going on within a company, it is valuable to improve knowledge of "who knows what" and "who knows whom" within an organization. Such knowledge is referred to as metaknowledge [24], and increases in metaknowledge have been linked to improved work performance [25], improved ability to create new innovations combining existing ideas [13], and reduced duplication of work [10].
In knowledge-based work environments it is common for workers to spend between 20% and 80% of their time in meetings [17, 20, 28, 32], and while meetings are considered important [7, 16], they are also often deemed by the attendees to be inefficient and ineffective [18, 27].
This paper describes MeetingMate, a system for improving meeting effectiveness and knowledge transfer in an organization through an ambient interface. The MeetingMate system utilizes already existing content produced within the organization as source material (slide decks, meeting information, HR databases, etc.) from which it generates and presents contextually relevant information in real-time to meeting participants. This work contributes a novel technique for extracting presented meeting content directly from an HDMI stream, and is unique in its goal of presenting not only corporate "knowledge" about topics within the company, but also improving employees' "metaknowledge" about who knows what and who knows whom within the organization.
## RELATED WORK
## Meeting Assistance
The development of technology to support and enhance meetings has long been a popular topic of research [34]. Rienks et al. [26] summarize much of the work in "pro-active" meeting assistants, and divide systems into categories based on when they provide assistance: before the meeting, during the meeting, or after the meeting.
Meeting assistants which record the audio and/or visual content of meetings for future viewing are often referred to as "Smart Meeting Systems", and include projects such as the CALO Meeting Assistant System [33] which distributes the task of meeting capture, annotation, and audio transcription, and work by Geyer et al. [8] exploring the idea of allowing meeting participants to create meaningful indices into the meeting timeline while the meeting is occurring to improve later navigation. For a more thorough listing of work on "after the meeting" assistance, see Yu and Nakamura [37].
Of the systems designed for in-meeting support, many make use of an audio channel. SmartMic [36] makes use of smartphones to capture the audio of a meeting, and the AMIDA system [22] uses microphones in an instrumented meeting room to listen for key words in the conversation of a meeting and pull up or suggest contextually relevant documents. The Connector [5] uses the audio and video channels of a smart meeting room to determine if someone is available to receive a message, and provides mechanisms to deliver the message using the meeting room facilities. Our system is similar in some ways to AMIDA in that both bring up relevant content based on meeting context; however, while AMIDA uses the audio of a meeting, our system derives context from the material being sent to the meeting room's projector. We are unaware of any prior work which extracts the visual content being presented in a meeting as context for a real-time meeting assistant.
## Corporate Knowledge
Some consider knowledge to be a company’s "greatest asset" [31]. Lee et al. developed the KMPI metric [3] to measure how well an organization performs in the area of Knowledge Management measured in five dimensions: knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. Our system aims to primarily improve knowledge sharing and knowledge utilization.
For making better use of existing corporate knowledge resources, Zanker and Gordea [38] created a recommendation engine to help when manually searching through internal documents. Aastrand et al. [1] propose using open data to bootstrap the process of creating a hierarchical tagging structure for internal content, while Chen [4] looks at the process of text-mining through corporate documents to extract useful information. When these projects consider searching through and mining corporate data, they are considering "purposefully" created artifacts such as documents and web pages.
Our work differs in that while we do mine purposefully created materials such as slide decks and project pages for data, we also make substantial use of "ancillary" corporate data such as meeting room records, mailing lists, and HR databases to generate a more complete picture of the corporate network.
## Ambient Information Systems
Ambient interfaces [2, 15, 21, 35] can be characterized as systems which support the monitoring of noncritical information with the intent of not distracting or burdening the user. Ambient displays have been studied for many uses, including software learning [14], social awareness [6], and office work [12].
Pousman and Stasko [23] outline four dimensions in the design of an ambient display system: information capacity, notification level, representational fidelity, and aesthetic emphasis. In our system we are aiming for high information capacity and representational fidelity, while keeping distractions to a minimum with a low notification level.
## CORPORATE KNOWLEDGE CHALLENGES
This work was developed at [COMPANY NAME] using [COMPANY NAME] internal data. [COMPANY NAME] is a multinational software company of ~11,000 employees. The workforce is widely distributed, with many distinct offices, and 17 of those offices house more than 150 employees.
The company faces many of the challenges of corporate knowledge management [19], and results from the yearly employee survey suggest employees generally wish they had more awareness of what is going on in other parts of the company. [COMPANY NAME] has started a number of initiatives designed to improve awareness and knowledge sharing throughout the company, such as wiki pages, project groups, mailing lists, and all-hands presentations. However, since these all require employees to do some additional work beyond their normal job duties without the guarantee of a particular future benefit, these initiatives have not had the desired effect on corporate knowledge sharing.
The goal of our work is to take advantage of the vast amount of material already being produced, and information naturally available within an organization to improve efficiency and awareness. A primary design objective for the system is that there is no additional cost for someone to use the system; that is, it should be just as easy to use the system as it is to not use the system.
## MEETINGMATE
There are many different times and activities throughout the day where employee corporate knowledge could be improved. We've chosen to focus on times when employees are attending meetings. Since employees are involved in a large number of meetings [17, 20, 28, 32], a system designed to augment the experience of attending meetings would have a broad reach within the organization, and since those meetings are often considered ineffective [18, 27], a meeting augmentation system could have the dual benefit of increasing overall corporate knowledge while simultaneously improving the effectiveness of the meeting.
To this end, we've created MeetingMate, a system consisting of three main components: a Data Collector, a Presentation Capture System, and an Ambient Assistant (Figure 2).

Figure 2. Architecture of the MeetingMate system. (components in light grey are part of the existing meeting room infrastructure).
At a high level, the MeetingMate system uses the visual content presented at a meeting as the "search query" for a corporate knowledge database, and presents contextually relevant information to the meeting attendees through an ambiently updating interface. We next describe the three main components of the MeetingMate system in more detail.
## Data Sources/Data Collection
This section describes a number of existing data sources within the organization, what information is available within these sources, and how the sources are processed to extract their content. For this work we only considered data which was "publicly" available within the company, that is, data which everyone within the company has access to. By only using "publicly" available data, we minimize the risk that someone using MeetingMate will see privileged or confidential information to which they should not have access.
## S1. Slide Decks
Within the company, there are two main locations where documents are stored: a Microsoft Sharepoint [39] server, and a JFile [name changed for anonymity] project management system. Between the two locations, there are 13,688 PowerPoint (PPT) slide decks dating back to 1997, with 5,343 presentations created between 2014 and 2016. The decks cover a wide range of topics and have been submitted by authors in all divisions of the company.
Processing the slide decks involves two main steps: collecting them from the servers, and analyzing the slides to extract relevant data. For the documents hosted on the Sharepoint server, the Sharepoint API [40] was used to search and download all files of either *.ppt or *.pptx file type. The JFile server does not have a useful API for this purpose, so a web-scraper was written in Python which iterates over each project and crawls through each sub-folder in the documents tree downloading *.ppt and *.pptx files. For both the Sharepoint and JFile based slide decks, high-level metadata such as the creation date, author, and file location are captured during the collection process.
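The crawler's filtering step could be sketched as follows in Python. Since the JFile server is anonymized, the URLs in the example are placeholders, and the function names are ours:

```python
from urllib.parse import urlparse

# File extensions the crawler downloads from the documents tree.
SLIDE_EXTENSIONS = (".ppt", ".pptx")

def is_slide_deck(url):
    """Return True if a crawled link points at a PowerPoint file."""
    path = urlparse(url).path.lower()
    return path.endswith(SLIDE_EXTENSIONS)

def collect_slide_links(links):
    """Filter a project page's outgoing links down to slide decks to fetch."""
    return [u for u in links if is_slide_deck(u)]
```

In the real crawler, this filter would sit inside a loop that walks each project's sub-folders, downloading every matching file and recording its creation date, author, and file location alongside it.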
Once the PPTs are downloaded, a data extraction process begins. A C# program using the Office.Interop.PowerPoint libraries saves images of each slide in multiple resolutions as .png files, and the text on each slide is extracted and saved to a database.
To download the full collection of 13,688 slide decks and extract the content from the 310,554 slides takes approximately 48 hours on a desktop computer. On a daily basis the Sharepoint and JFile systems are respectively searched and crawled, and newly added PPTs are downloaded and processed. This daily process takes approximately one hour.
## S2. Meeting Information
Since the system is restricted to internally public data, we cannot access the calendars of individual employees for meeting records. However, the majority of meetings take place in meeting rooms, which have shared calendars. As the company uses a Microsoft Outlook mail and calendar system, a C# program using the Office.Interop.Outlook libraries was written to first collect a list of all meeting rooms, and then step through each of the past meetings which have occurred in each room. For each meeting we record the meeting's name (which often indicates the topic of the meeting), location, length, and list of attendees.
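As one illustration of how these records can be used downstream, pairwise co-attendance can be tallied from the collected attendee lists to provide weighted edges for the kind of employee relationship graph MeetingMate builds. This is a minimal Python sketch; the function name is ours:

```python
from collections import Counter
from itertools import combinations

def co_attendance_counts(meetings):
    """Count how often each pair of employees appeared in the same meeting.

    meetings: iterable of attendee lists, e.g. [["ann", "bo"], ...].
    Returns a Counter keyed by sorted (a, b) name pairs; the counts act as
    edge weights between employees who meet together often.
    """
    pairs = Counter()
    for attendees in meetings:
        # Deduplicate and sort so (a, b) and (b, a) map to the same edge.
        for a, b in combinations(sorted(set(attendees)), 2):
            pairs[(a, b)] += 1
    return pairs
```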
In total there were 719 meeting rooms which held a total of 355,233 meetings between 2014 and 2016. Collection of the entire data set took approximately 36 hours. The process of accessing each of the individual calendars is relatively time-consuming, taking ~5 hours for incremental daily updates.
## S3. Code Repositories
The source code developed by the company is primarily managed through an internal GitHub Enterprise server. Using a Python script with the GitHub API [41], data for the 6,251 internal git repositories are collected, including: repository name, description, contributors, languages used, and bytes of code. 1,131 employees are listed as contributors to at least one git repository.
Data collection for the full set of repositories requires ~3.5 hours. Incremental updates are not easily captured using the API, so the full set of repository data is collected each day.
## S4. JFile Project Pages
The JFile project management system is organized into individual "projects" which represent specific working or interest groups within the company. The system houses 3,405 groups, with a median member count of 8. The same crawler used for collecting the PPTs from JFile is used to collect project information such as the project name, project description, and a list of group members.
## S5. Individual Human Resources Data
Each of the 11,615 employees (contingent and full-time) at the company has an entry in the internal employee search system. This data is also available in spreadsheet form, with 42 columns of information for each employee; among the most relevant are name, email address, work location, job title, and manager's name. From the employee name and manager's name fields we are able to construct the formal organizational structure of the company. Headshot photos (which are available for 58% of employees) follow a consistent naming pattern and location, and are easily downloaded and associated with the appropriate record.
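Reconstructing the reporting structure from the name and manager's-name columns can be sketched as follows. This is a minimal illustration (field handling in the real 42-column spreadsheet is more involved, and the record shapes here are our assumption):

```python
def build_org_tree(records):
    """Derive the reporting structure from HR rows of (name, manager).

    records: iterable of (employee_name, manager_name) tuples, where the
    top-level executive lists no manager (None or empty string).
    Returns a {manager: [direct reports]} mapping.
    """
    reports = {}
    for name, manager in records:
        if manager:
            reports.setdefault(manager, []).append(name)
    return reports

def chain_of_command(name, manager_of):
    """Walk upward from an employee to the root using a name->manager map."""
    chain = []
    while name:
        chain.append(name)
        name = manager_of.get(name)
    return chain
```

The resulting tree gives the formal organizational structure, which complements the informal relationship graph derived from meetings and group memberships.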
Updating the individual HR data entails copying the daily spreadsheet from the HR system and running the script to look for and download any new, or updated, headshots. This process takes approximately 30 minutes.
## S6. Email Group Memberships
To simplify sending emails and meeting requests to collections of people, the company makes use of email groups. There are a total of 15,054 email groups stored on the Microsoft Outlook mail server, with between 1 and 4,316 members each and a median member count of 6.
The email group data (group name and membership list) is again collected with a C# program using the Office.Interop.Outlook library, and the collection completes in approximately 30 minutes.
## S7. Corporate Definitions
Stored on the company intranet is an employee-maintained database of acronyms and terms frequently used within the organization. 322 acronyms and 776 terms are defined in this database which is downloaded on a daily basis.
## Live Presentation Capture
In order to supplement the presentation material with relevant information, the MeetingMate system needs to be aware of what is being presented. One possible way to do this would be to write an extension for PowerPoint which uses the Interop.PowerPoint APIs to extract the data being presented and transfer that information to the MeetingMate server. However, this approach has a number of shortcomings. First, it would only work for presentation material from Microsoft PowerPoint. Second, and more significantly, it would require presenters to do the additional work of installing a plug-in on the machine from which they are presenting. Since a primary design concern of MeetingMate is to not require additional set-up work for people to make use of the system, this approach is undesirable.
Our approach is instead to use an HDMI capture and pass-through device (designed for live-streaming video games) to capture a copy of exactly what is being displayed on the presentation screen. In this way, the presenter performs the exact same steps to present content as they usually would (plug a video cable into their laptop), but rather than the cable going directly to the projector, it goes to the HDMI capture device, which passes the signal on to the projector (Figure 2).
The content saved by the HDMI capture device is an image of what is currently being sent to the presentation screen. This image needs to be processed to find any text being displayed.
|
| 120 |
+
|
| 121 |
+
The Windows computer connected to the capture device uploads screenshots to the Project Oxford OCR [42] service for text extraction. The process of uploading the screenshot to the OCR server and receiving the extracted text takes an average of 2.0 seconds. Images are only sent to the OCR service when the projected slide has changed; this is accomplished by comparing the most recent image with the previously uploaded one, and only uploading the new image if at least 15% of the pixels have changed. The extracted text is sent to the Ambient Assistant server (Figure 2).
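
The slide-change check above can be sketched as follows. This is a minimal illustration, assuming frames arrive as flat lists of pixel values; the helper name and the per-pixel tolerance are hypothetical, and only the 15% threshold comes from the system description.

```python
# Sketch of the slide-change check: a new frame is only sent to the OCR
# service if at least 15% of its pixels differ from the previously
# uploaded frame. Frames are modelled here as flat lists of pixel values;
# the per-pixel tolerance is an illustrative assumption.

CHANGE_THRESHOLD = 0.15  # fraction of pixels that must differ

def slide_changed(prev_frame, new_frame, tolerance=10):
    """Return True if enough pixels differ to treat this as a new slide."""
    assert len(prev_frame) == len(new_frame)
    changed = sum(
        1 for p, n in zip(prev_frame, new_frame) if abs(p - n) > tolerance
    )
    return changed / len(prev_frame) >= CHANGE_THRESHOLD

# A frame where 20% of pixels changed triggers an OCR upload, while a
# 5% change (e.g. a mouse cursor moving) does not.
prev = [0] * 100
print(slide_changed(prev, [255] * 20 + [0] * 80))  # True
print(slide_changed(prev, [255] * 5 + [0] * 95))   # False
```

In the deployed system this comparison would run on the full HDMI capture images rather than toy pixel lists.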

Using OCR for the text extraction not only allows text within images to be recognized, but also enables content from any source (PDF, video, etc.) to be analyzed. This makes MeetingMate completely agnostic to the format of the presented material.

## Ambient Assistant

The final piece of the MeetingMate system is the Ambient Assistant server, which receives the extracted text from the content being presented, finds relevant corporate knowledge content, and serves the results as a responsive webpage. The server is written in Python using the Flask framework and a Tornado server, running on a Windows Server 2012 instance.

The goal of the Ambient Assistant webpage is to be as unobtrusive and non-disruptive as possible, while still providing useful information which will enhance the audience's understanding of the presentation.

The relevant knowledge content the Ambient Assistant displays is shown on the served page as a series of 'cards' (Figure 1). The individual cards are designed to present the most relevant information at a glance, without requiring input, or too much attention, from the user. As new cards become available they slowly fade in at the bottom of the screen (over a period of 5 seconds) while the page automatically scrolls to make the most recent cards visible. This webpage could be viewed on a number of devices - we have explored several, including projecting the Ambient Assistant onto a secondary screen beside the main projection screen - but we believe the most useful configuration is for individuals to view the Ambient Assistant on a personal device such as a phone, tablet, or laptop.

The following sections describe the types of cards which are available, when they are displayed, and what information they contain.

## Acronyms and Definitions

Corporate communications are often riddled with acronyms and jargon, making the text unnecessarily difficult to understand [30, 43]. As an example, in the 13,688 slide decks collected from the company servers, there are over 132,588 instances of acronyms across the 300,000 slides. However, only 1.7% of those acronyms are defined within the slide deck where they are used.

The first two card types are internal technology definitions and acronym expansions. The presentation text is searched for any of the collected corporate definitions or acronyms (S7), and if they are found, a card is shown with the acronym expanded (if applicable) and the term defined (Figure 3).



Figure 3. Sample internal technology definition (left) and acronym expansion (right) cards.

## Employee Information
The employee information card is displayed whenever an employee's name or email address is found on a slide (Figure 4). The lists of employee names and email addresses are derived from the human resources data (S5). Early testing revealed a common occurrence where the name an employee goes by is different from their 'official' name in the HR database (e.g., Jon Smith vs. Jonathan Smith). To overcome this, a list of common name alternatives was used to generate a list of possible names for each person (e.g., Jonathan Smith could be either "Jon Smith" or "Jonathan Smith").
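
The name-alternative expansion can be sketched as below. The nickname table here is a small hypothetical sample, and the function name is illustrative; the real system uses a larger list of common name alternatives.

```python
# Sketch of name-alternative expansion: for each employee, generate the
# set of name variants they might appear under on a slide. The nickname
# table is an illustrative sample, not the system's actual list.

NICKNAMES = {
    "jonathan": ["jon"],
    "william": ["will", "bill"],
    "robert": ["rob", "bob"],
}

def name_alternatives(first, last):
    """Return all name variants an employee might appear under."""
    firsts = [first] + NICKNAMES.get(first.lower(), [])
    return [f"{f.capitalize()} {last}" for f in firsts]

print(name_alternatives("Jonathan", "Smith"))
# ['Jonathan Smith', 'Jon Smith']
```

Each variant would then be matched against the OCR-extracted slide text alongside the official HR name.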



Figure 4. Sample employee information card.

Besides the employee's name, the card also displays the employee's headshot, job title, and work location. Additionally, lists of the projects (S4) and code repositories (S3) the employee is actively contributing to are included. Finally, the most closely related employees according to the computed employee network graph (discussed below) are presented as a list of "Frequent Collaborators". Combined, these lists give an overview of both what and who this employee knows.

## Projects/Code Repositories

The next two card types are project cards and repository cards (Figure 5), which are derived from the JFile (S4) and GitHub (S3) data sources. For each, the name and description of the project/repository is displayed, along with the names of members or contributors; in the case of repository cards, the list of programming languages used is also shown.



Figure 5. Sample project (left) and repository (right) cards.
The project and repository cards are shown whenever the exact project or repository name is found on a slide. Additionally, these cards are displayed if the text of the slide contains many of the same keywords as the description of a project or repository. This is determined by transforming the descriptions of the projects and repositories into tf-idf vectors [11], computing the cosine similarity between the descriptions and the recognized text, and displaying the projects/repositories with a cosine similarity > 0.85.
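
A minimal sketch of this tf-idf matching, assuming pre-tokenized text and a tiny illustrative corpus (the deployed system indexes the full project/repository descriptions, and the helper names here are hypothetical):

```python
# Sketch of tf-idf matching for project/repository cards: descriptions
# and the recognized slide text are turned into tf-idf vectors and
# compared by cosine similarity; matches above the 0.85 threshold are
# displayed. The corpus below is illustrative only.
import math

def tfidf_vectors(docs):
    """Convert tokenized documents into tf-idf dictionaries."""
    n = len(docs)
    df = {}  # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        tf = {t: doc.count(t) / len(doc) for t in set(doc)}
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse tf-idf vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "ui unification rendering pipeline".split(),   # recognized slide text
    "ui unification project rendering".split(),    # project description A
    "payroll database migration".split(),          # project description B
]
slide, proj_a, proj_b = tfidf_vectors(docs)
print(cosine(slide, proj_a) > cosine(slide, proj_b))  # True
```

In production one would typically reach for a library implementation (e.g. a vectorizer from scikit-learn) rather than this hand-rolled version, but the matching logic is the same.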

## Contextually Similar Slides

The final card type displays contextually relevant slides (Figure 6) from the collection of over 300,000 slides gathered from the internal slide deck repositories (S1). Besides an image of the contextually relevant slide itself, the card also presents the author of the slide deck, when it was created, and a list of employees who are mentioned somewhere in the deck. Clicking on the "more" icon at the top right of the card presents options to: generate an email containing a link to the contextually relevant presentation, open the presentation directly, or dismiss this card and have it no longer be suggested. The other card types each have similar menus.



Figure 6. Sample contextually similar slide card (left), and the associated context menu (right).

Analogous to the process used to find contextually similar projects and repositories, the text for each of the slides in the slide deck repository is converted to tf-idf vectors and compared to the captured text. To increase exposure to a wider range of content, if there are any slides with a cosine similarity > 0.85, one slide is chosen at random and displayed.

## Personal Connections, Employee Relationships

The information on the cards above is tailored to the content being presented, but does not change based on who is viewing the Ambient Assistant. By taking into account who is using the Ambient Assistant ("Clarence Mcevoy" in Figure 7), an additional "Personal Connection" section can be inserted into the cards which highlights a connection the logged-in user has to a person or project (Figure 7).



Figure 7. Sample cards with additional personal connection information.

This personal connection field is populated by looking at an employee network graph computed using the Meeting Information (S2), Code Repository (S3), JFile Project Pages (S4), HR Data (S5), and Email Lists (S6) data sets. For each data set, an adjacency matrix is computed based on the strength of the inter-personal connection for each pair of employees.

For example, using the Meeting Information data set, we look at each meeting both employees were a part of, and calculate the weight of the edge between two employees ($e_1$ and $e_2$) with the following formula:

$$
\text{weight}_{e_1,e_2} = \sum_{m \in \text{meetings}(e_1,e_2)} \frac{\text{length}_m}{\#\text{attendees}_m}
$$

where $\text{length}_m$ is the length of the meeting, and $\#\text{attendees}_m$ is the number of people in the meeting. This means longer meetings, and meetings with fewer attendees, are weighted more heavily. Once all edge weights are computed, they are globally ranked from strongest to weakest, and re-mapped to between 0 and 1. Similar calculations are carried out for the repository, project page, and email list data sets. For employee data, edges are created between employees and managers.

The edges from the individual data set adjacency matrices are then combined into one overall adjacency matrix by summing the weights from the individual edges to create a composite "weight" metric incorporating all the different measures of adjacency. Then, to find the strongest "personal connection" between any pair of employees, we look for the heaviest-weight, shortest path. That is, we take the set of all shortest paths between the two employee nodes, and then select the path in which the edge weights are the highest.

The heaviest-weight, shortest path is then converted to a natural language sentence by looking at the strongest components of each edge. For example, "You are managed by Lester Carr, who frequently meets with William Topolinski", or "You are in some project groups (including UI Unification) with Alberta Santiago, who reports to Norma Stenn". The overall adjacency matrix represents a single connected component, so a path can be computed between any two employees. However, only paths with at most one intermediate node are displayed, to emphasize "strong" connections.

## FEEDBACK AND DISCUSSION

To preliminarily test the design and usefulness of MeetingMate, the system was deployed and tested over a series of weekly group meetings. In several cases the system provided truly unknown information to the audience (in one case, while sharing a research paper, an employee card for one of the authors showed up - prior to that, attendees at the meeting did not know the author had been hired). This deployment also indicated some ways the system produced spurious or unnecessary results - for example, the internal dictionary contains a definition for "User" as "someone who uses our software", and this definition appeared whenever the (very common) word 'user' appeared on a slide. While it is easy for users to mark content to "never be shown again", it would be useful to reduce the number of low-utility results at the system level. In the future, more advanced language modelling could look at the surrounding context of the slides, or recent slides, to better predict what results might be most relevant. It would also be interesting to explore recognizing content other than text, such as the headshots of particular people, and using those as contextual cues.

The next step is to deploy the system in a meeting room continuously for several months to collect more feedback about how the system performs and is received. This wider deployment will bring up some interesting issues. For meetings concerning highly sensitive topics, we plan to have a very clear, physical switch for presenters to "disable" MeetingMate if they are uncomfortable with the content of their presentation being captured. It will be interesting to see how frequently that functionality is utilized.

While we are limiting ourselves to "publicly" available internal data, there are still cases where information which is technically visible to all employees probably shouldn't be. For example, a meeting name could be "Discuss firing John Doe". The creator of such a meeting is probably unaware that the meeting name is publicly viewable. Practically, the system will need a way to remove these sorts of sensitive entries, but hopefully the deployment of this system will make people more aware of what is visible to other employees. It would also be interesting to make use of non-public information in the system to allow for more personalized recommendations.
Overall, we believe MeetingMate serves as a valuable solution for improving meeting effectiveness and knowledge sharing within an organization, and will serve as an example for further work in this area.

## REFERENCES

1. Aastrand, G., Celebi, R. and Sauermann, L. 2010. Using Linked Open Data to Bootstrap Corporate Knowledge Management in the OrganiK Project. Proceedings of the 6th International Conference on Semantic Systems (I-SEMANTICS '10), ACM, 18:1-18:8. http://doi.org/10.1145/1839707.1839730

2. Cadiz, J.J., Venolia, G., Jancke, G. and Gupta, A. 2002. Designing and Deploying an Information Awareness Interface. Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work (CSCW '02), ACM, 314-323. http://doi.org/10.1145/587078.587122

3. Chang Lee, K., Lee, S. and Kang, I.W. 2005. KMPI: measuring knowledge management performance. Information & Management 42, 3: 469-482. http://doi.org/10.1016/j.im.2004.02.003

4. Chen, H. 2001. Knowledge Management Systems: A Text Mining Perspective. Knowledge Computing Corporation. Retrieved from http://arizona.openrepository.com/arizona/handle/10150/106481

5. Danninger, M., Flaherty, G., Bernardin, K., Ekenel, H.K., Köhler, T., Malkin, R., Stiefelhagen, R. and Waibel, A. 2005. The Connector: Facilitating Context-aware Communication. Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI '05), ACM, 69-75. http://doi.org/10.1145/1088463.1088478

6. Dourish, P. and Bly, S. 1992. Portholes: Supporting Awareness in a Distributed Work Group. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92), ACM, 541-547. http://doi.org/10.1145/142750.142982

7. Doyle, M. 1993. How to Make Meetings Work! Berkley, New York.

8. Geyer, W., Richter, H. and Abowd, G.D. 2005. Towards a Smarter Meeting Record-Capture and Access of Meetings Revisited. Multimedia Tools Appl. 27, 3: 393-410. http://doi.org/10.1007/s11042-005-3815-0

9. Hinds, P.J., Patterson, M. and Pfeffer, J. 2001. Bothered by abstraction: the effect of expertise on knowledge transfer and subsequent novice performance. The Journal of Applied Psychology 86, 6: 1232-1243.

10. Jackson, P. and Klobas, J. 2008. Transactive memory systems in organizations: Implications for knowledge directories. Decision Support Systems 44, 2: 409-424. http://doi.org/10.1016/j.dss.2007.05.001

11. Jones, K.S. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation 28: 11-21.

12. MacIntyre, B., Mynatt, E.D., Voida, S., Hansen, K.M., Tullio, J. and Corso, G.M. 2001. Support for Multitasking and Background Awareness Using Interactive Peripheral Displays. Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology (UIST '01), ACM, 41-50. http://doi.org/10.1145/502348.502355

13. Majchrzak, A., Cooper, L.P. and Neece, O.E. 2004. Knowledge Reuse for Innovation. Management Science 50, 2: 174-188. http://doi.org/10.1287/mnsc.1030.0116

14. Matejka, J., Grossman, T. and Fitzmaurice, G. 2011. Ambient Help. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11), ACM, 2751-2760. http://doi.org/10.1145/1978942.1979349

15. Matthews, T. 2006. Designing and Evaluating Glanceable Peripheral Displays. Proceedings of the 6th Conference on Designing Interactive Systems (DIS '06), ACM, 343-345. http://doi.org/10.1145/1142405.1142457

16. Monge, P.R., McSween, C. and Wyer, J. 1989. A profile of meetings in corporate America: Results of the 3M meeting effectiveness study. Annenberg School of Communications, University of Southern California.

17. Mosvick, R.K. and Nelson, R.B. 1986. We've Got to Start Meeting Like This!: A Guide to Successful Business Meeting Management. HarperCollins Canada / Scott F - Prof, Glenview, Ill.

18. Murray, G. 2014. Learning How Productive and Unproductive Meetings Differ. In Advances in Artificial Intelligence, Marina Sokolova and Peter van Beek (eds.). Springer International Publishing, 191-202. Retrieved April 12, 2016 from http://link.springer.com/chapter/10.1007/978-3-319-06483-3_17

19. Kühn, O. and Abecker, A. 1997. Corporate Memories for Knowledge Management in Industrial Practice: Prospects and Challenges. J. UCS 3, 8: 929-954. http://doi.org/10.1007/978-3-662-03723-2_9

20. Panko, R.R. 1992. Managerial Communication Patterns. Journal of Organizational Computing 2, 1: 95-122. http://doi.org/10.1080/10919399209540176

21. Plaue, C. and Stasko, J. 2007. Animation in a Peripheral Display: Distraction, Appeal, and Information Conveyance in Varying Display Configurations. Proceedings of Graphics Interface 2007 (GI '07), ACM, 135-142. http://doi.org/10.1145/1268517.1268541

22. Popescu-Belis, A., Boertjes, E., Kilgour, J., Poller, P., Castronovo, S., Wilson, T., Jaimes, A. and Carletta, J. 2008. The AMIDA automatic content linking device: Just-in-time document retrieval in meetings. 5237 LNCS: 272-283. http://doi.org/10.1007/978-3-540-85853-9-25

23. Pousman, Z. and Stasko, J. 2006. A Taxonomy of Ambient Information Systems: Four Patterns of Design. Proceedings of the Working Conference on Advanced Visual Interfaces (AVI '06), ACM, 67-74. http://doi.org/10.1145/1133265.1133277

24. Ren, Y. and Argote, L. 2011. Transactive Memory Systems 1985-2010: An Integrative Framework of Key Dimensions, Antecedents, and Consequences. The Academy of Management Annals 5, 1: 189-229. http://doi.org/10.1080/19416520.2011.590300

25. Ren, Y., Carley, K.M. and Argote, L. 2006. The Contingent Effects of Transactive Memory: When Is It More Beneficial to Know What Others Know? Management Science 52, 5: 671-682. http://doi.org/10.1287/mnsc.1050.0496

26. Rienks, R., Nijholt, A. and Barthelmess, P. 2007. Proactive meeting assistants: attention please! AI & SOCIETY 23, 2: 213-231. http://doi.org/10.1007/s00146-007-0135-0

27. Rogelberg, S.G., Scott, C. and Kello, J. 2007. The science and fiction of meetings. MIT Sloan Management Review 48, 2: 18.

28. Romano Jr, N. and Nunamaker Jr, J. 2001. Meeting Analysis: Findings from Research and Practice. Proceedings of the 34th Annual Hawaii International Conference on System Sciences (HICSS-34) - Volume 1 (HICSS '01), IEEE Computer Society, 1072. Retrieved April 12, 2016 from http://dl.acm.org/citation.cfm?id=820557.820581

29. Wang, S. and Noe, R.A. 2010. Knowledge Sharing: A Review and Directions for Future Research. Human Resource Management Review 20, 2: 115-131. http://doi.org/10.1016/j.hrmr.2009.10.001

30. Simon, P. Message Not Received. Retrieved April 11, 2016 from https://www.goodreads.com/work/best_book/42761341-message-not-received-why-business-communication-is-broken-and-how-to-fi

31. Offsey, S. 1997. Knowledge Management: Linking People to Knowledge for Bottom Line Results. Journal of Knowledge Management 1, 2: 113-122. http://doi.org/10.1108/EUM0000000004586

32. 3M Meeting Management Team and Drew, J. 1994. Mastering Meetings: Discovering the Hidden Potential of Effective Business Meetings. McGraw-Hill, New York.

33. Tur, G., Stolcke, A., Voss, L., Peters, S., Hakkani-Tur, D., Dowding, J., Favre, B., Fernandez, R., Frampton, M., Frandsen, M., Frederickson, C., Graciarena, M., Kintzing, D., Leveque, K., Mason, S., Niekrasz, J., Purver, M., Riedhammer, K., Shriberg, E., Jing Tien, Vergyri, D. and Fan Yang. 2010. The CALO Meeting Assistant System. IEEE Transactions on Audio, Speech, and Language Processing 18, 6: 1601-1611. http://doi.org/10.1109/TASL.2009.2038810

34. Turoff, M. and Hiltz, S.R. 1977. Telecommunications: Meeting through your computer: Information exchange and engineering decision-making are made easy through computer-assisted conferencing. IEEE Spectrum 14, 5: 58-64. http://doi.org/10.1109/MSPEC.1977.6367610

35. Vogel, D. and Balakrishnan, R. 2004. Interactive Public Ambient Displays: Transitioning from Implicit to Explicit, Public to Personal, Interaction with Multiple Users. Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology (UIST '04), ACM, 137-146. http://doi.org/10.1145/1029632.1029656

36. Xu, H., Yu, Z., Wang, Z. and Ni, H. 2014. SmartMic: a smartphone-based meeting support system. The Journal of Supercomputing 70, 3: 1318-1330. http://doi.org/10.1007/s11227-014-1229-3

37. Yu, Z. and Nakamura, Y. 2010. Smart Meeting Systems: A Survey of State-of-the-art and Open Issues. ACM Comput. Surv. 42, 2: 8:1-8:20. http://doi.org/10.1145/1667062.1667065

38. Zanker, M. and Gordea, S. 2006. Recommendation-based Browsing Assistance for Corporate Knowledge Portals. Proceedings of the 2006 ACM Symposium on Applied Computing (SAC '06), ACM, 1116-1117. http://doi.org/10.1145/1141277.1141541

39. SharePoint - Team Collaboration Software Tools. Microsoft Office. Retrieved April 13, 2016 from https://products.office.com/en-us/sharepoint/collaboration

40. SharePoint 2013. Retrieved April 13, 2016 from https://msdn.microsoft.com/en-us/library/office/jj162979.aspx

41. GitHub API v3 | GitHub Developer Guide. Retrieved April 13, 2016 from https://developer.github.com/v3/

42. Microsoft Cognitive Services. Retrieved April 11, 2016 from https://www.microsoft.com/cognitive-services/en-us/computer-vision-api

43. Why you should cool it with the corporate jargon - Fortune. Retrieved April 11, 2016 from http://fortune.com/2011/09/28/why-you-should-cool-it-with-the-corporate-jargon/
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/ibaCpFUWVb9/Initial_manuscript_tex/Initial_manuscript.tex
§ MEETINGMATE: AN AMBIENT INTERFACE FOR IMPROVED MEETING EFFECTIVENESS AND CORPORATE KNOWLEDGE SHARING

<graphics>

Figure 1. The MeetingMate system. The content being presented (A) is captured and interpreted, then relevant corporate knowledge information is displayed on the devices of meeting attendees.

§ ABSTRACT

We present MeetingMate, a system for improving meeting effectiveness and knowledge transfer within an organization. The system utilizes already existing content produced within the organization (slide decks, meeting information, HR databases, etc.) from which it generates and presents contextually relevant information in real-time to meeting participants through an ambient interface. Besides providing details about projects and content within the company, an employee relationship graph is created which supports increasing a user's "metaknowledge" about who knows what and who knows whom within the organization.

§ INTRODUCTION

The institutional knowledge of a corporation is an important resource [29], and for a corporation to be successful it is necessary for this knowledge to be shared and transferred from those who have it to those who need it [9]. However, large workforces, distributed locations, and demanding schedules act as barriers to successful knowledge transfer. Companies often employ specific activities designed to improve knowledge sharing, such as email newsletters, wiki pages, and all-hands presentations; however, these require employees to do additional work beyond their normal job functions, for some unknown and unsure future benefit.

Submitted to GI 2021.

Besides improving knowledge and awareness of what is going on within a company, it is valuable to improve knowledge of "who knows what" and "who knows whom" within an organization. Such knowledge is referred to as metaknowledge [24], and increases in metaknowledge have been linked to improved work performance [25], improved ability to create new innovations combining existing ideas [13], and reduced duplication of work [10].

In knowledge-based work environments it is common for workers to spend between 20% and 80% of their time in meetings [17, 20, 28, 32], and while meetings are considered important [7, 16], they are also often deemed by the attendees to be inefficient and ineffective [18, 27].
This paper describes MeetingMate, a system for improving meeting effectiveness and knowledge transfer in an organization through an ambient interface. The MeetingMate system utilizes already existing content produced within the organization as source material (slide decks, meeting information, HR databases, etc.) from which it generates and presents contextually relevant information in real-time to meeting participants. This work contributes a novel technique for extracting presented meeting content directly from an HDMI stream, and is unique in its goal of presenting not only corporate "knowledge" about topics within the company, but also improving employees' "metaknowledge" about who knows what and who knows whom within the organization.

§ RELATED WORK

§ MEETING ASSISTANCE

The development of technology to support and enhance meetings has long been a popular topic of research [34]. Rienks et al. [26] summarize much of the work on "pro-active" meeting assistants, and divide systems into categories based on when they provide assistance: before the meeting, during the meeting, or after the meeting.

Meeting assistants which record the audio and/or visual content of meetings for future viewing are often referred to as "Smart Meeting Systems", and include projects such as the CALO Meeting Assistant System [33], which distributes the task of meeting capture, annotation, and audio transcription, and work by Geyer et al. [8] exploring the idea of allowing meeting participants to create meaningful indices into the meeting timeline while the meeting is occurring to improve later navigation. For a more thorough listing of work on "after the meeting" assistance, see Yu and Nakamura [37].

Of the systems designed for in-meeting support, many make use of an audio channel. SmartMic [36] makes use of smartphones to capture the audio of a meeting, and the AMIDA system [22] uses microphones in an instrumented meeting room to listen for key words in the conversation of a meeting and pull up or suggest contextually relevant documents. The Connector [5] uses the audio and video channels of a smart meeting room to determine if someone is available to receive a message, and provides mechanisms to deliver the message using the meeting room facilities. Our system is similar in some ways to AMIDA in that both bring up relevant content based on meeting context; however, while AMIDA uses the audio of a meeting, our system derives context from the material being sent to the meeting room's projector. We are unaware of any prior work which extracts the visual content being presented in a meeting as context for a real-time meeting assistant.

§ CORPORATE KNOWLEDGE

Some consider knowledge to be a company's "greatest asset" [31]. Lee et al. developed the KMPI metric [3] to measure how well an organization performs in the area of Knowledge Management, measured across five dimensions: knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. Our system aims primarily to improve knowledge sharing and knowledge utilization.
For making better use of existing corporate knowledge resources, Zanker and Gordea [38] created a recommendation engine to help when manually searching through internal documents. Aastrand et al. [1] propose using open data to bootstrap the process of creating a hierarchical tagging structure for internal content, while Chen [4] looks at the process of text-mining through corporate documents to extract useful information. When these projects consider searching through and mining corporate data, they are considering "purposefully" created artifacts such as documents and web pages.

Our work differs in that while we do mine purposefully created materials such as slide decks and project pages for data, we also make substantial use of "ancillary" corporate data such as meeting room records, mailing lists, and HR databases to generate a more complete picture of the corporate network.

§ AMBIENT INFORMATION SYSTEMS

Ambient interfaces [2, 15, 21, 35] can be characterized as systems which support the monitoring of noncritical information with the intent of not distracting or burdening the user. Ambient displays have been studied for many uses, including software learning [14], social awareness [6], and office work [12].
|
| 44 |
+
|
| 45 |
+
Pousman and Stasko [23] outline four dimensions in the design of an ambient display system: information capacity, notification level, representational fidelity, and aesthetic emphasis. In our system we are aiming for high information capacity and representational fidelity, while keeping distractions to a minimum with a low notification level.
§ CORPORATE KNOWLEDGE CHALLENGES
This work was developed at [COMPANY NAME] using [COMPANY NAME] internal data. [COMPANY NAME] is a multinational software company of ~11,000 employees. The workforce is widely distributed across many distinct offices, 17 of which house more than 150 employees.
The company faces many of the challenges of corporate knowledge management [19], and results from the yearly employee survey suggest employees generally wish they had more awareness of what is going on in other parts of the company. [COMPANY NAME] has started a number of initiatives designed to improve awareness and knowledge sharing throughout the company, such as wiki pages, project groups, mailing lists, and all-hands presentations. However, since these all require employees to do additional work beyond their normal job duties without the guarantee of a particular future benefit, these initiatives have not had the desired effect on corporate knowledge sharing.
The goal of our work is to take advantage of the vast amount of material already being produced, and of information naturally available within an organization, to improve efficiency and awareness. A primary design objective for the system is that there be no additional cost to use it; that is, it should be just as easy to use the system as not to use it.
§ MEETINGMATE
There are many different times and activities through the day where employee corporate knowledge could be improved. We've chosen to focus on times when employees are attending meetings. Since employees are involved in a large number of meetings [17, 20, 28, 32], a system designed to augment the experience of attending meetings would have a broad reach within the organization, and since those meetings are often considered ineffective [18, 27], a meeting augmentation system could have the dual benefit of increasing overall corporate knowledge while simultaneously improving the effectiveness of the meeting.
To this end, we've created MeetingMate, a system consisting of three main components: a Data Collector, a Presentation Capture System, and an Ambient Assistant (Figure 2).
Figure 2. Architecture of the MeetingMate system (components in light grey are part of the existing meeting room infrastructure).
At a high level, the MeetingMate system uses the visual content presented at a meeting as the "search query" for a corporate knowledge database, and presents contextually relevant information to the meeting attendees through an ambiently updating interface. We next describe the three main components of the MeetingMate system in more detail.
§ DATA SOURCES/DATA COLLECTION
This section describes a number of existing data sources within the organization, what information is available within these sources, and how the sources are processed to extract their content. For this work we only considered data which was "publicly" available within the company, that is, data which everyone within the company has access to. By only using "publicly" available data, we minimize the risk that someone using MeetingMate will see privileged or confidential information to which they should not have access.
§ S1. SLIDE DECKS
Within the company, there are two main locations where documents are stored: a Microsoft Sharepoint [39] server, and a JFile [name changed for anonymity] project management system. Between the two locations, there are 13,688 PowerPoint (PPT) slide decks dating back to 1997, with 5,343 presentations created between 2014 and 2016. The decks cover a wide range of topics and have been submitted by authors in all divisions of the company.
Processing the slide decks involves two main steps: collecting them from the servers, and analyzing the slides to extract relevant data. For the documents hosted on the Sharepoint server, the Sharepoint API [40] was used to search for and download all files of either the *.ppt or *.pptx file type. The JFile server does not have a useful API for this purpose, so a web-scraper was written in Python which iterates over each project and crawls through each sub-folder in the documents tree, downloading *.ppt and *.pptx files. For both the Sharepoint and JFile based slide decks, high-level metadata such as the creation date, author, and file location are captured during the collection process.
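The crawling step can be sketched as a simple link filter; the function names and URLs below are illustrative, not the actual scraper used in the system:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

# The collector only keeps PowerPoint files; extensions are matched
# case-insensitively so "Deck.PPTX" is still captured.
SLIDE_EXTENSIONS = {".ppt", ".pptx"}

def is_slide_deck(url: str) -> bool:
    """Return True if the URL's path ends in a PowerPoint extension."""
    path = PurePosixPath(urlparse(url).path)
    return path.suffix.lower() in SLIDE_EXTENSIONS

def filter_slide_links(links):
    """Keep only slide-deck links, dropping duplicates while preserving order."""
    seen, kept = set(), []
    for link in links:
        if is_slide_deck(link) and link not in seen:
            seen.add(link)
            kept.append(link)
    return kept
```

A crawler would apply `filter_slide_links` to the links discovered in each project's documents tree before downloading.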
Once the PPTs are downloaded, a data extraction process begins. A C# program using the Office.Interop.PowerPoint libraries saves images of each slide at multiple resolutions as .png files, and the text on each slide is extracted and saved to a database.
To download the full collection of 13,688 slide decks and extract the content from the 310,554 slides takes approximately 48 hours on a desktop computer. On a daily basis the Sharepoint and JFile systems are respectively searched and crawled, and newly added PPTs are downloaded and processed. This daily process takes approximately one hour.
§ S2. MEETING INFORMATION
Since the system is restricted to internally public data, we cannot access the calendars of individual employees for meeting records. However, the majority of meetings take place in meeting rooms, which have shared calendars. As the company uses a Microsoft Outlook mail and calendar system, a C# program using the Office.Interop.Outlook libraries was written to first collect a list of all meeting rooms, and then step through each of the past meetings which have occurred in each room. For each meeting we record the meeting's name (which often indicates the topic of the meeting), location, length, and list of attendees.
In total there were 719 meeting rooms which held a total of 355,233 meetings between 2014 and 2016. Collection of the entire data set took approximately 36 hours. The process of accessing each of the individual calendars is relatively time-consuming, taking ~5 hours for incremental daily updates.
§ S3. CODE REPOSITORIES
The source code developed by the company is primarily managed through an internal GitHub Enterprise server. Using a Python script with the GitHub API [41], data for the 6,251 internal git repositories are collected, including: repository name, description, contributors, languages used, and bytes of code. 1,131 employees are listed as contributors to at least one git repository.
Data collection for the full set of repositories requires ~3.5 hours. Incremental updates are not easily captured using the API, so the full set of repository data is collected each day.
§ S4. JFILE PROJECT PAGES
The JFile project management system is organized into individual "projects" which represent specific working or interest groups within the company. The system houses 3,405 groups, with a median member count of 8. The same crawler used for collecting the PPTs from JFile collects the project information, such as the project name, project description, and list of group members.
§ S5. INDIVIDUAL HUMAN RESOURCES DATA
Each of the 11,615 employees (contingent and full-time) at the company has an entry in the internal employee search system. This data is also available in spreadsheet form, with 42 columns of information for each employee. Among the most relevant are name, email address, work location, job title, and manager's name. From the employee name and manager's name fields we are able to construct the formal organizational structure of the company. Headshot photos (which are available for 58% of employees) follow a consistent naming pattern and location, and are easily downloaded and associated with the appropriate record.
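Reconstructing the reporting structure from the two name fields amounts to building a tree keyed on the manager column. A minimal sketch, assuming records with hypothetical `name` and `manager` keys:

```python
from collections import defaultdict

def build_org_chart(records):
    """Map each manager to the list of their direct reports."""
    reports = defaultdict(list)
    for rec in records:
        if rec.get("manager"):  # the top of the company has no manager entry
            reports[rec["manager"]].append(rec["name"])
    return dict(reports)

def chain_of_command(records, name):
    """Walk the reporting chain upward from an employee to the top."""
    manager_of = {r["name"]: r.get("manager") for r in records}
    chain = [name]
    while manager_of.get(chain[-1]):
        chain.append(manager_of[chain[-1]])
    return chain
```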
Updating the individual HR data entails copying the daily spreadsheet from the HR system and running the script to look for and download any new, or updated, headshots. This process takes approximately 30 minutes.
§ S6. EMAIL GROUP MEMBERSHIPS
To simplify sending emails and meeting requests to collections of people, the company makes use of email groups. There are a total of 15,054 email groups stored on the Microsoft Outlook mail server, each with between 1 and 4,316 members, and a median member count of 6.
The email group data (group name and membership list) is again collected with a C# program using the Office.Interop.Outlook library, and the collection completes in approximately 30 minutes.
§ S7. CORPORATE DEFINITIONS
Stored on the company intranet is an employee-maintained database of acronyms and terms frequently used within the organization. 322 acronyms and 776 terms are defined in this database, which is downloaded on a daily basis.
§ LIVE PRESENTATION CAPTURE
In order to supplement the presentation material with relevant information, the MeetingMate system needs to be aware of what is being presented. One possible way to do this would be to write an extension for PowerPoint which uses the Interop.PowerPoint APIs to extract the data being presented and transfer that information to the MeetingMate server. However, this approach has a number of shortcomings. First, it would only work for presentation material from Microsoft PowerPoint. Second, and more significantly, it would require presenters to do the additional work of installing a plug-in on the machine from which they are presenting. Since a primary design concern of MeetingMate is to not require additional set-up work for people to make use of the system, this approach is undesirable.
Our approach is instead to use an HDMI capture and pass-through device (designed for live-streaming video games) to capture a copy of exactly what is being displayed on the presentation screen. In this way, the presenter performs the exact same steps to present content as they usually do (plugging a video cable into their laptop), but rather than the cable going directly to the projector, it goes to the HDMI capture device, which passes the signal on to the projector (Figure 2).
The content saved by the HDMI capture device is an image of what is currently being sent to the presentation screen. This image needs to be processed to find any text being displayed.
The Windows computer connected to the capture device uploads screenshots to the Project Oxford OCR [42] service for text extraction. The process of uploading the screenshot to the OCR server and receiving the extracted text takes an average of 2.0 seconds. Images are only sent to the OCR service when the projected slide has changed; this is determined by comparing the most recent image with the previously uploaded one, and only uploading the new image if at least 15% of the pixels have changed. The extracted text is sent to the Ambient Assistant server (Figure 2).
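The change-detection step can be illustrated with a simple pixel-difference check. This is a sketch over flat pixel lists; the deployed system compares captured image buffers:

```python
# Only frames that differ from the previously uploaded one by at least
# 15% of their pixels are sent to the OCR service.
CHANGE_THRESHOLD = 0.15

def frame_changed(prev, curr, threshold=CHANGE_THRESHOLD):
    """Return True if the fraction of differing pixels meets the threshold."""
    if prev is None:  # nothing uploaded yet: always send the first frame
        return True
    assert len(prev) == len(curr), "frames must have identical dimensions"
    differing = sum(1 for a, b in zip(prev, curr) if a != b)
    return differing / len(curr) >= threshold
```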
Using OCR for the text extraction not only allows for text within images to be recognized, but also enables content from any source (PDF, video, etc.) to be analyzed. This makes MeetingMate completely agnostic to the format of the presented material.
§ AMBIENT ASSISTANT
The final piece of the MeetingMate system is the Ambient Assistant server, which receives the extracted text from the content being presented, finds relevant corporate knowledge content, and serves the results as a responsive webpage. The server is written in Python using the Flask framework, served through a Tornado server running on a Windows Server 2012 instance.
The goal of the Ambient Assistant webpage is to be as unobtrusive and non-disruptive as possible, while still providing useful information which will enhance the audience's understanding of the presentation.
The Ambient Assistant displays relevant knowledge content on the served page as a series of 'cards' (Figure 1). The individual cards are designed to present the most relevant information at a glance, without requiring input, or too much attention, from the user. As new cards become available they slowly fade in at the bottom of the screen (over a period of 5 seconds) while the page automatically scrolls to make the most recent cards visible. This webpage could be viewed on a number of devices - we have explored several, including projecting the Ambient Assistant onto a secondary screen beside the main projection screen - but we believe the most useful configuration is for individuals to view the Ambient Assistant on a personal device such as a phone, tablet, or laptop.
The following sections describe the types of cards which are available, when they are displayed, and what information they contain.
§ ACRONYMS AND DEFINITIONS
Corporate communications are often riddled with acronyms and jargon, making the text unnecessarily difficult to understand [30, 43]. As an example, in the 13,688 slide decks collected from the company servers, there are 132,588 instances of acronyms across the more than 300,000 slides. However, only 1.7% of those acronyms are defined within the slide deck where they are used.
The first two card types are internal technology definitions and acronym expansions. The presentation text is searched for any of the collected corporate definitions or acronyms (S7), and if they are found, a card is shown with the acronym expanded (if applicable) and the term defined (Figure 3).
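The lookup that triggers these cards can be sketched as a whole-word scan of the recognized text against the definitions database (S7); the sample dictionary entries here are illustrative:

```python
import re

# Illustrative sample of the employee-maintained definitions database (S7).
ACRONYMS = {
    "OKR": "Objectives and Key Results",
    "OCR": "Optical Character Recognition",
}

def find_acronym_cards(slide_text):
    """Return (acronym, expansion) pairs for acronyms found as whole words."""
    cards = []
    for acronym, expansion in ACRONYMS.items():
        if re.search(rf"\b{re.escape(acronym)}\b", slide_text):
            cards.append((acronym, expansion))
    return cards
```

Matching on word boundaries avoids firing on longer tokens that merely contain an acronym.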
Figure 3. Sample internal technology definition (left) and acronym expansion (right) cards.
§ EMPLOYEE INFORMATION
The employee information card is displayed whenever an employee's name or email address is found on a slide (Figure 4). The lists of employee names and email addresses are derived from the human resources data (S5). Early testing revealed a common occurrence where the name an employee goes by differs from their 'official' name in the HR database (e.g., Jon Smith vs. Jonathan Smith). To overcome this, a list of common name alternatives was used to generate a list of possible names for each person (e.g., Jonathan Smith could be either "Jon Smith" or "Jonathan Smith").
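A minimal sketch of this expansion, assuming a small illustrative nickname table rather than the full list used in the system:

```python
# Small illustrative nickname table; the deployed system uses a much
# longer list of common name alternatives.
NICKNAMES = {
    "jonathan": ["jon", "jonathan"],
    "william": ["will", "bill", "william"],
}

def name_variants(full_name):
    """Generate the set of plausible display names for one HR record."""
    first, _, rest = full_name.partition(" ")
    variants = {full_name.lower()}
    for alt in NICKNAMES.get(first.lower(), []):
        variants.add(f"{alt} {rest}".lower())
    return variants
```

Slide text can then be matched against any variant of any employee's name.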
< g r a p h i c s >
|
| 150 |
+
|
| 151 |
+
Figure 4. Sample employee information card.
Besides the employee's name, the card also displays the employee's headshot, job title, and work location. Additionally, lists of the projects (S4) and code repositories (S3) the employee is actively contributing to are included. Finally, the most closely related employees, found using the computed employee network graph (discussed below), are presented as a list of "Frequent Collaborators". Combined, these lists give an overview of both what and who this employee knows.
§ PROJECTS/CODE REPOSITORIES
The next two card types are project cards and repository cards (Figure 5), which are derived from the JFile (S4) and GitHub (S3) data sources. For each, the name and description of the project/repository is displayed, along with the names of members or contributors; in the case of repository cards, the list of programming languages used is also shown.
< g r a p h i c s >
|
| 160 |
+
|
| 161 |
+
Figure 5. Sample project (left) and repository (right) cards.
The project and repository cards are shown whenever the exact project or repository name is found on a slide. Additionally, these cards are displayed if the text of the slide contains many of the same keywords as the description of a project or repository. This is determined by transforming the descriptions of the projects and repositories into tf-idf vectors [11], computing the cosine similarity between the descriptions and the recognized text, and displaying the projects/repositories with a cosine similarity > 0.85.
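The matching step can be illustrated with a self-contained tf-idf and cosine-similarity implementation; a production system would use a tuned tokenizer and larger corpus statistics:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one sparse {term: weight} vector per document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Comparing the recognized slide text against every description and thresholding the similarity at 0.85 yields the cards to display.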
§ CONTEXTUALLY SIMILAR SLIDES
The final card type displays contextually relevant slides (Figure 6) from the collection of over 300,000 slides gathered from the internal slide deck repositories (S1). Besides an image of the contextually relevant slide itself, the card also presents the author of the slide deck, when it was created, and a list of employees who are mentioned somewhere in the deck. Clicking on the "more" icon at the top right of the card presents options to: generate an email containing a link to the contextually relevant presentation, open the presentation directly, or dismiss this card and have it no longer be suggested. The other card types each have similar menus.
< g r a p h i c s >
|
| 170 |
+
|
| 171 |
+
Figure 6. Sample contextually similar slide card (left), and the associated context menu (right).
Analogous to the process used to find contextually similar projects and repositories, the text of each slide in the slide deck repository is converted to a tf-idf vector and compared to the captured text. To increase exposure to a wider range of content, if there are any slides with a cosine similarity > 0.85, one slide is chosen at random and displayed.
§ PERSONAL CONNECTIONS, EMPLOYEE RELATIONSHIPS
The information on the cards above is tailored to the content being presented, but does not change based on who is viewing the Ambient Assistant. By taking into account who is using the Ambient Assistant ("Clarence Mcevoy" in Figure 7), an additional "Personal Connection" section can be inserted into the cards which highlights a connection the logged in user has to a person or project (Figure 7).
< g r a p h i c s >
|
| 180 |
+
|
| 181 |
+
Figure 7. Sample cards with additional personal connection information.
This personal connection field is populated using an employee network graph computed from the Meeting Information (S2), Code Repository (S3), JFile Project Pages (S4), HR Data (S5), and Email Lists (S6) data sets. For each data set, an adjacency matrix is computed based on the strength of the inter-personal connection for each pair of employees.
For example, using the Meeting Information data set, we look at each meeting both employees were a part of, and calculate the weight of the edge between two employees ($e_1$ and $e_2$) with the following formula:
$$
\text{weight}_{e_1,e_2} = \sum_{m \,\in\, \text{meetings}(e_1,e_2)} \frac{\text{length}_m}{\#\text{attendees}_m}
$$
where $\text{length}_m$ is the length of the meeting, and $\#\text{attendees}_m$ is the number of people in the meeting. This means longer meetings, and meetings with fewer attendees, are weighted more heavily. Once all edge weights are computed, they are globally ranked from strongest to weakest, and re-mapped to between 0 and 1. Similar calculations are carried out for the repository, project page, and email list data sets. For employee data, edges are created between employees and managers.
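The edge-weight computation and the rank-based remapping can be sketched as follows; modeling the meeting records as (length, attendee list) pairs is an assumption about the data layout:

```python
from collections import defaultdict

def meeting_edge_weights(meetings):
    """meetings: iterable of (length_in_minutes, attendee_list) pairs.
    Each shared meeting contributes length / #attendees to the pair's edge,
    so long meetings with few attendees dominate."""
    weights = defaultdict(float)
    for length, attendees in meetings:
        contribution = length / len(attendees)
        for i, a in enumerate(attendees):
            for b in attendees[i + 1:]:
                weights[frozenset((a, b))] += contribution
    return dict(weights)

def rank_normalize(weights):
    """Re-map edge weights onto [0, 1] by global rank (strongest edge -> 1)."""
    edges = sorted(weights, key=weights.get)
    if len(edges) == 1:
        return {edges[0]: 1.0}
    return {e: i / (len(edges) - 1) for i, e in enumerate(edges)}
```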
The edges from the individual data set adjacency matrices are then combined into one overall adjacency matrix by summing the weights of the individual edges, creating a composite "weight" metric incorporating all the different measures of adjacency. Then, to find the strongest "personal connection" between any pair of employees, we look for the heaviest-weight, shortest path. That is, we take the set of all shortest paths between the two employee nodes, and then select the path in which the edge weights are the highest.
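One way to realize the heaviest-weight, shortest path is to enumerate all shortest paths from a BFS predecessor graph and keep the one with the greatest total edge weight; this is a sketch of that idea, not the system's exact implementation:

```python
from collections import defaultdict, deque

def heaviest_shortest_path(adj, src, dst):
    """adj: {node: {neighbor: weight}}. Return one shortest src-dst path,
    chosen to maximize the total weight of its edges."""
    # BFS, recording every shortest-path predecessor of each node.
    dist, preds = {src: 0}, defaultdict(list)
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, {}):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
            if dist[nxt] == dist[node] + 1:
                preds[nxt].append(node)
    if dst not in dist:
        return None

    def all_paths(node):  # enumerate shortest paths back to src
        if node == src:
            yield [src]
        for p in preds[node]:
            for path in all_paths(p):
                yield path + [node]

    def total_weight(path):
        return sum(adj[a][b] for a, b in zip(path, path[1:]))

    return max(all_paths(dst), key=total_weight)
```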
The heaviest-weight, shortest path is then converted to a natural language sentence by looking at the strongest components of each edge. For example, "You are managed by Lester Carr, who frequently meets with William Topolinski", or "You are in some project groups (including UI Unification) with Alberta Santiago, who reports to Norma Stenn". The overall adjacency matrix represents a single connected component, so a path can be computed between any two employees. However, only paths with at most one intermediate node are displayed, to emphasize "strong" connections.
§ FEEDBACK AND DISCUSSION
To preliminarily test the design and usefulness of MeetingMate, the system was deployed and tested over a series of weekly group meetings. In several cases the system provided truly unknown information to the audience (in one case, while sharing a research paper, an employee card for one of the authors showed up - prior to that, attendees at the meeting did not know the author had been hired). This deployment also indicated some ways the system produced spurious or unnecessary results - for example, the internal dictionary contains a definition for "User" as "someone who uses our software", and this definition appeared whenever the (very common) word 'user' appeared on a slide. While it is easy for users to mark content to "never be shown again", it would be useful to reduce the number of low-utility results at the system level. In the future, more advanced language modelling could look at the surrounding context of the slides, or recent slides, to better predict which results might be most relevant. It would also be interesting to explore recognizing content other than text, such as headshots or particular images, and using those as contextual cues.
The next step is to deploy the system in a meeting room continuously for several months to collect more feedback about how the system performs and is received. This wider deployment will bring up some interesting issues. For meetings concerning highly sensitive topics, we plan to have a very clear physical switch for presenters to "disable" MeetingMate if they are uncomfortable with the content of their presentation being captured. It will be interesting to see how frequently that functionality is utilized.
While we are limiting ourselves to "publicly" available internal data, there are still cases where information which is technically visible to all employees probably shouldn't be. For example, a meeting name could be "Discuss firing John Doe". The creator of such a meeting is probably unaware that the meeting name is publicly viewable. Practically, the system will need a way to remove these sorts of sensitive entries, but hopefully the deployment of this system will make people more aware of what is visible to other employees. It would also be interesting to make use of non-public information in the system to allow for more personalized recommendations.
Overall, we believe MeetingMate serves as a valuable solution for improving meeting effectiveness and knowledge sharing within an organization, and will serve as an example for further work in this area.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/l8jScx6ROAh/Initial_manuscript_md/Initial_manuscript.md
# Towards Enabling Blind People to Fill Out Paper Forms with a Wearable Smartphone Assistant
Anonymous Authors
affiliation
## Abstract
We present PaperPal, a wearable smartphone assistant which blind people can use to fill out paper forms independently. Unique features of PaperPal include: a novel 3D-printed attachment that transforms a conventional smartphone into a wearable device with an adjustable camera angle; the capability to work both on flat stationary tables and on portable clipboards; real-time video tracking of pen and paper, coupled to an interface that generates real-time audio readouts of the form's text content and instructions guiding the user to the form fields; and support for filling out these fields without signature guides. The paper primarily focuses on an essential aspect of PaperPal, namely an accessible design of the wearable elements of PaperPal and the design, implementation and evaluation of a novel user interface for the filling of paper forms by blind people. PaperPal distinguishes itself from a recent smartphone-based assistant for blind people for filling paper forms, which requires the smartphone and the paper to be placed on a stationary desk, needs a signature guide for form filling, and has no audio readouts of the form's text content. PaperPal, whose design was informed by a separate Wizard-of-Oz study with blind participants, was evaluated with 8 blind users. Results indicate that they can fill out form fields at the correct locations with an accuracy reaching 96.7%.
Index Terms: Human-centered computing—Accessibility—Accessibility technologies; Human-centered computing—Human computer interaction (HCI)
## 1 INTRODUCTION
Paper documents continue to persist in our daily lives, notwithstanding the paperless digitally connected world we live in. People still encounter paper-based transactions that require reading, writing and signing paper documents. Examples include paper receipts, mail, checks, bank documents, hospital forms and legal agreements. A recent survey shows that over 33% of transactions in organizations are still done with paper documents [4]. Many of these paper documents, at the very least, require affixing signatures on them. While it is straightforward for sighted people to write and affix their signatures on paper, for people who are blind this is challenging, if not impossible, to do independently. When it comes to writing, blind people invariably rely on sighted people for assistance. Such assistance may not always be readily available, but more troublingly, having to depend on others for writing always comes with a loss of privacy. To make matters worse, unlike reading assistants for blind people, of which there are quite a few (e.g., [2, 8]), there are hardly any computer-assisted aids that can help them write on paper independently, a problem that has taken on added significance due to the recent pandemic-driven upsurge in mail-in balloting. In fact, a recent lawsuit was brought by blind plaintiffs on the discriminatory nature of mail-in paper ballots, since they could not be filled out without compromising confidentiality [5].
There are two essential aspects to a form-filling assistant for blind people: (1) document annotation which includes capturing the image of the document with a camera, and automatic identification of all its items, namely, text segments, form fields and their labels; and (2) the design and implementation of an interface to enable blind people to access and read all the items of the document and fill out the fields independently.

|
| 20 |
+
|
| 21 |
+
Figure 1: A blind user filling out a form using PaperPal. An interaction scenario: (A) The pen is pointing to a text item, which is read out, (B) Rotate the pen right to read the next item, (C) Bi-directional rotation to navigate to the form field, (D) Fill out the field.
In so far as (1) is concerned, the existence of several smartphone reading apps for blind people (e.g., SeeingAI [8], KNFB Reader [2], and Voice Dream Scanner [26]) has established the feasibility of acquiring images of paper documents by blind people using a smartphone. These apps demonstrate that blind users can independently use an audio interface to capture the image of a document. The aforementioned apps also extract text segments from the captured images using OCR and document segmentation methods, which are then read out to the user. In so far as forms are concerned, it is possible to extract form fields and their labels from document images using extant vision-based systems such as Adobe [10] and AWS Textract [9]. In contrast, the HCI aspects of an interface for a form-filling assistant for blind people pose a challenging and relatively understudied research problem, and are the primary focus of this paper.
Of late, research on HCI aspects of writing aids for blind people is beginning to emerge. A recent work describes a first-of-its-kind smartphone-based writing-aid, called WiYG, for assisting blind people to fill out paper forms by themselves [28]. WiYG uses a 3D printed attachment to keep the phone upright on a flat table and redirects the focus of the phone's camera to the document that is placed in front of the phone. The paper and phone in WiYG are kept stationary; The user receives audio instructions to slide the signature guide - a card similar in size to regular credit cards that has a rectangular opening in the middle to help blind people sign on papers - to different form fields on the paper. All the form fields are manually annotated apriori. In addition, visual markers are affixed to the signature guide for tracking its locations with the camera.
The WiYG work has opened up new design questions and challenges that could form the basis for next-generation computer-aided paper-form-filling writing assistants for blind people. We explore some of these questions here. First, WiYG provides no readouts of the text in form documents, which is arguably desirable, especially for documents that require signatures. Second, WiYG simply steps through the form fields one by one without backtracking; in practice, one would like to seamlessly switch back and forth between the fields and fill them in any order. Third, WiYG requires a flat table to keep both the paper and the phone stationary during use. The ability to operate in different situational contexts, such as documents on non-stationary portable surfaces like clipboards, makes for a more flexible computer-aided reading/writing wearable assistant. In fact, blind users often find themselves in situations where the documents they are asked to review and sign, such as forms at hospitals and doctors' offices, are on clipboards.
To explore these questions we employed a user-centered design approach. We started with a Wizard of Oz (WoZ) pilot study with eight blind participants to understand the feasibility of filling out paper forms on a clipboard. The study included paper forms placed on both flat desks and portable clipboards, with the wearable camera worn over the chest or attached to glasses to mimic smart glasses. The study was designed to elicit data on several key questions, including: (1) How do blind people write on paper attached to a portable clipboard? (2) Where can the camera be worn conveniently, such that the pen and paper remain within its field of view? (3) Considering all the camera-clipboard movements, how can blind people coordinate the clipboard and the wearable camera to keep the pen and paper inside the camera's field of view while writing? The study was also intended to elicit user feedback and gather design requirements. The findings from the WoZ study informed the design of PaperPal, a wearable smartphone assistant for non-visual interaction with paper forms in scenarios more general than a stationary desk, notably portable clipboards.
There are several unique aspects to the design of PaperPal. First, its novel 3D-printed attachment transforms a conventional smartphone into a wearable device, with a mechanism to adjust the camera angle with one hand. Second, PaperPal is flexible in where it can be used: on stationary tables as well as on non-stationary surfaces, specifically portable clipboards. Third, PaperPal enables users to write without having to use their signature guides - a key requirement that emerged from the WoZ study. Fourth, PaperPal leverages real-time video processing techniques to track the paper and pen and accordingly provides appropriate audio feedback. Lastly, reading and writing are tightly integrated in PaperPal, with users able to easily switch between them while accessing different items on the document. Our evaluation with 8 blind users showed that PaperPal could successfully assist blind people in filling out various paper forms, such as bank checks, restaurant receipts, lease agreements, and informed consent forms. They independently filled out these forms with an accuracy reaching 96.7%. We summarize our contributions as follows:
- The results of a Wizard of Oz study with blind participants to uncover requirements for independently interacting with paper forms in portable settings.
- The design of a novel 3D-printed attachment that can turn a smartphone into a wearable with an adjustable camera angle. This can also be used for other wearable vision-based applications that require adjustment of the camera angle.
- The design and implementation of PaperPal, a new smartphone application, that assists blind users in independently reading and filling out paper forms both on flat tables and on portable clipboards.
- The results of a user study with blind participants to assess the efficacy of PaperPal in filling out various paper forms.
Following WiYG [28], we also assume annotated paper forms. As mentioned earlier, there exist smartphone applications and known techniques for document image capture by blind people and for automatic annotations. While the annotation problem is orthogonal to the design and implementation of the user interface explored in this paper, in Section 5.11 we describe our experiences with automated annotation of paper forms and discuss its envisioned integration in PaperPal to realize a fully automated paper-form-filling assistant.
## 2 RELATED WORK
The research underlying PaperPal has broad connections to assistive technologies for reading and writing on paper documents (particularly for blind people), 3D-printed artifacts, and image acquisition and processing in accessibility. What follows is a review of existing research on these topics.
Reading and Writing: For well over a century, Braille has been the standard assistive tool for reading and writing for blind people. It is a tactile system made up of raised dots that encode characters. The use of braille has been declining in the computing era, which ushered in a major paradigm shift to digital assistive technologies [48]. Examples of digital technologies for reading printed documents include some CCTVs [63] and the Kurzweil scanner [3], which reads out the text in scanned documents.
The smartphone revolution has witnessed a surge in mobile reading aids. Notable examples include the KNFB Reader [2], SeeingAI [8], Voice Dream Scanner [26], Text Detective [14], and TapTapSee [60]. Smartphone-based solutions (e.g., [47]) as well as other hand-held solutions (e.g., SYPOLE [30]) require the user to position the camera so that the document is in its field of view. In recent years, wearable reading aids have been emerging (e.g., FingerReader [56], HandSight [58], and OrCam [7]). Although finger-centric wearables such as [56, 58] do not require positioning of the camera, their drawback is interference with writing. Reading paper documents using crowdsourced services is another option for blind people (e.g., Be My Eyes [1] and Aira [11]). These have the obvious drawback of lacking privacy.
In contrast to reading aids, research on assistive writing on physical paper is at a nascent stage. A Wizard of Oz study exploring the kinds of audio-haptic signals that would be useful for navigation on a paper form was reported in [17]. In that study, the form was placed on a flat table and the wizard generated the audio-haptic signals, which were received on a smartwatch worn by the participant.
A recent paper describes WiYG, a smartphone-based assistant for blind people to fill out paper forms [28]. In WiYG the user places the phone on a stationary table in an upright position using a 3D-printed attachment, with the paper form placed on the desk in front of the smartphone. Guided by audio instructions from the smartphone app, the user slides the signature guide over the paper form to each form field. Writing into a form field requires both hands: one to keep the signature guide in place over the field and the other to write into it with the pen. As mentioned in Section 1, WiYG provides no readouts of the text, simply steps through the form fields, and can only be used on a flat table where both the paper and the phone are kept stationary. The PaperPal system described in this paper integrates both reading of the document's text and writing in the form fields, and it can operate on both stationary tables and portable clipboards.
3D Printing in Assistive Technologies: The increasing availability of 3D printers has increased the potential for rapidly 3D printing assistive technology artifacts [20, 38]. [31] shows that it is feasible for blind users to 3D print models by themselves, and [23] lists organizations that use 3D printing tools to serve people with disabilities. Other examples of 3D printing applications are custom 3D-printed assistive artifacts [22, 35], 3D-printed markers attached to appliances [33], and applications in accessibility of educational content [19, 21, 24, 37], graphical design [46], and learning programming languages [39]. 3D printing is also used to convey visual content [59], art [25], and map information [55] to blind people. Interactive 3D-printed objects are yet another way 3D printing is utilized for accessibility [51-53]. Other examples include generating tactile children's books [40, 57] to promote literacy in children. In addition, [41] studies how children with disabilities can use 3D printing, and [36] mentions that children with disabilities can also utilize 3D printing in the context of DIY projects. 3D printing is also used to augment already existing technologies (e.g., making smartphones wearable [44]). In this paper we utilize 3D printing to design a phone case and a pocketable attachment that turn a smartphone into a wearable whose camera angle can be adjusted.
Image Acquisition and Processing in Accessibility: Accessible image acquisition tools such as [15, 43, 62] instruct blind users to position the camera at the correct angle and distance from the target for capturing an image. The work in [32] illustrates the practical deployment of such tools in an assistive technology for image acquisition by blind people. In terms of capturing images of paper documents, assistive reading apps, namely SeeingAI [8], KNFB Reader [2], and Voice Dream Scanner [26], demonstrate that blind people can independently use the apps' interfaces to direct the smartphone camera at a paper document and capture its image.
The post-processing of the document image is a well-established research topic, ranging from local OCR processing [12] to other computer vision methods such as document segmentation [13, 29] and form labeling techniques [9, 10, 49, 61].
Another topic related to camera-based assistive technologies is the use of visual markers for tracking objects in the environment. For example, in [27] different types of visual tracking methods are studied to make shopping easier for blind people, and the work in [45] studies color-coded markers for use in a wayfinding application for blind people. Visual markers are especially beneficial when computer vision methods do not provide satisfactory accuracy; examples of assistive technologies that utilize visual markers are [28, 54, 55]. In PaperPal we also use visual markers, to track the tip of the pen and the paper. To track the latter, PaperPal uses visual markers similar to the ones used for tracking the signature guide in [28]. To track the pen, PaperPal uses visual markers attached to a 3D-printed pen topper, inspired by previous work on pen tracking that also uses visual markers [21, 64-66].
## 3 A Wizard of Oz Pilot Study
To the best of our knowledge, there is no previous research on how blind people write on paper documents attached to non-stationary surfaces such as portable clipboards. To this end, we conducted a pilot study to assess the feasibility of an assistive tool that uses a wearable device for filling out paper forms attached to clipboards. The study explored these specific questions: (1) How do blind people write on non-stationary surfaces like clipboards with a wearable? (2) How do blind people coordinate their hand and body movements to keep the pen and paper within the camera's field of view? (3) Which of the two proposed on-body locations for the wearable camera, head or chest, is more suitable? In addition, the study was intended to gather requirements for the wearable camera attachment. Details of the study follow.
### 3.1 Participants
Eight (8) participants (3 males, 5 females) whose ages ranged from 35 to 77 (average age 50) were recruited for the pilot study. All participants were completely blind; all knew how to write on paper; none had any motor impairments that would have affected their full participation in the study.
### 3.2 Apparatus
The study used a standard ballpoint pen, a portable clipboard, and a credit-card-sized signature guide. Each form, printed on standard letter-sized paper, had 5 randomly placed equal-sized fields, with the same distance between consecutive fields. The wizard used a Nexus phone to send instructions to an iPhone 8+. Participants wore the iPhone 8+ at two on-body locations using two holders. The first holder had the iPhone attached to ski goggles; participants wore this on the head, akin to smart glasses - Wearable${}_{\text{head}}$ (Figure 2B). With the second holder, participants wore a lanyard around their necks with the phone resting on their chests. This holder had a reflective mirror to redirect the camera's field of view and served as the wearable on the chest - Wearable${}_{\text{chest}}$ (Figure 2A, C).

Figure 2: The apparatus for the pilot study. A: The phone case with a reflective mirror in front of the camera, to be worn on the chest using the lanyard in C. B: Ski goggles for wearing the phone as glasses. D: Paper with Aruco markers on which users mark 'x' in the numbered form fields.
### 3.3 Study Design
In this within-subjects study every participant filled out a total of 4 random forms corresponding to four different conditions, namely <Wearable${}_{\text{head}}$, form on desk>, <Wearable${}_{\text{head}}$, form on clipboard>, <Wearable${}_{\text{chest}}$, form on desk>, and <Wearable${}_{\text{chest}}$, form on clipboard>.
A total of 32 forms were filled out (8 participants × 4 conditions). The order of the four form-filling tasks was randomized to minimize learning effects. The wizard app was used by the experimenter (i.e., the wizard) to manually direct the participant to the form fields by sending directional audio instructions such as "move left", "move right", and so on. The participant's phone ran an app that tracked the paper based on the markers printed on it (see Figure 2D). The participant's phone was also instrumented to gather study data throughout each study session, which was also video recorded.
Accuracy was measured as the percentage of overlap between the annotated rectangular region of a given form field (annotated a priori) and the rectangle enclosing the participant's written text in that field (annotated after the study) [28].
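As a concrete illustration, the overlap metric can be computed as below. This is a minimal sketch: the `(x, y, w, h)` rectangle representation and the choice of the written-text rectangle as the denominator are our assumptions, not details stated in the study.

```python
def overlap_percentage(field, written):
    """Percentage of the written-text rectangle falling inside the
    annotated form-field rectangle. Rectangles are (x, y, w, h)."""
    fx, fy, fw, fh = field
    wx, wy, ww, wh = written
    # Intersection rectangle of the two axis-aligned rectangles.
    ix = max(fx, wx)
    iy = max(fy, wy)
    ix2 = min(fx + fw, wx + ww)
    iy2 = min(fy + fh, wy + wh)
    inter = max(0, ix2 - ix) * max(0, iy2 - iy)
    written_area = ww * wh
    return 100.0 * inter / written_area if written_area else 0.0
```

For example, a written rectangle shifted halfway out of the field scores 50%, and one entirely outside scores 0%.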
### 3.4 Procedure
Each form-filling task began with the wizard directing the participant to the first form field. We regarded this as the initialization phase and discounted it from our measurements, so as to exclude any confounding variables that might arise from starting at a random position. Upon reaching the form field, the participant would initial the field with an 'x' using the signature guide. If at any time during this process the paper disappeared from the camera's field of view, the iPhone app would raise a 'paper not visible' audio alert. In response, the participant would make adjustments by shifting the paper or the wearable to bring it back into focus, which was acknowledged by a 'paper is visible' shout-out from the app. The participant received the navigational instructions only when the paper was visible to the camera. The experimenter monitored the participant's navigational progress and sent audio directions in real time to guide the user's signature guide to each form field. At the conclusion of the session, participants compared and contrasted writing on the desk vs. the clipboard, the wearable's location on the chest vs. the head, and other experiences, in an open-ended discussion.
### 3.5 Key Takeaways
Chest vs. Head Location for the Wearable: 6 out of 8 participants preferred the chest location, one preferred the head location, and one had no preference. With the camera on the head, the paper went out of view far more often - by a factor of 4 - than when the camera was worn on the chest.
The differences in percentage of overlap among the 4 conditions were statistically significant (repeated-measures ANOVA, $F_{3,124} = 5.32$, $p = 0.002$). Pairwise comparisons with a post-hoc Tukey test showed that under the "head, clipboard" condition the field overlaps were significantly smaller than when the user wore the phone on the chest and wrote on either the desk ($p = 0.0102$) or the clipboard ($p = 0.0171$). This suggests that when wearing the phone on the chest, users can better write in the correct location for paper placed on either the desk or the clipboard.
These are excerpts from select participants regarding (1) Wearable${}_{\text{head}}$: "Looking downward is not comfortable and this task requires a lot of looking down." and (2) Wearable${}_{\text{chest}}$: "Around the neck is more comfortable and you can focus on the direction of the paper and hand." Overall, the chest location was better suited for the wearable, and it was adopted in the design of PaperPal.
Writing Surface: Unsurprisingly, all participants deemed writing on the desk easier than on the clipboard. However, pairwise comparisons did not show any statistically significant difference in accuracy or form fill-out time when only the paper placement variable changed from desk to clipboard. During the discussion, all participants mentioned real-life situations where they had to use clipboards.
Navigational Differences: On desks, we observed that users moved the signature guide with one hand while using the other hand to feel the edge of the paper, as a means of sensing the orientation of the signature guide relative to the orientation of the paper. Such behaviour was also reported in [28]. When holding the clipboard, by contrast, participants could not feel the paper's edge in the same way as they did on flat desks. In fact, we observed that participants had difficulty moving the signature guide on a trajectory aligned with the paper's orientation. Despite this, the wizard was able to adapt the instructions to lead the participant to the target fields.
Reflective Mirror: There was considerable variability in how participants held the clipboards, which means there is no single angle for attaching the mirror to the holder that can cover all these variations. A wide-angle camera would increase the likelihood of the paper staying in the field of view, but it would require a large reflective mirror, which is not practical. Thus, for a wearable on the chest, a smartphone holder without a reflective mirror, whose orientation can be adjusted to position the camera, is desirable.
Signature Guide: When using the clipboard, participants had to hold the clipboard throughout the interaction. Writing with the signature guide, on the other hand, requires both hands. This became a difficult juggling act for the participants, and the difficulty is reflected in the feedback of all the participants (e.g., P5 mentioned "specially you have to pickup your pen while using signature guide"). Hence using signature guides with clipboards is not an option.
## 4 THE PAPERPAL WEARABLE ASSISTANT

### 4.1 Design of 3D Printed Phone Holder
Informed by our pilot study, we designed a 3D-printed holder to convert an off-the-shelf smartphone (iPhone 8+) into a hands-free wearable on the chest. In addition, the holder design had to meet these requirements: (a) support one-handed tilting of the phone to different angles, so that differences in how the clipboard is held by different users can be accommodated - users hold the clipboard with one hand and adjust the angle of the phone with the other to capture the paper in the camera's field of view; (b) use a minimal number of component pieces that are compact enough to fit in a pocket; and (c) ease of assembly/disassembly.

Figure 3: The iPhone holder. The L-shaped half fork can be attached to the phone case by rotating the screw inside the threaded bearing. When tightened, the L-shaped half fork can rotate.

Figure 4: PaperPal's interaction automaton.
These requirements led to the design of the adjustable iPhone holder shown in Figure 3, which went through several rounds of experimentation. The design comprises two pieces. The first is a phone case with a small threaded bearing on the side. The second is an L-shaped half fork which facilitates tilting of the phone; it attaches to the phone case by rotating its screw into the threaded bearing and can be rotated 360 degrees. A lanyard is attached to the lifting lug so that the holder, with its phone, can be worn around the neck, and the user can change the length of the lanyard ribbon. The L-shaped fork can be rotated by the user to adjust the tilt angle of the wearable phone. The tilt angle can also be adjusted to support upright placement of the phone on a desk; thus the holder can operate both as a wearable and as a stationary holder for writing on a flat desk.
### 4.2 PaperPal: An Operational Overview
The PaperPal system runs as an iPhone app. The user interacts with the items on the paper, namely text segments, form fields, and their labels, by moving the pen over the paper like a pointer and making gestures with the pen. Two types of gestures are used: (1) a unidirectional rotate of the pen, left or right, around its longitudinal axis, and (2) a bidirectional rotate made up of two consecutive rotations in opposite directions.
PaperPal's response to the user's pen movements and pen gestures is governed by a two-state interaction automaton shown in Figure 4.
The application starts in the "select item and read" state, which is inspired by the smartphone screen reader interface. In this state the user can move the pen like a pointer to simultaneously select an item and hear an audio readout of the item associated with the location pointed to by the pen. This interaction is analogous to "touch exploration" on the smartphone screen reader.
The unidirectional rotate left (right) selects the previous (next) item on the document and its content is read aloud. This interaction is analogous to "swiping" on the smartphone screen reader.
The bidirectional rotate switches between the two states of the application namely "select item and read" and "navigate to item and write".
The "navigate to item and write" state handles two situations: (1) if the selected item is a text segment, its text content is read aloud; (2) if the selected item is a form field, its label is read aloud and navigational instructions are generated to direct the user's pen to the location of the field. No readouts of intermediate items take place while the user is being navigated to a form field. Upon reaching the field, PaperPal reads out the field's label once more to refresh the user's memory and directs the user to write in the field, alerting the user when the pen strays out of the field and giving instructions on how to move the pen back into the field and continue writing. In this state the user can do a bidirectional rotate at any time to move to the "select item and read" state, or continue in the current state and move on to other form fields or items via unidirectional rotate gestures.
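The two-state automaton and the two gesture types described above can be sketched as a small event-driven class. This is an illustrative reconstruction: the class, method, and state names are our own, and the item model is simplified to an ordered list of labels.

```python
class PaperPalAutomaton:
    """Sketch of the two-state interaction automaton (Figure 4).
    Names and the item model are illustrative, not PaperPal's actual API."""

    def __init__(self, items):
        self.items = items              # ordered document items (labels)
        self.index = 0                  # currently selected item
        self.state = "select_and_read"  # or "navigate_and_write"

    def on_rotate(self, direction):
        # Unidirectional rotate: step to the previous/next item and read it,
        # analogous to swiping on a smartphone screen reader.
        step = 1 if direction == "right" else -1
        self.index = (self.index + step) % len(self.items)
        return f"read: {self.items[self.index]}"

    def on_bidirectional_rotate(self):
        # Bidirectional rotate toggles between the two states.
        self.state = ("navigate_and_write"
                      if self.state == "select_and_read"
                      else "select_and_read")
        return self.state
```

A right rotate from the first item reads the second one; two bidirectional rotates return the automaton to its starting state.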
### 4.3 PaperPal Implementation
PaperPal uses the phone's camera to observe the user's actions. The application is implemented as an iOS app and uses the OpenCV library [6] for real-time video processing. Specifically, PaperPal (a) tracks the physical location of the pen tip over the paper, and (b) detects the pen gestures, namely unidirectional and bidirectional rotates. The pen location and gestures determine the audio responses - text readouts, navigation and writing instructions - that are generated in real time. Figure 5 shows a high-level workflow of the process.
#### 4.3.1 Visual Markers
To enable accurate tracking of the pen and paper, Aruco markers [50] of known size are used.
The paper tracker is a credit-card-sized rectangular card (85.60 mm × 53.98 mm) wrapped with an Aruco board of 24 markers. It has a narrow diagonal groove that serves as a tangible guide for attaching the paper to the card: the user slides the paper's upper-left corner into this groove.
The pen tracker is a cube-shaped pen topper with Aruco markers affixed to each face of the cube. It can easily be attached to any regular ballpoint pen and is resilient to hand occlusions.
#### 4.3.2 Locating Pen Tip on Paper
For each image frame containing the pen and paper, two transformations $H$ and $P$ are estimated: (1) $H$ is a homography that maps each image pixel to its corresponding location in the paper's coordinate system; it is estimated from the paper tracker via the DLT algorithm [34]. (2) $P$ is a projective transformation [34] between any 3D location in the pen tracker's coordinate system and its corresponding 2D image pixel; $P$ is composed of the intrinsic camera calibration, measured once for the camera, and the extrinsic camera calibration, estimated from the pen markers with the EPnP method [42].
For a pen that is touching the paper, PaperPal starts with the physical location of the pen tip, which is at a constant offset in the pen tracker's coordinate system, and applies $P$ followed by $H$ to estimate the pen tip's location in the paper's coordinate system.
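The two-step mapping (project with $P$, then warp with $H$) can be sketched in plain Python. This is an illustrative reconstruction from the description above; the matrix layouts (row-major nested lists, $P$ as a 3×4 matrix, $H$ as 3×3) and function names are our own, not PaperPal's code.

```python
def apply_homography(H, point):
    """Map a 2D point through a 3x3 homography H (row-major nested lists)."""
    x, y = point
    denom = H[2][0]*x + H[2][1]*y + H[2][2]
    return ((H[0][0]*x + H[0][1]*y + H[0][2]) / denom,
            (H[1][0]*x + H[1][1]*y + H[1][2]) / denom)

def project(P, point3d):
    """Project a 3D point in the pen tracker's frame to an image pixel
    through a 3x4 projective matrix P (intrinsics times extrinsics)."""
    x, y, z = point3d
    row = lambda r: P[r][0]*x + P[r][1]*y + P[r][2]*z + P[r][3]
    w = row(2)
    return (row(0) / w, row(1) / w)

def pen_tip_on_paper(P, H, tip_offset):
    """Pen tip (constant offset in the pen-tracker frame) -> image pixel
    -> paper coordinates, i.e. P followed by H."""
    return apply_homography(H, project(P, tip_offset))
```

With an identity homography and a trivial projection matrix this reduces to reading off the tip's x/y, which makes the composition easy to sanity-check.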
We only estimate the pen location when the pen tip is close to the paper. To this end, we developed a heuristic based on the observation that in PaperPal, as the pen moves away from the paper it gets closer to the camera: in the image, the observed size of the pen markers should not be more than twice the size of the paper markers. The threshold was chosen by experimenting with various candidates and selecting the one with the lowest average re-projection error [34]. Furthermore, we remove outlier pen tip locations whose distance from the previously observed pen tip is more than 18 mm on the paper; this threshold was selected based on the fastest pen movements that could be captured using the pen's visual markers.
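The two filters above can be sketched as a single acceptance test per frame. The thresholds (2× marker-size ratio, 18 mm jump) come from the paper; the function shape and parameter names are our own illustrative assumptions.

```python
def accept_pen_observation(pen_marker_px, paper_marker_px,
                           tip_mm, prev_tip_mm,
                           max_size_ratio=2.0, max_jump_mm=18.0):
    """Sketch of PaperPal's per-frame filtering heuristics:
    - reject when the pen markers appear more than twice as large as the
      paper markers (pen lifted toward the camera, i.e. off the paper);
    - reject outlier tips that jump more than 18 mm between frames."""
    if pen_marker_px > max_size_ratio * paper_marker_px:
        return False
    if prev_tip_mm is not None:
        dx = tip_mm[0] - prev_tip_mm[0]
        dy = tip_mm[1] - prev_tip_mm[1]
        if (dx * dx + dy * dy) ** 0.5 > max_jump_mm:
            return False
    return True
```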

Figure 6: A: rectilinear path along the paper's coordinate system, B: non-rectilinear path along the paper's coordinate system, and C: the rectilinear path on the rotated paper's coordinate system.
#### 4.3.3 Detecting Pen Gestures
Previous work on pen rolling gestures showed that, when writing on paper, unintended pen rotations occur at high speeds and over small angles [16]. We used this insight and require intended rotation gestures to be of longer duration, thereby distinguishing them from unintended ones.
To this end, through experimentation, the duration for intended gestures, denoted $T_{\text{gesture}}$, was set to 600 ms. Rotation gestures are performed with the pen tip on or close to the paper surface. To detect the rotation gestures we choose a 3D point $R$ in the pen coordinate system that is close to the pen tip. We find $R$'s corresponding 2D location $r$ in the paper coordinate system using the same process that was used for estimating the position of the pen tip in Section 4.3 above. To detect a rotational movement along the longitudinal axis of the pen, the angle of $r$ relative to the tip of the pen is measured for each frame using simple trigonometry, and the direction of the rotation (left/right) between each pair of consecutive frames is recorded. If, within a time window of $T_{\text{gesture}}$, the majority (over 90%) of pen rotations are in the same direction (left/right), a rotation gesture in the corresponding direction is detected. A bidirectional rotate involves two quick rotations in opposite directions; it is detected if, within a sliding time window of $T_{\text{gesture}}$, the majority (over 90%) of rotations in one half of the window are in one direction and in the other half in the opposite direction.
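The majority-vote classification over a $T_{\text{gesture}}$ window can be sketched as below. A window is represented as a list of per-frame rotation directions (+1 = right, -1 = left); this encoding, and the function itself, are our illustrative assumptions rather than PaperPal's implementation.

```python
def detect_gesture(directions, majority=0.9):
    """Classify one T_gesture window of per-frame rotation directions
    (+1 = right, -1 = left). Returns 'left', 'right', 'bidirectional',
    or None, following the majority-vote rule described in the text."""
    n = len(directions)
    if n == 0:
        return None

    def dominant(seq):
        # +1 / -1 if over `majority` of frames rotate one way, else 0.
        if not seq:
            return 0
        rights = sum(1 for d in seq if d > 0)
        lefts = sum(1 for d in seq if d < 0)
        if rights >= majority * len(seq):
            return 1
        if lefts >= majority * len(seq):
            return -1
        return 0

    whole = dominant(directions)
    if whole == 1:
        return "right"
    if whole == -1:
        return "left"
    # Bidirectional: each half dominated by one direction, halves opposite.
    half = n // 2
    d1, d2 = dominant(directions[:half]), dominant(directions[half:])
    if d1 and d2 and d1 != d2:
        return "bidirectional"
    return None
```

Jittery windows with no dominant direction (e.g., alternating frames) classify as no gesture, which is the behavior needed to suppress unintended rotations.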
#### 4.3.4 Generating Audio Responses
PaperPal responds to the user's pen movements by generating four kinds of audio responses: coordination alerts, audio readouts, writing and navigation instructions.
Coordination alerts: Alerts when the paper and/or the pen move out of the camera's field of view. To avoid needless alerts for momentary movements out of the field of view, we alert only when the pen or paper has been out of view for longer than 2 seconds.

Readouts: The textual content of an item selected by the user is read out in audio.
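The 2-second grace period on coordination alerts described above amounts to a debounce; a minimal sketch follows (class and method names are our own, and PaperPal's actual timing code is not published here).

```python
class CoordinationAlert:
    """Debounced out-of-view alert: fire only after the pen/paper has
    been invisible for longer than a grace period (2 s in PaperPal)."""

    def __init__(self, grace_s=2.0):
        self.grace_s = grace_s
        self.invisible_since = None  # timestamp when visibility was lost

    def update(self, visible, now_s):
        """Call once per frame; returns True when an alert should fire."""
        if visible:
            self.invisible_since = None
            return False
        if self.invisible_since is None:
            self.invisible_since = now_s
        return now_s - self.invisible_since > self.grace_s
```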
Table 1: Participant demographics and habits regarding braille, writing, and smartphone applications (SG stands for signature guide).
<table><tr><td>ID</td><td>Pilot study</td><td>Age (Sex)</td><td>Diagnosis (Light perception)</td><td>Braille usage (Level)</td><td>Braille scenarios</td><td>Writing usage (Level)</td><td>Writing scenarios</td><td>SG Own (Carry)</td><td>Smartphone (experience)</td><td>Smartphone apps for papers</td></tr><tr><td>P1</td><td>yes</td><td>34 (F)</td><td>retrograde optic atrophy (yes)</td><td>daily (advanced)</td><td>papers at work, at the library</td><td>daily (beginner)</td><td>doctor's office, legal forms, banks</td><td>yes (always)</td><td>iPhone 10S (advanced)</td><td>SeeingAI, KNFB Reader, Voice Dream Reader, Voice Dream Writer</td></tr><tr><td>P2</td><td>yes</td><td>63 (F)</td><td>acute congenital glaucoma (no)</td><td>daily (advanced)</td><td>taking notes for myself</td><td>rarely (beginner)</td><td>leaving notes for sighted peers</td><td>yes (always)</td><td>iPhone 8 (advanced)</td><td>Be My Eyes</td></tr><tr><td>P3</td><td>no</td><td>32 (F)</td><td>medical malpractice (no)</td><td>daily (beginner)</td><td>elevators, remote control</td><td>weekly (advanced)</td><td>doctor's office, checks</td><td>no</td><td>iPhone 10 (advanced)</td><td>None</td></tr><tr><td>P4</td><td>no</td><td>55 (M)</td><td>retinal detachment (no)</td><td>monthly (advanced)</td><td>elevators, mails</td><td>weekly (beginner)</td><td>timesheet signature, checks, legal documents</td><td>no</td><td>iPhone (advanced)</td><td>KNFB Reader</td></tr><tr><td>P5</td><td>yes</td><td>54 (M)</td><td>glaucoma (no)</td><td>-</td><td>-</td><td>weekly (beginner)</td><td>shopping receipts, credit card bills</td><td>no</td><td>iPhone (beginner)</td><td>SeeingAI, TapTapSee</td></tr><tr><td>P6</td><td>yes</td><td>46 (F)</td><td>retinitis pigmentosa (yes)</td><td>never (beginner)</td><td>elevators, mails</td><td>monthly (beginner)</td><td>legal documents, doctor's office</td><td>no</td><td>iPhone 8 (advanced)</td><td>SeeingAI</td></tr><tr><td>P7</td><td>no</td><td>38 (M)</td><td>optic atrophy, retinitis pigmentosa (yes)</td><td>monthly (advanced)</td><td>reading documents</td><td>daily (advanced)</td><td>documents at work, taking notes for sighted peers</td><td>yes (often)</td><td>iPhone 11 Pro (advanced)</td><td>KNFB Reader, SeeingAI, Aira, Voice Dream Scanner</td></tr><tr><td>P8</td><td>no</td><td>61 (M)</td><td>retinitis pigmentosa (yes)</td><td>daily (advanced)</td><td>elevator, calendar</td><td>daily (advanced)</td><td>timesheet signature, checks, legal documents</td><td>yes (always)</td><td>flip phone (beginner)</td><td>None</td></tr></table>
Writing instructions: While writing is in progress, instructions to keep the pen within the field are given whenever the estimated pen tip falls outside the rectangular boundary of the field. For example, if the user's pen position has strayed above (below) the field, the user is guided back to the field with a "Move down (up)" instruction.
Navigation instructions: Navigational instructions consist of four basic directives, namely up, down, left, and right. With these four directives the user can be guided to any field on the paper. In [28], a simple navigation algorithm was used to guide the user along a rectilinear path that corresponded to the Manhattan distance between the pen and the field; see Figure 6A. Recall that our pilot study revealed that the user's navigational movements in response to the audio instructions can deviate from the intended axes when the paper is placed on a clipboard. Figure 6B demonstrates how the user's pen tip trajectory can deviate from the expected path along the paper's coordinate system with simple rectilinear navigational instructions.
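A four-directive navigator of this kind can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, tolerance, and tie-breaking (resolve the larger axis gap first) are our assumptions.

```python
# Hedged sketch (not the authors' implementation): choose the next spoken
# directive by walking a Manhattan path from the pen tip to the target field,
# resolving the larger axis gap first. All names and values are illustrative.

def next_directive(pen, field, tolerance=5.0):
    """pen, field: (x, y) in paper coordinates (mm); y grows downward."""
    dx = field[0] - pen[0]
    dy = field[1] - pen[1]
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "stop"                      # pen is over the field
    if abs(dx) >= abs(dy):                 # close the larger gap first
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

Repeatedly issuing `next_directive` as the pen moves traces the rectilinear path of Figure 6A.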
To address this problem, we estimate the deviation angle of the pen tip's trajectory w.r.t. the paper's coordinate system. To compensate for this deviation, we rotate the paper's coordinate system by the same angle but in the opposite direction, so that the pen tip's trajectory is aligned with the transformed axes; see Figure 6C. The navigation directives are generated w.r.t. the transformed axes. Observe that the pen tip trajectory now follows a rectilinear path w.r.t. the transformed axes. To estimate the deviation angle, we use the pen tip's estimated location $t$ on the paper and find the angle between $t$ and the intended axis (the horizontal axis when the navigation instruction is left or right, and the vertical axis when it is up or down). To avoid noise and jitter in the transformed axes, the deviation angle is averaged over a sliding window of one second.
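The axis-rotation compensation can be sketched as below. This is our own minimal illustration of the idea, not the authors' code; the class name, the 30-sample window (≈1 s at an assumed 30 Hz), and the method signatures are assumptions.

```python
import math
from collections import deque

# Hedged sketch of the deviation compensation described above. The deviation
# angle of the pen trajectory w.r.t. the intended axis is smoothed over a
# sliding window, and vectors are re-expressed in a frame rotated by the
# opposite angle so that directives stay rectilinear.

class AxisCompensator:
    def __init__(self, window=30):          # e.g. ~1 second at 30 Hz
        self.angles = deque(maxlen=window)

    def observe(self, start, tip, horizontal):
        """Record deviation of the segment start->tip from the intended axis."""
        dx, dy = tip[0] - start[0], tip[1] - start[1]
        theta = math.atan2(dy, dx)
        if not horizontal:                  # measure against the vertical axis
            theta -= math.pi / 2
        self.angles.append(theta)

    def to_compensated(self, vec):
        """Express a paper-frame vector in the rotated (compensated) frame."""
        a = sum(self.angles) / len(self.angles) if self.angles else 0.0
        c, s = math.cos(-a), math.sin(-a)   # rotate by the opposite angle
        return (c * vec[0] - s * vec[1], s * vec[0] + c * vec[1])
```

For example, a stroke drifting at 45° maps back onto the pure horizontal axis of the compensated frame.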
## 5 EVALUATION
We conducted an IRB-approved user study of PaperPal to evaluate its effectiveness as a form-filling assistant for blind people. To this end, the study was designed to answer the following questions: (a) How accurately can users fill out forms in terms of writing on the correct location? (b) How long does it take to fill out forms? (c) What is the overall user experience of using PaperPal to fill out paper forms of different sizes, layouts, and texts?
### 5.1 Participants
Ten fully blind participants were recruited. However, two participants could not attend, so the study was conducted with the remaining eight participants, whose ages ranged from 32 to 63 (average = 47.88, std = 12.16; 4 females and 4 males). Note that 4 out of the 8 participants were also part of the WoZ pilot study discussed in Section 3. Table 1 shows the demographic data of the participants. The participants were compensated \$50 per hour. All the participants were right-handed, and none had any motor impairments that impeded their full participation in the study. All the participants (except P5) were familiar with braille, and all of them affirmed that they knew how to write on paper. All participants stated that in real life they always asked a sighted peer to fill out forms for them, except for affixing their signatures; for that, they were led by the sighted peer to the signature field, where they would sign by themselves.
### 5.2 Apparatus
The PaperPal application ran on an iPhone 8+. The 3D-printed holder, lanyard, paper tracker, pen tracker, a regular ballpoint pen, and a clipboard were provided to the user; see figure 1. Finally, each participant was given 4 paper forms to fill out.
Forms: The forms were selected to have different properties and to reflect realistic scenarios. Specifically, the forms were:
- (F1): A regular-size check that consists of six fields, namely pay to the order of, date, $, dollars, memo, and signature.
- (F2): A restaurant receipt that consists of three fields namely tip, total, and signature.
- (F3): A template for a lease agreement that consists of the following eight fields: landlord's first name, landlord's last name, tenant's first name, tenant's last name, landlord's signature, date, tenant's signature, and date.
- (F4): An informed consent form that requires the participant to fill out four fields which are full name, date of birth, participant's signature, and date.
The two forms (F1 and F2) are quite similar to the ones used in the evaluation of WiYG [28]. Two additional forms were selected to evaluate more complex forms in terms of the number of fields (F3) and text items (F4); see Figure 7. These forms have different paper sizes, orientations, and layouts. Specifically, the F1 and F2 forms are smaller than the standard letter-size pages used in F3 and F4. The fields in F2 are vertically aligned and placed below one another, the F3 fields are placed horizontally in a table-like layout, and F1 and F4 have more complex layouts.

Figure 7: The four forms used in the user study. Note that the scale of the images does not represent their relative size (refer to the dimensions in the figure). The participants' handwriting is annotated in blue (brown) where judged correct (incorrect) by human evaluators.
### 5.3 Design
The study was designed as a repeated-measures within-subject study. Each participant was required to fill out each of the 4 forms (4 tasks) with PaperPal in a counterbalanced order using a Latin square [18]. The task completion time was the elapsed duration between the moment the pen was detected over the paper for the first time and the moment the user finished writing on the last form field. Accuracy was measured as the percentage of overlap between the ground-truth annotated rectangular region of a given form field (obtained a priori) and the rectangle enclosing the participant's written text for that same field; see figure 7.
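The accuracy metric can be made concrete with a short sketch. This is our own illustration of the stated definition, not the study's evaluation code; the rectangle convention `(x0, y0, x1, y1)` and the function name are assumptions.

```python
# Hedged sketch of the overlap metric described above: the percentage of the
# rectangle enclosing the written text that falls inside the annotated
# ground-truth field rectangle. Rectangles are (x0, y0, x1, y1).

def overlap_percentage(written, field):
    ix0 = max(written[0], field[0])
    iy0 = max(written[1], field[1])
    ix1 = min(written[2], field[2])
    iy1 = min(written[3], field[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)   # intersection area
    area = (written[2] - written[0]) * (written[3] - written[1])
    return 100.0 * inter / area if area > 0 else 0.0
```

Note the denominator is the area of the written text's box, not the field's, so text written entirely inside a large field scores 100%.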
### 5.4 Procedure
To start with, we draw attention to the circumstances surrounding the study. It was conducted after the gradual re-opening of businesses that had been shut down due to the COVID-19 pandemic. Consequently, the study procedure was adapted to follow CDC-recommended safety measures. Specifically, both the participant and the experimenter wore face masks and kept the recommended social distance from each other. Therefore, the experimenter relied on verbal communication instead of physical demonstrations to conduct this study.
Each session began with a semi-structured interview to gather demographic data, reading/writing habits, and prior experiences with assistive smartphone apps (≈20 minutes); see table 1.
Following this step, the participant was instructed on how to set up the PaperPal apparatus. The participant was asked to pick up each piece of the apparatus while the experimenter provided a verbal description of it. The participant then assembled the pieces with step-by-step instructions from the experimenter until they were able to attach the paper tracker to the paper, the pen tracker to the pen, and the L-shaped fork to the phone case, clip the paper to the clipboard, and wear the phone around the neck with the lanyard. After that, the experimenter described the user interface. The participant was asked to practice reading and writing with PaperPal on a set of test forms that were different from those used in the study. During the practice, the experimenter observed the progress of the participant and intervened with instructions and explanations as needed. The entire process of assembling and practicing with the application took about an hour.
The participant was next asked to fill out the four forms F1, F2, F3, and F4, each followed by a Single Ease Question. A maximum of 10 minutes per form was allocated. An open-ended discussion with the experimenter took place upon completion of the tasks. The entire study session per participant lasted 2.5 hours, with the experimenter taking notes throughout the video-recorded session.
### 5.5 Results: Task Completion Time
The task completion time is indicative of the efficiency of PaperPal as a form-filling assistant for blind people. On average, the total time spent to fill out forms F1 to F4 was 169.38, 73.91, 229.82, and 164.84 seconds, respectively. The task completion time is divided into: (a) navigation time, the time taken to navigate to the target field; (b) writing time, the time taken by the participant to fill in the field; and (c) coordination time, the time taken by the participant to bring the paper back into the camera's field of view. Figure 8 shows both the navigation time and writing time for each field.
For the coordination time, a repeated-measures ANOVA showed a statistically significant difference in the time spent in coordination between the 4 forms, with F = 7.00 and p = 0.002. Pairwise analysis with a post-hoc Tukey test showed that F2's coordination time is significantly less than that of F3 and F4 (with p < 0.01 for both pairs). This can be explained as follows: working with larger papers such as F3 and F4 increases the likelihood of the paper or the pen straying out of the camera's field of view, which adds to the coordination time compared to a small-sized form like F2, where straying out is less likely.
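The shape of such an analysis can be sketched as follows. Note the study used a repeated-measures ANOVA with a post-hoc Tukey test; as a simplified stand-in, this sketch runs SciPy's independent-groups one-way ANOVA on entirely made-up per-participant coordination times, not study data.

```python
from scipy.stats import f_oneway

# Illustrative only: hypothetical coordination times (seconds) for four
# participants per form. These numbers are invented for the example.
f1 = [40, 42, 44, 41]
f2 = [20, 22, 21, 19]
f3 = [60, 65, 70, 62]
f4 = [58, 66, 61, 63]

# One-way ANOVA across the four forms; a small p suggests the mean
# coordination time differs between at least two forms.
F, p = f_oneway(f1, f2, f3, f4)
print(f"F = {F:.2f}, p = {p:.4f}")
```

A repeated-measures version (e.g. `statsmodels` `AnovaRM`) would additionally account for each participant contributing one value per form.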
### 5.6 Results: Assembly Time
The holder assembly time includes the time spent attaching the L-shaped half fork to the phone case and wearing the phone around the neck. All participants were able to assemble the holder, with an average time of 16.55 seconds (std = 4.33 seconds). In addition, the average time to attach the paper tracker card to the top-left corner of an A4 page was 18.75 seconds (std = 6.45 seconds). One difficulty that arose was that most participants were wearing protective gloves, which made it difficult for them to attach the paper to the paper tracker card.
### 5.7 Results: Form Filling Accuracy
Overlap Percentage: Recall that the overlap percentage is defined as the percentage of the rectangular bounding box enclosing the participant's writing that falls inside the ground-truth rectangular region of the field; see figure 7. The average percentage of field overlap for forms F1 to F4 was 61.84%, 66.02%, 87.69%, and 64.23%, respectively.
Out of all the form fields in this study (8 participants × 21 fields = 168), participants attempted to fill out 156. Of these 156 fields, 117 (75%) had an overlap region of 50% or higher. Figure 8 (Field Overlap) shows the overlap percentages.

Figure 8: Left: navigation time and writing time per field. Middle: accuracy in terms of the field overlap percentage. Right: human assessment of the filled-out fields.
Human assessment: We asked three human evaluators to assess whether each of the form fields was correctly filled out by the participants. The final verdict was rendered via majority voting. The inter-annotator agreement was high (Fleiss' kappa = 0.77). Figure 8 (Human Assessment) shows the percentage of fields that were deemed correct (81.55%), incorrect (10.71%), or missed (7.74%).
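For reference, Fleiss' kappa can be computed from a table with one row per form field and one column per verdict category, where each cell counts the annotators who chose that verdict. This is our own implementation of the standard formula, not the authors' analysis code.

```python
# Hedged sketch: Fleiss' kappa for inter-annotator agreement.
# counts[i][j] = number of raters assigning item i to category j
# (e.g. categories correct / incorrect / missed); every row sums to
# the same number of raters.

def fleiss_kappa(counts):
    N = len(counts)            # number of items (form fields)
    n = sum(counts[0])         # raters per item
    k = len(counts[0])         # number of categories
    # mean observed per-item agreement
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N
    # chance agreement from category marginals
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement yields kappa = 1, and values near 0.77, as reported above, are conventionally read as substantial agreement.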
The human assessment of form-filling accuracy for forms F1 to F4 was 88.09%, 69.56%, 96.77%, and 85.71%, respectively. Note that the human assessment shows higher accuracy compared to the average overlap. As in [28], this is because even when the written content was not perfectly inside the form field, the human annotators still counted it as acceptable.
### 5.8 Results: PaperPal vs. WiYG
The standard check (F1) and receipt (F2) forms were similar to the forms used in WiYG's user study, which also used a check and a receipt of similar size and layout with an identical set of fields. Although differences in the setup and study participants preclude a rigorous comparison of the performance of PaperPal and WiYG, we can still get a sense of the differences via the informal comparison shown in table 2. This comparison suggests that, in spite of the complexities of writing on a non-stationary clipboard with a wearable, users could fill out forms in a shorter time with PaperPal without compromising accuracy.
### 5.9 Results: Subjective Feedback
We administered a Single Ease Question to each participant to rate the difficulty of assembling the holder and completing each form, on a scale of 1 to 7, with 1 being very difficult and 7 being very easy. The median rating for holder assembly was 7, which suggests that the assembly process was viewed as easy. The median ratings for the forms were F1: 6, F2: 5, F3: 4, F4: 3.5. In the open-ended discussion, participants mentioned that filling out forms that had long text content (such as F4) or more fields was more difficult.
All participants liked the fact that PaperPal lets them work with paper documents independently; quoting P5: "I like the ability to fill out my own forms and checks". All of them mentioned that PaperPal fills an unmet need for preserving privacy when filling out forms and affixing signatures. They all appreciated PaperPal's integrated reading and writing feature, as they got to hear what was in the form they were filling out prior to affixing their signatures.
### 5.10 Discussion
In the user study, participants missed filling out 12 of the 168 fields. We observed that most of the missed cases were associated with F3's two date fields, which were next to each other and had the same label, causing ambiguity. The last two fields in F4 were also missed by some participants because the long text gave them the false impression that there were no more fields left in the form. One way to address this in future work is to notify the user a priori of the total number of form fields.
Table 2: Comparison of PaperPal to WiYG (accuracy: overlap%)
<table><tr><td rowspan="2"/><td colspan="2">Standard Check (F1)</td><td colspan="2">Receipt (F2)</td></tr><tr><td>average time (s)</td><td>average accuracy (%)</td><td>average time (s)</td><td>average accuracy (%)</td></tr><tr><td>WiYG [28]</td><td>249.75</td><td>64.85%</td><td>91.25</td><td>63.90%</td></tr><tr><td>PaperPal</td><td>159.38</td><td>61.84%</td><td>73.91</td><td>66.02%</td></tr></table>
All participants mentioned that doing several consecutive rotate gestures required re-adjustments to their grip on the pen. Gestures with subtle finger movements such as finger flicks and taps on the pen are possible alternatives that can address this problem. This will require the use of computer vision recognition algorithms to detect subtle finger movements and is a topic for future research.
A unique aspect of PaperPal is that users can simply work with the paper and pen without having to hold any other object, such as the phone in reading apps [2, 8] or the signature guide used while filling out paper forms with WiYG. Finally, assembly of the 3D-printed attachment, attaching the trackers, and wearing the phone were all done independently by the study participants, affirming that the design of the apparatus associated with PaperPal was highly accessible for blind users.
The results of the study showed that blind participants were able to fill out forms in a few minutes (ranging from 1 min and 23 sec to 3 min and 83 sec) with high accuracy, measured in terms of the average overlap percentage, which was more than 60% for all the forms. In addition, the accuracy of the filled-out form fields as judged by humans was as high as 96.77%.
### 5.11 Future Work
Use of Markers: PaperPal's accuracy depends on accurate tracking of the pen and paper, which is why the tracking is done with visual markers attached to these objects. Eliminating these markers is a challenging open problem in computer vision research.
Document Annotation: While the focus of this paper has been the accessible HCI interface for filling out paper forms with a wearable, we envision a front-end to PaperPal consisting of an app like KNFB reader or voice dream scanner to acquire the image of the form document, which would subsequently be dispatched to the augmented AWS Textract service for automatic annotation of the form elements. We conducted preliminary experiments with this service. To this end, we took pictures of the 4 forms used in the study (Section 5) with the voice dream scanner app. These images were rectified using the "image to paper" transformation (section 4.3), and the rectified images were processed by AWS Textract augmented with a human-in-the-loop workflow. Out of the 21 fields in the 4 forms, 17 fields and their labels were detected correctly by Textract, and 2 were erroneously recognized and were marked as such through intervention via the human-in-the-loop workflow. Integration of this process into PaperPal and its end-to-end evaluation is a topic of future work.
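An "image to paper" rectification of this kind can be sketched with a plane homography estimated from four detected page (or tracker) corners. This is our own NumPy illustration under assumed corner coordinates, not the paper's section 4.3 implementation.

```python
import numpy as np

# Hedged sketch of an "image to paper" transformation: estimate a homography
# from four image corners of the page to known paper coordinates (direct
# linear transform with h33 fixed to 1), then map image points to the paper
# frame. Corner coordinates below are invented for illustration.

def homography(src, dst):
    """Estimate H such that dst ~ H @ src from exactly 4 point pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def to_paper(H, pt):
    """Map an image point into paper coordinates (perspective division)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# page corners as seen in the image -> letter-size paper in mm (assumed)
img_corners   = [(102, 80), (500, 95), (520, 640), (90, 610)]
paper_corners = [(0, 0), (216, 0), (216, 279), (0, 279)]
H = homography(img_corners, paper_corners)
```

Production systems would typically estimate H more robustly from many marker points (e.g. via OpenCV), but the four-corner DLT conveys the idea.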
## 6 CONCLUSION
PaperPal is a wearable reading and form-filling assistant for blind people. Wearability is achieved by transforming a smartphone, specifically an iPhone 8+, into a wearable worn around the chest with a 3D-printed phone holder that can adjust the phone's viewing angle. PaperPal operates on both stationary flat tables and non-stationary portable clipboards. A preliminary study with blind participants demonstrated the feasibility and promise of PaperPal: blind users could fill out form fields at the correct locations with an accuracy of up to 96.77%. PaperPal has the potential to enhance their independence at home, at work, at school, and on the go.
## REFERENCES
[1] Be my eyes: Bringing sight to blind and low vision people, 2018.

[2] KNFB reader, 2018.

[3] Kurzweil 1000 for Windows, 2018.

[4] Data exchange in the era of digital transformation, 2019.

[5] Blind voters are suing North Carolina and Texas, arguing that mail ballots are discriminatory, 2020.

[6] OpenCV: Open source computer vision library, 2020.

[7] OrCam, 2020.

[8] Seeing AI, 2020.

[9] AWS Textract form extraction documentation, 2021.

[10] Adobe. Work with automatic field detection, 2021.

[11] Aira. Aira, 2018.

[12] Apple. Recognizing text in images, 2020.

[13] Apple. Vision: apply computer vision algorithms to perform a variety of tasks on input images and video, 2020.

[14] AppleVis. Text Detective, 2018.

[15] J. Balata, Z. Mikovec, and L. Neoproud. BlindCamera: Central and golden-ratio composition for blind photographers. In Proceedings of the Multimedia, Interaction, Design and Innovation, pp. 1-8, 2015.

[16] X. Bi, T. Moscovich, G. Ramos, R. Balakrishnan, and K. Hinckley. An exploration of pen rolling for pen-based interaction. In Proceedings of the 21st annual ACM symposium on User interface software and technology, pp. 191-200, 2008.

[17] S. M. Billah, V. Ashok, and I. Ramakrishnan. Write-it-yourself with the aid of smartwatches: A wizard-of-oz experiment with blind people. In 23rd International Conference on Intelligent User Interfaces, IUI '18, pp. 427-431. ACM, New York, NY, USA, 2018. doi: 10.1145/3172944.3173005

[18] J. V. Bradley. Complete counterbalancing of immediate sequential effects in a latin square design. Journal of the American Statistical Association, 53(282):525-528, 1958.

[19] C. Brown and A. Hurst. VizTouch: Automatically generated tactile visualizations of coordinate spaces. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, TEI '12, pp. 131-138. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2148131.2148160

[20] E. Buehler, S. Branham, A. Ali, J. J. Chang, M. K. Hofmann, A. Hurst, and S. K. Kane. Sharing is caring: Assistive technology designs on Thingiverse. CHI '15, pp. 525-534. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702525

[21] E. Buehler, N. Comrie, M. Hofmann, S. McDonald, and A. Hurst. Investigating the implications of 3D printing in special education. ACM Trans. Access. Comput., 8(3), Mar. 2016. doi: 10.1145/2870640

[22] E. Buehler, A. Hurst, and M. Hofmann. Coming to grips: 3D printing for accessibility. In Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility, pp. 291-292, 2014.

[23] E. Buehler, S. K. Kane, and A. Hurst. ABC and 3D: Opportunities and obstacles to 3D printing in special education environments. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS '14, pp. 107-114. ACM, New York, NY, USA, 2014. doi: 10.1145/2661334.2661365

[24] E. Buehler, S. K. Kane, and A. Hurst. ABC and 3D: Opportunities and obstacles to 3D printing in special education environments. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS '14, pp. 107-114. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2661334.2661365

[25] L. Cavazos Quero, J. Iranzo Bartolomé, S. Lee, E. Han, S. Kim, and J. Cho. An interactive multimodal guide to improve art accessibility for blind people. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 346-348, 2018.

[26] V. Dream. Voice Dream Scanner, 2020.

[27] M. Elgendy, C. Sik-Lanyi, and A. Kelemen. Making shopping easy for people with visual impairment using mobile assistive technologies. Applied Sciences, 9(6):1061, 2019.

[28] S. Feiz, S. M. Billah, V. Ashok, R. Shilkrot, and I. Ramakrishnan. Towards enabling blind people to independently write on printed forms. In the ACM Conference on Human Factors in Computing Systems (CHI '19), 2019.

[29] P. Forczmański, A. Smoliński, A. Nowosielski, and K. Małecki. Segmentation of scanned documents using deep-learning approach. In International Conference on Computer Recognition Systems, pp. 141-152. Springer, 2019.

[30] V. Gaudissart, S. Ferreira, C. Thillou, and B. Gosselin. SYPOLE: mobile reading assistant for blind people. In 9th Conference Speech and Computer, 2004.

[31] T. Götzelmann, L. Branz, C. Heidenreich, and M. Otto. A personal computer-based approach for 3D printing accessible to blind people. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments, pp. 1-4, 2017.

[32] A. Guo, X. Chen, H. Qi, S. White, S. Ghosh, C. Asakawa, and J. P. Bigham. VizLens: A robust and interactive screen reader for interfaces in the real world. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 651-664, 2016.

[33] A. Guo, J. Kim, X. A. Chen, T. Yeh, S. E. Hudson, J. Mankoff, and J. P. Bigham. Facade: Auto-generating tactile interfaces to appliances. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 5826-5838. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025845

[34] R. Hartley and A. Zisserman. Multiple view geometry in computer vision. Cambridge University Press, 2003.

[35] M. Hofmann, J. Harris, S. E. Hudson, and J. Mankoff. Helping hands: Requirements for a prototyping methodology for upper-limb prosthetics users. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pp. 1769-1780. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858340

[36] J. Hook, S. Verbaan, A. Durrant, P. Olivier, and P. Wright. A study of the challenges related to DIY assistive technology in the context of children with disabilities. In Proceedings of the 2014 Conference on Designing Interactive Systems, DIS '14, pp. 597-606. ACM, New York, NY, USA, 2014. doi: 10.1145/2598510.2598530

[37] M. Hu. Exploring new paradigms for accessible 3D printed graphs. In Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility, pp. 365-366, 2015.

[38] A. Hurst and J. Tobias. Empowering individuals with do-it-yourself assistive technology. In The Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '11, pp. 11-18. ACM, New York, NY, USA, 2011. doi: 10.1145/2049536.2049541

[39] S. K. Kane and J. P. Bigham. Tracking @stemxcomet: teaching programming to blind students via 3D printing, crisis management, and Twitter. In Proceedings of the 45th ACM technical symposium on Computer science education, pp. 247-252, 2014.

[40] J. Kim and T. Yeh. Toward 3D-printed movable tactile pictures for children with visual impairments. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 2815-2824. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702144

[41] B. Leduc-Mills, J. Dec, and J. Schimmel. Evaluating accessibility in fabrication tools for children. In Proceedings of the 12th International Conference on Interaction Design and Children, pp. 617-620, 2013.

[42] V. Lepetit, F. Moreno-Noguer, and P. Fua. EPnP: Efficient perspective-n-point camera pose estimation. International Journal of Computer Vision, 81(2):155-166, 2009.

[43] J. Lim, Y. Yoo, H. Cho, and S. Choi. TouchPhoto: Enabling independent picture taking and understanding for visually-impaired users. In 2019 International Conference on Multimodal Interaction, pp. 124-134. ACM, 2019.

[44] Z. Lv. Wearable smartphone: Wearable hybrid framework for hand and foot gesture interaction on smartphone. In Proceedings of the IEEE international conference on computer vision workshops, pp. 436-443, 2013.

[45] R. Manduchi. Mobile vision as assistive technology for the blind: An experimental study. In International Conference on Computers for Handicapped Persons, pp. 9-16. Springer, 2012.

[46] S. McDonald, J. Dutterer, A. Abdolrahmani, S. K. Kane, and A. Hurst. Tactile aids for visually impaired graphical design education. In Proceedings of the 16th International ACM SIGACCESS Conference on Computers & Accessibility, ASSETS '14, pp. 275-276. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2661334.2661392

[47] R. Neto and N. Fonseca. Camera reading for blind people. Procedia Technology, 16:1200-1209, 2014.

[48] NFB. Statistical facts about blindness in the United States, 2016.

[49] H. Nguyen, T. Nguyen, and J. Freire. Learning to extract form labels. Proceedings of the VLDB Endowment, 1(1):684-694, 2008.

[50] F. J. Romero-Ramirez, R. Muñoz-Salinas, and R. Medina-Carnicer. Speeded up detection of squared fiducial markers. Image and Vision Computing, 2018.

[51] L. Shi, H. Lawson, Z. Zhang, and S. Azenkot. Designing interactive 3D printed models with teachers of the visually impaired. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2019.

[52] L. Shi, R. McLachlan, Y. Zhao, and S. Azenkot. Magic Touch: Interacting with 3D printed graphics. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 329-330, 2016.

[53] L. Shi, Y. Zhao, and S. Azenkot. Markit and Talkit: A low-barrier toolkit to augment 3D printed models with audio annotations. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, pp. 493-506, 2017.

[54] L. Shi, Y. Zhao, and S. Azenkot. Markit and Talkit: A low-barrier toolkit to augment 3D printed models with audio annotations. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST '17, pp. 493-506. ACM, New York, NY, USA, 2017. doi: 10.1145/3126594.3126650

[55] L. Shi, Y. Zhao, R. Gonzalez Penuela, E. Kupferstein, and S. Azenkot. Molder: An accessible design tool for tactile maps. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2020.

[56] R. Shilkrot, J. Huber, W. Meng Ee, P. Maes, and S. C. Nanayakkara. FingerReader: A wearable device to explore printed text on the go. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 2363-2372. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702421

[57] A. Stangl, J. Kim, and T. Yeh. 3D printed tactile picture books for children with visual impairments: A design probe. In Proceedings of the 2014 Conference on Interaction Design and Children, IDC '14, pp. 321-324. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2593968.2610482

[58] L. Stearns, R. Du, U. Oh, C. Jou, L. Findlater, D. A. Ross, and J. E. Froehlich. Evaluating haptic and auditory directional guidance to assist blind people in reading printed text using finger-mounted cameras. ACM Trans. Access. Comput., 9(1):1:1-1:38, Oct. 2016. doi: 10.1145/2914793

[59] S. Swaminathan, T. Roumen, R. Kovacs, D. Stangl, S. Mueller, and P. Baudisch. Linespace: A sensemaking platform for the blind. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, pp. 2175-2185. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858245

[60] TapTapSee. TapTapSee, 2020.

[61] U. Uckun, A. S. Aydin, V. Ashok, and I. Ramakrishnan. Breaking the accessibility barrier in non-visual interaction with PDF forms. Proceedings of the ACM on Human-Computer Interaction, 4(EICS):1-16, 2020.

[62] M. Vázquez and A. Steinfeld. An assisted photography framework to help visually impaired users properly aim a camera. ACM Transactions on Computer-Human Interaction (TOCHI), 21(5):25, 2014.

[63] E. Vision. DaVinci HD/OCR all-in-one desktop magnifier, 2020.

[64] P. Wacker, O. Nowak, S. Voelker, and J. Borchers. ARPen: Mid-air object manipulation techniques for a bimanual AR system with pen & smartphone. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019.

[65] P. Wacker, O. Nowak, S. Voelker, and J. Borchers. Evaluating menu techniques for handheld AR with a smartphone & mid-air pen. In 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-10, 2020.

[66] P. Wacker, A. Wagner, S. Voelker, and J. Borchers. Heatmaps, shadows, bubbles, rays: Comparing mid-air pen position visualizations in handheld AR. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-11, 2020.
|
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/l8jScx6ROAh/Initial_manuscript_tex/Initial_manuscript.tex

§ TOWARDS ENABLING BLIND PEOPLE TO FILL OUT PAPER FORMS WITH A WEARABLE SMARTPHONE ASSISTANT

Anonymous Authors

affiliation

§ ABSTRACT

We present PaperPal, a wearable smartphone assistant that blind people can use to fill out paper forms independently. Unique features of PaperPal include: a novel 3D-printed attachment that transforms a conventional smartphone into a wearable device with an adjustable camera angle; the capability to work both on flat stationary tables and on portable clipboards; real-time video tracking of pen and paper, coupled to an interface that generates real-time audio readouts of the form's text content and instructions that guide the user to the form fields; and support for filling out these fields without signature guides. The paper primarily focuses on an essential aspect of PaperPal, namely an accessible design of its wearable elements and the design, implementation and evaluation of a novel user interface for the filling of paper forms by blind people. PaperPal distinguishes itself from a recent smartphone-based assistant for blind people filling out paper forms that requires the smartphone and the paper to be placed on a stationary desk, needs a signature guide for form filling, and has no audio readouts of the form's text content. PaperPal, whose design was informed by a separate Wizard of Oz study with blind participants, was evaluated with 8 blind users. Results indicate that they can fill out form fields at the correct locations with an accuracy reaching 96.7%.

Index Terms: Human-centered computing - Accessibility - Accessibility technologies; Human-centered computing - Human computer interaction (HCI)

§ 1 INTRODUCTION

Paper documents continue to persist in our daily lives, notwithstanding the paperless digitally connected world we live in. People still encounter paper-based transactions that require reading, writing and signing paper documents. Examples include paper receipts, mail, checks, bank documents, hospital forms and legal agreements. A recent survey shows that over 33% of transactions in organizations are still done with paper documents [4]. Many of these paper documents, at the very least, require affixing signatures on them. While it is straightforward for sighted people to write and affix their signatures on paper, for people who are blind this is challenging, if not impossible, to do independently. When it comes to writing, blind people invariably rely on sighted people for assistance. Such assistance may not always be readily available, but more troublingly, having to depend on others for writing always comes with a loss of privacy. To make matters worse, unlike reading assistants for blind people, of which there are quite a few (e.g., [2, 8]), there are hardly any computer-assisted aids that can help them write on paper independently, a problem that has taken on added significance due to the recent pandemic-driven upsurge in mail-in balloting. In fact, a recent lawsuit was brought by blind plaintiffs over the discriminatory nature of mail-in paper ballots, since they could not be filled out without compromising confidentiality [5].

There are two essential aspects to a form-filling assistant for blind people: (1) document annotation, which includes capturing the image of the document with a camera and automatically identifying all its items, namely text segments, form fields and their labels; and (2) the design and implementation of an interface that enables blind people to access and read all the items of the document and fill out the fields independently.

Figure 1: A blind user filling out a form using PaperPal. An interaction scenario: (A) The pen points to a text item, which is read out. (B) Rotate the pen right to read the next item. (C) Bidirectional rotate to navigate to the form field. (D) Fill out the field.

In so far as (1) is concerned, the existence of several smartphone reading apps (e.g., SeeingAI [8], KNFB Reader [2], and Voice Dream Scanner [26]) has established the feasibility of blind people acquiring images of paper documents using a smartphone. These apps demonstrate that blind users can independently use an audio interface to capture the image of a document. The aforementioned apps also extract text segments from the captured images using OCR and document segmentation methods, which are then read out to the user. In so far as forms are concerned, it is possible to extract form fields and their labels from document images using extant vision-based systems such as Adobe [10] and AWS Textract [9]. In contrast, the HCI aspects of an interface for a form-filling assistant for blind people are a challenging and relatively understudied research problem, and are the primary focus of this paper.

Of late, research on the HCI aspects of writing aids for blind people is beginning to emerge. A recent work describes a first-of-its-kind smartphone-based writing aid, called WiYG, for assisting blind people to fill out paper forms by themselves [28]. WiYG uses a 3D printed attachment to keep the phone upright on a flat table and redirects the focus of the phone's camera to the document placed in front of the phone. The paper and phone in WiYG are kept stationary; the user receives audio instructions to slide the signature guide - a card similar in size to a regular credit card, with a rectangular opening in the middle to help blind people sign on paper - to different form fields on the paper. All the form fields are manually annotated a priori. In addition, visual markers are affixed to the signature guide so that its location can be tracked with the camera.

The WiYG work has opened up new design questions and challenges that could form the basis for next-generation computer-aided paper-form-filling writing assistants for blind people. We explore some of these questions here. Firstly, WiYG provides no readouts of the text in the form documents, which are arguably desirable, especially for documents that require signatures. Secondly, WiYG simply steps through each form field in the document one by one without backtracking. In practice one would like to seamlessly switch back and forth between the fields and fill them in any order. Thirdly, WiYG requires a flat table to keep the paper as well as the phone stationary during use. The ability to operate in different situational contexts, such as documents on non-stationary portable surfaces like clipboards, makes for a more flexible computer-aided reading/writing wearable assistant. In fact, oftentimes blind users find themselves in situations where the documents they are asked to review and sign, such as forms at hospitals and doctor's offices, are on clipboards.

To explore these questions we employed a user-centered design approach. We started with a Wizard of Oz (WoZ) pilot study with eight blind participants to understand the feasibility of filling paper forms on a clipboard. The study included paper forms placed on both flat desks and portable clipboards, with the wearable camera worn over the chest or attached to glasses to mimic smart glasses. The study was designed to elicit data on several key questions, including: (1) How do blind people write on paper attached to a portable clipboard? (2) Where can the camera be worn conveniently and in a way that the pen and paper are visible within the camera's field of view? (3) Considering all the camera-clipboard movements, how can blind people coordinate the clipboard and the wearable camera to maintain the pen and paper inside the camera's field of view while writing? The study was also intended to elicit user feedback and gather design requirements. The findings from the WoZ study informed the design of PaperPal, a wearable smartphone assistant for non-visual interaction with paper forms in more general scenarios than only a stationary desk, such as portable clipboards.

There are several unique aspects to the design of PaperPal. First, its novel 3D-printed attachment transforms a conventional smartphone into a wearable device with a mechanism to adjust the camera angle with one hand. Second, PaperPal is flexible in where it can be used: stationary tables as well as non-stationary surfaces, specifically portable clipboards. Third, PaperPal enables users to write without having to use their signature guides - a key requirement that emerged from the WoZ study. Fourth, PaperPal leverages real-time video processing techniques to track the paper and pen and accordingly provides appropriate audio feedback. Lastly, both reading and writing are tightly integrated in PaperPal, with users being able to easily switch between them while accessing different items on the document. Our evaluation with 8 blind users showed that PaperPal could successfully assist people who are blind to fill in various paper forms, such as bank checks, restaurant receipts, lease agreements, and informed consent forms. They independently filled out these forms with an accuracy reaching 96.7%. We summarize our contributions as follows:

* The results of a Wizard of Oz study with blind participants to uncover requirements for independently interacting with paper forms in portable settings.

* The design of a novel 3D attachment that can turn a smartphone into a wearable with an adjustable camera angle. This can also be used for other wearable vision-based applications that require adjustment of the camera angle.

* The design and implementation of PaperPal, a new smartphone application, to assist blind users to independently read and fill out paper forms both on flat tables and on portable clipboards.

* The results of a user study with blind participants to assess the efficacy of PaperPal in filling out various paper forms.

Following WiYG [28], we also assume annotated paper forms. As mentioned earlier, there exist smartphone applications and known techniques for document image capture by blind people and for automatic annotation. While the annotation problem is orthogonal to the design and implementation of the user interface explored in this paper, in Section 5.11 we describe our experiences with automated annotation of paper forms and discuss its envisioned integration in PaperPal to realize a fully automated paper-form-filling assistant.

§ 2 RELATED WORK

The research underlying PaperPal has broad connections to assistive technologies for reading and writing on paper documents, particularly for blind people, 3D printed artifacts, and image acquisition and processing in accessibility. What follows is a review of existing research on these topics.

Reading and Writing: For well over a century, Braille has been the standard assistive tool for reading and writing for blind people. It is a tactile system made up of raised dots that encode characters. The use of Braille has been declining in the computing era, which ushered in a major paradigm shift to digital assistive technologies [48]. Examples of digital technologies for reading printed documents include some CCTVs [63] and the Kurzweil Scanner [3], which reads off the text in scanned documents.

The smartphone revolution has witnessed a surge in mobile reading aids. Notable examples include the KNFB Reader [2], SeeingAI [8], Voice Dream Scanner [26], Text Detective [14], and TapTapSee [60]. The smartphone-based solutions (e.g., [47]), as well as other hand-held solutions (e.g., SYPOLE [30]), require the user to position the camera to get the document into its field of view. In recent years, wearable reading aids are emerging (e.g., FingerReader [56], HandSight [58], and OrCam [7]). Although finger-centric wearables such as [56, 58] do not require positioning of the camera, their drawback is interference with writing. Reading paper documents using crowdsourced services is another option for blind people (e.g., Be My Eyes [1] and Aira [11]). These have the obvious drawback of lacking privacy.

In contrast to reading aids, research on assistive writing on physical paper is at a nascent stage. A Wizard of Oz study exploring the kinds of audio-haptic signals that would be useful for navigation on a paper form was reported in [17]. In this study, the form was placed on a flat table and the wizard generated the audio-haptic signals, which were received on a smartwatch worn by the participant.

A recent paper describes WiYG, a smartphone-based assistant for blind people to fill out paper forms [28]. In WiYG the user places the phone on a stationary table in an upright position using a 3D-printed attachment. The paper form is placed on the desk in front of the smartphone. The user slides the signature guide over the paper form to each form field, guided by audio instructions provided by the smartphone app. To write into the form field the user uses both hands, one to keep the signature guide in place over the form field and the other to write into it with the pen. As mentioned earlier in Section 1, WiYG provides no readouts of the text, simply steps through each form field, and can only be used with a flat table where both the paper and the phone are kept stationary. The PaperPal system described in this paper integrates both reading of the document's text and writing in the form fields. It has the capability to operate on both stationary tables and portable clipboards.

3D Printing in Assistive Technologies: The increasing availability of 3D printers has increased the potential for rapid 3D printing of assistive technology artifacts [20, 38]. [31] shows that it is feasible for blind users to 3D print models by themselves, and [23] lists organizations that use 3D printing tools to serve people with disabilities. Other examples of 3D printing applications are custom 3D printed assistive artifacts [22, 35], 3D printed markers attached to appliances [33], and applications in accessibility of educational content [19, 21, 24, 37], graphical design [46], and learning programming languages [39]. 3D printing is also used to convey visual content [59], art [25], and map information [55] to blind people. Interactive 3D printed objects are yet another way 3D printing is utilized for accessibility [51-53]. Other examples include generating tactile children's books [40, 57] to promote literacy in children. In addition, [41] studies how children with disabilities can use 3D printing, and [36] mentions that children with disabilities can also utilize 3D printing in the context of DIY projects. 3D printing is also used to augment already existing technologies (e.g., making smartphones wearable [44]). In this paper we utilize 3D printing to design a phone case and a pocketable attachment that turn a smartphone into a wearable whose camera angle can be adjusted.

Image Acquisition and Processing in Accessibility: Accessible image acquisition tools such as [15, 43, 62] instruct blind users to position the camera at the correct angle and distance from the target for capturing an image. The work in [32] illustrates the practical deployment of such tools in an assistive technology for image acquisition by blind people. In terms of capturing images of paper documents, assistive reading apps, namely SeeingAI [8], KNFB Reader [2], and Voice Dream Scanner [26], demonstrate that blind people can independently use the apps' interface to direct the smartphone camera at the paper document and capture its image.

The post-processing of the document image is a well-established research topic and can range from local OCR processing [12] to other computer vision methods such as document segmentation [13, 29] and form labeling techniques [9, 10, 49, 61].

Another topic related to camera-based assistive technologies is the use of visual markers for tracking objects in the environment. For example, in [27] different types of visual tracking methods are studied to make shopping easy for blind people. The work in [45] studies color-coded markers for use in a wayfinding application for blind people. Visual markers are especially beneficial when computer vision methods do not provide satisfactory accuracy. Examples of assistive technologies that utilize visual markers are [28, 54, 55]. In PaperPal we also use visual markers to track the tip of the pen and the paper. To track the latter, PaperPal uses visual markers similar to the ones used for tracking the signature guide in [28]. To track the pen, PaperPal uses visual markers attached to a 3D printed pen topper, inspired by previous work on pen tracking that also uses visual markers [21, 64-66].
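
Once the paper's corner markers and the pen-topper marker have been detected in a camera frame (e.g., with an ArUco detector, which is omitted here), the pen tip can be expressed in paper-relative coordinates. The sketch below is a simplified illustration, not PaperPal's implementation: it solves for the pen position in the basis spanned by the paper's edges, and all names are our own.

```python
def paper_coords(pen, origin, x_mark, y_mark):
    """Map an image-space pen-tip position to paper-relative
    coordinates, given image positions of three paper corner markers:
    origin (top-left), x_mark (top-right) and y_mark (bottom-left).

    Solves pen - origin = u * ex + v * ey; a result with both u and v
    in [0, 1] means the pen tip lies over the paper.
    """
    ex = (x_mark[0] - origin[0], x_mark[1] - origin[1])  # paper x axis
    ey = (y_mark[0] - origin[0], y_mark[1] - origin[1])  # paper y axis
    d = (pen[0] - origin[0], pen[1] - origin[1])
    det = ex[0] * ey[1] - ex[1] * ey[0]
    if det == 0:  # degenerate marker layout (collinear detections)
        raise ValueError("markers are collinear")
    u = (d[0] * ey[1] - d[1] * ey[0]) / det
    v = (ex[0] * d[1] - ex[1] * d[0]) / det
    return u, v
```

An affine model like this ignores perspective distortion; with all four corner markers detected, a homography would be the more faithful choice.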
§ 3 A WIZARD OF OZ PILOT STUDY

To the best of our knowledge, there is no previous research on how blind people write on paper documents attached to non-stationary surfaces, namely portable clipboards. To this end, we conducted a pilot study to assess the feasibility of an assistive tool that uses a wearable device for filling paper forms attached to clipboards. The study explored these specific questions: (1) How do blind people write on non-stationary surfaces like clipboards with a wearable? (2) How do blind people coordinate their hand and body movements to keep the pen and paper within the camera's field of view? (3) What is the most suitable on-body location for a wearable camera between the proposed locations of head vs. chest? In addition, the study was intended to gather requirements for the wearable camera attachment. Details of the study follow.

§ 3.1 PARTICIPANTS

Eight (8) participants (3 males, 5 females), whose ages ranged from 35 to 77 (average age 50), were recruited for the pilot study. All participants were completely blind; all knew how to write on paper; none had any motor impairments that would have affected their full participation in the study.

§ 3.2 APPARATUS

The study used a standard ballpoint pen, a portable clipboard, and a credit-card-sized signature guide. Each form, printed on standard letter-sized paper, had 5 randomly placed equal-sized fields, with the same distance between consecutive fields. The wizard used a Nexus phone to send instructions to an iPhone 8+. Participants wore the iPhone 8+ at two on-body locations using two holders. The first holder had the iPhone attached to ski goggles; participants wore this on the head akin to smart glasses - Wearable_head (Figure 2B). Using the second holder, participants wore a lanyard around their necks with the phone resting on their chests; this holder had a reflective mirror to redirect the camera's field of view and served as the wearable on the chest - Wearable_chest (Figure 2A, C).

Figure 2: The apparatus for the pilot study. A: The phone case with a reflective mirror in front of the camera, to be worn on the chest using the lanyard in C. B: Ski goggles for wearing the phone as glasses. D: Paper with ArUco markers where users mark 'x' in the numbered form fields.

§ 3.3 STUDY DESIGN

In this within-subjects study every participant filled out a total of 4 random forms corresponding to four conditions: <Wearable_head, form on desk>, <Wearable_head, form on clipboard>, <Wearable_chest, form on desk>, and <Wearable_chest, form on clipboard>.

A total of 32 forms were filled out (8 participants × 4 conditions). The order of the four form-filling tasks was randomized to minimize the learning effect. The wizard app was used by the experimenter (i.e., the wizard) to manually direct the participant to the form fields by sending directional audio instructions such as "move left", "move right", and so on. The participant's phone ran an app to track the paper based on the markers printed on it (see Figure 2D). The participant's phone was also instrumented to gather study data throughout each study session, which was also video recorded.

Accuracy was measured as the percentage of overlap between the annotated rectangular region of a given form field (a priori) and the rectangle enclosing the participant's written text in that field (annotated after the study) [28].
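
The overlap metric reduces to intersecting two axis-aligned rectangles. A minimal sketch follows; taking the written-text rectangle as the denominator is our assumption, since the precise normalization is not spelled out here.

```python
def overlap_percentage(field, text):
    """Percentage of the written-text rectangle that falls inside the
    annotated form-field rectangle. Rectangles are (x, y, w, h) tuples
    in a common coordinate frame.
    """
    fx, fy, fw, fh = field
    tx, ty, tw, th = text
    # width and height of the intersection (0 if the rectangles are disjoint)
    iw = max(0, min(fx + fw, tx + tw) - max(fx, tx))
    ih = max(0, min(fy + fh, ty + th) - max(fy, ty))
    return 100.0 * iw * ih / (tw * th)
```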
§ 3.4 PROCEDURE

Each form-filling task began with the wizard directing the participant to the first form field. We regarded this as the initialization phase and discounted it from our measurements, so as to exclude any confounding variables that might arise from starting off at a random position. Upon reaching the form field, the participant would initial the field with an "x" using the signature guide. If at any time during this process the paper disappeared from the camera's field of view, the iPhone app would raise a "paper not visible" audio alert. In response, the participant would make adjustments by shifting the paper or the wearable to bring it back into view, which was acknowledged by a "paper is visible" shout-out from the app. The participant received navigational instructions only while the paper was visible to the camera. The experimenter monitored the participant's navigational progress and sent audio directions in real time to guide the user's signature guide to each form field. At the conclusion of the session, users compared and contrasted writing on the desk vs. the clipboard, the wearable's location on the chest vs. the head, and other experiences, in an open-ended discussion.
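
The wizard's directional prompts can be generated automatically once pen and target positions are known in a paper-aligned frame: pick the dominant axis and announce its direction. A hypothetical sketch of such an instruction generator (the tolerance value and function names are illustrative, not PaperPal's actual parameters):

```python
def direction_instruction(pen, target, tolerance=10):
    """Return an audio prompt steering the pen toward the target field
    center; positions are (x, y) in a paper-aligned frame with y
    growing downward. `tolerance` is the per-axis arrival radius.
    """
    dx = target[0] - pen[0]
    dy = target[1] - pen[1]
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "you are on the field"
    if abs(dx) >= abs(dy):  # the dominant axis decides the prompt
        return "move right" if dx > 0 else "move left"
    return "move down" if dy > 0 else "move up"
```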
§ 3.5 KEY TAKEAWAYS

Chest vs. Head Location for the Wearable: 6 out of 8 participants preferred the chest location, one preferred the head location, and one had no preference. With the camera on the head, the paper went out of focus far more often - by a factor of 4 - than when it was worn on the chest.

The differences in percentage of overlap among the 4 conditions were statistically significant (repeated measures ANOVA, $F_{3,124} = 5.32$, $p = 0.002$). Pairwise comparisons with a post-hoc Tukey test showed that under the "head, clipboard" condition the field overlaps are significantly lower than when the user wears the phone on the chest and writes either on the desk ($p = 0.0102$) or on the clipboard ($p = 0.0171$). This suggests that when wearing the phone on the chest, users can better write at the correct location for papers placed both on the desk and on the clipboard.

These are excerpts from select participants regarding (1) Wearable_head: "Looking downward is not comfortable and this task requires a lot of looking down." and (2) Wearable_chest: "Around the neck is more comfortable and you can focus on the direction of the paper and hand." Overall, the chest location was better suited for placement of the wearable, and was adopted in the design of PaperPal.

Writing Surface: Unsurprisingly, all participants deemed writing on the desk easier than on the clipboard. However, pairwise comparisons did not show any statistically significant difference in accuracy or form fill-out time when only the paper placement variable changed from desk to clipboard. During the discussion, all participants mentioned real-life situations where they had to use clipboards.

Navigational Differences: On desks, we observed that the user moved the signature guide with one hand while using the other hand to feel the edge of the paper as a means to get a sense of the relative orientation of the signature guide w.r.t. the orientation of the paper. Such a behaviour was also reported in [28]. On the other hand, when holding the clipboard, participants could not feel the paper's edge in the same way as they did on flat desks. In fact, we observed that participants had difficulty moving the signature guide on a trajectory aligned with the paper's orientation. Despite this, the wizard was able to adapt the instructions to lead the participant to the target fields.

Reflective Mirror: There was a lot of variability in how participants held the clipboards. This means there is no single perfect angle for attaching the mirror to the holder that covers all these variations. A wide-angle camera increases the likelihood of the paper staying in its field of view, but this would require a large reflective mirror, which is not practical. Thus, a smartphone holder without a reflective mirror, whose orientation can be adjusted to position the camera, is desirable for a wearable on the chest.

Signature Guide: When using the clipboard, participants had to hold it throughout the interaction process. On the other hand, writing with the signature guide requires both hands. This became a difficult juggling act for the participants. These difficulties are reflected in the feedback of all the participants (e.g., P5 mentioned "specially you have to pickup your pen while using signature guide"). Hence using signature guides with clipboards is not an option.

§ 4 THE PAPERPAL WEARABLE ASSISTANT

§ 4.1 DESIGN OF 3D PRINTED PHONE HOLDER

Informed by our pilot study, we designed a 3D-printed holder to convert an off-the-shelf smartphone (iPhone 8+) into a hands-free wearable on the chest. In addition, the holder design had to meet these requirements: (a) support one-handed tilting of the phone to different angles, so that differences in how the clipboard is held by different users can be accommodated - users hold the clipboard with one hand and adjust the angle of the phone with the other to capture the paper in the camera's field of view; (b) use a minimal number of component pieces that are compact enough to fit in a pocket; and (c) ease of assembly/disassembly.

Figure 3: The iPhone holder. The L-shaped half fork can be attached to the phone case by rotating the screw inside the threaded bearing. When tightened, the L-shaped half fork can rotate.

Figure 4: PaperPal's interaction: the two-state interaction automaton.

These requirements led to the design of the adjustable iPhone holder shown in Figure 3, which went through several rounds of experimentation. The design comprises two pieces. The first is a phone case with a small threaded bearing on the side. The second is an L-shaped half fork which facilitates tilting of the phone; it is attached to the phone case by rotating its screw into the threaded bearing and can be rotated 360 degrees. A lanyard is attached to the lifting lug so that the holder, with its phone, can be worn around the neck; the user can change the lanyard ribbon's length. The L-shaped fork can be rotated by the user to adjust the tilt angle of the wearable phone. Furthermore, the tilt angle can be adjusted to support upright placement of the phone on a desk. Thus the holder operates both as a wearable and as a stationary holder for writing on a flat desk.

§ 4.2 PAPERPAL: AN OPERATIONAL OVERVIEW
|
| 122 |
+
|
| 123 |
+
The PaperPal system runs as an iPhone app. The user interacts with the items of the paper, namely text segments, form fields, and their labels, by moving the pen over the paper like a pointer and making gestures with the pen. Two types of gestures are used: (1) a unidirectional rotate, i.e., a left or right rotation of the pen around its longitudinal axis, and (2) a bidirectional rotate, made up of two consecutive rotations in opposite directions.
|
| 124 |
+
|
| 125 |
+
PaperPal's response to the user's pen movements and pen gestures is governed by a two-state interaction automaton shown in Figure 4.
|
| 126 |
+
|
| 127 |
+
The application starts in the "select item and read" state which is inspired by the smartphone screen reader interface. In this state the user can move the pen like a pointer to simultaneously select an item and hear an audio readout of the item that is associated with the location pointed by the pen. This interaction is analogous to the "touch exploration" on the smartphone screen reader.
|
| 128 |
+
|
| 129 |
+
The unidirectional rotate left (right) selects the previous (next) item on the document and its content is read aloud. This interaction is analogous to "swiping" on the smartphone screen reader.
|
| 130 |
+
|
| 131 |
+
The bidirectional rotate switches between the two states of the application namely "select item and read" and "navigate to item and write".
|
| 132 |
+
|
| 133 |
+
The "navigate to item and write" state handles two situations: (1) if the selected item is a text segment, its text content is read aloud; (2) if the selected item is a form field, its label is read aloud and navigational instructions are generated to direct the user's pen to the location of the field. No readouts of intermediate items take place while the user is being navigated to a form field. Upon reaching the field, PaperPal reads out the label of the form field once more to refresh the user's memory and directs the user to write in the field, alerting the user when the pen strays out of the field and giving instructions on how to move the pen back into the field and continue writing. In this state the user can do a bidirectional rotate at any time to move to the "select item and read" state, or stay in the current state and continue on to the other form fields or other items via unidirectional rotate gestures.
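As a concrete illustration, the two-state behavior described above can be sketched as a small state machine. This is a minimal sketch under assumed gesture and item names, not PaperPal's actual code:

```python
# Hypothetical sketch of PaperPal's two-state interaction automaton.
# Gesture names ("rotate_left", "rotate_right", "bidirectional") and the
# item representation are illustrative assumptions.

class PaperPalAutomaton:
    READ = "select item and read"
    WRITE = "navigate to item and write"

    def __init__(self, items):
        self.items = items          # ordered list of item labels on the form
        self.state = self.READ      # the app starts in the read state
        self.index = 0              # currently selected item

    def on_gesture(self, gesture):
        """Return the audio feedback produced for one pen gesture."""
        if gesture == "bidirectional":
            # a bidirectional rotate toggles between the two states
            self.state = self.WRITE if self.state == self.READ else self.READ
            return f"switched to {self.state}"
        if gesture == "rotate_right":       # next item
            self.index = min(self.index + 1, len(self.items) - 1)
        elif gesture == "rotate_left":      # previous item
            self.index = max(self.index - 1, 0)
        return f"{self.state}: {self.items[self.index]}"
```

Note that the bidirectional rotate toggles the state without moving the selection, mirroring the description above.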
|
| 134 |
+
|
| 135 |
+
§ 4.3 PAPERPAL IMPLEMENTATION
|
| 136 |
+
|
| 137 |
+
PaperPal uses the phone's camera to observe the user's actions. The application is implemented as an iOS app and uses the OpenCV library [6] for real-time video processing. Specifically, PaperPal: (a) tracks the physical location of the pen tip over the paper, and (b) detects pen gestures, namely unidirectional and bidirectional rotates. The pen location and gestures determine the audio responses, namely text readouts, navigation and writing instructions, that are generated in real time. Figure 5 is a high-level workflow of the process.
|
| 138 |
+
|
| 139 |
+
§ 4.3.1 VISUAL MARKERS
|
| 140 |
+
|
| 141 |
+
To enable accurate tracking, Aruco markers [50] of known size are used for both paper and pen tracking.
|
| 142 |
+
|
| 143 |
+
The paper tracker is a credit-card sized rectangular card (85.60 mm × 53.98 mm) wrapped by an Aruco board of 24 markers. It has a narrow diagonal groove that serves as a tangible guide for attaching the paper to the card. The user slides the paper's upper left corner into this groove.
|
| 144 |
+
|
| 145 |
+
The pen tracker is a cube-shaped pen topper with Aruco markers affixed to each face of the cube. It can be easily attached to any regular ball point pen and is resilient to hand occlusions.
|
| 146 |
+
|
| 147 |
+
§ 4.3.2 LOCATING PEN TIP ON PAPER
|
| 148 |
+
|
| 149 |
+
For each image frame containing the pen and paper, two transformations $H$ and $P$ are estimated: (1) $H$ is a homography that maps each image pixel to its corresponding location in the paper's coordinate system; it is estimated from the paper tracker via the DLT algorithm [34]. (2) $P$ is a projective transformation [34] between any 3D location in the pen tracker's coordinate system and its corresponding 2D image pixel. $P$ comprises the intrinsic camera calibration, which is measured once for the camera, and the extrinsic camera calibration, which is estimated from the pen markers with the EPnP method [42].
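As an illustration of the homography step, here is a minimal NumPy sketch of the DLT algorithm; it assumes exact, noise-free point correspondences and stands in for the paper's actual estimation pipeline:

```python
# Minimal DLT homography sketch (illustrative, not PaperPal's implementation).
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H from >= 4 point pairs so that
    dst ~ H @ src in homogeneous coordinates (DLT: the stacked linear
    constraints are solved via SVD; h is the right null vector)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]           # fix the scale ambiguity

def apply_h(H, pt):
    """Map a 2D point through a homography."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With noisy real correspondences one would normalize the points first and use many more than four pairs, as the Aruco board provides.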
|
| 150 |
+
|
| 151 |
+
For a pen that is touching the paper, PaperPal starts with the physical location of the pen tip, which is a fixed offset w.r.t. the pen tracker's coordinate system, and applies $P$ followed by $H$ to estimate the pen tip location in the paper's coordinate system.
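The two-step mapping can be sketched as follows, with a hypothetical projective matrix $P$ (3×4) and homography $H$ (3×3):

```python
# Illustrative sketch of the P-then-H pipeline; matrices and the tip
# offset used below are hypothetical, not calibrated values.
import numpy as np

def pen_tip_on_paper(P, H, tip_offset):
    """Project the fixed 3D tip offset (pen-tracker coordinates) to an
    image pixel with P, then map that pixel to paper coordinates with H."""
    X = np.append(np.asarray(tip_offset, dtype=float), 1.0)  # homogeneous 3D
    pix = P @ X                                              # image pixel
    pix = pix / pix[2]
    paper = H @ pix                                          # paper coords
    return paper[:2] / paper[2]
```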
|
| 152 |
+
|
| 153 |
+
We only estimate the pen location if the pen tip is close to the paper. To this end, we developed a heuristic based on the observation that in PaperPal, as the pen moves away from the paper it gets closer to the camera. Specifically, in the image, the observed size of the pen markers should not be more than twice the size of the paper markers. This criterion was chosen by experimenting with various thresholds and selecting the candidate with the lowest average re-projection error [34]. Furthermore, we remove outlier pen tip locations whose distance from the previously observed pen tip is more than 18 mm on the paper. This threshold was selected based on the fastest pen movements that could be captured using the pen's visual markers.
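A minimal sketch of this filtering heuristic follows; the data layout (per-frame marker sizes and tip positions) is an illustrative assumption:

```python
# Illustrative sketch of the proximity + outlier filter described above.
def filter_pen_tips(tips, pen_sizes, paper_sizes, max_jump=18.0):
    """Keep a tip estimate only if the pen markers appear at most twice
    the size of the paper markers (pen near the paper) and the tip moved
    less than max_jump mm since the last accepted estimate."""
    accepted, last = [], None
    for tip, ps, qs in zip(tips, pen_sizes, paper_sizes):
        if ps > 2 * qs:                 # pen lifted toward the camera
            continue
        if last is not None:
            dx, dy = tip[0] - last[0], tip[1] - last[1]
            if (dx * dx + dy * dy) ** 0.5 > max_jump:   # outlier jump
                continue
        accepted.append(tip)
        last = tip
    return accepted
```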
|
| 154 |
+
|
| 155 |
+
|
| 156 |
+
|
| 157 |
+
Figure 6: A: rectilinear path along the paper's coordinate system, B: non-rectilinear path along the paper's coordinate system, and C: the rectilinear path on the rotated paper's coordinate system.
|
| 158 |
+
|
| 159 |
+
§ 4.3.3 DETECTING PEN GESTURES
|
| 160 |
+
|
| 161 |
+
Previous work on pen rolling gestures has shown that, when writing on paper, unintended pen rotations occur at high speeds and with small angles [16]. We used this insight to increase the duration of intended rotation gestures and thus distinguish them from unintended ones.
|
| 162 |
+
|
| 163 |
+
To this end, through experimentation the duration for intended gestures, denoted $T_{\text{gesture}}$, was set to 600 ms. Rotation gestures are performed with the pen tip on or close to the paper surface. To detect rotation gestures we choose a 3D point $R$ w.r.t. the pen coordinate system that is close to the pen tip. We find $R$'s corresponding 2D location $r$ in the paper coordinate system using the same process that was used for estimating the position of the pen tip in Section 4.3 above. To detect a rotational movement along the longitudinal axis of the pen, the angle of $r$ relative to the tip of the pen is measured for each frame using simple trigonometry. We record the direction of the rotational angle (left/right) between each pair of consecutive frames. If, in the time window of $T_{\text{gesture}}$, the majority (over 90%) of pen rotations are in the same direction (left/right), a rotation gesture in the corresponding direction is detected. A bidirectional rotate involves two quick rotations in opposite directions; it is detected if, within the sliding time window of $T_{\text{gesture}}$, the majority (over 90%) of rotations in one half of the window are in one direction and those in the other half are in the opposite direction.
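The windowed majority test can be sketched as follows; the per-frame direction encoding (+1 right, -1 left) is an assumption for illustration:

```python
# Illustrative sketch of the windowed gesture classifier described above.
def detect_gesture(directions, majority=0.9):
    """directions: per-frame rotation direction (+1 right, -1 left) inside
    one T_gesture window. Returns 'left', 'right', 'bidirectional', or None."""
    n = len(directions)
    if n == 0:
        return None
    right = sum(1 for d in directions if d > 0)
    if right >= majority * n:
        return "right"
    if n - right >= majority * n:
        return "left"

    def dominant(segment):
        """+1/-1 if one direction holds the majority in the segment, else 0."""
        r = sum(1 for d in segment if d > 0)
        if r >= majority * len(segment):
            return 1
        if len(segment) - r >= majority * len(segment):
            return -1
        return 0

    # bidirectional: opposite directions dominate the two halves of the window
    half = n // 2
    da, db = dominant(directions[:half]), dominant(directions[half:])
    if da and db and da != db:
        return "bidirectional"
    return None
```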
|
| 164 |
+
|
| 165 |
+
§ 4.3.4 GENERATING AUDIO RESPONSES
|
| 166 |
+
|
| 167 |
+
PaperPal responds to the user's pen movements by generating four kinds of audio responses: coordination alerts, audio readouts, writing and navigation instructions.
|
| 168 |
+
|
| 169 |
+
Coordination alerts: Alerts issued when the paper and/or the pen move out of the camera's field of view. To avoid needless alerts for momentary movements, we alert only when the pen or paper has been out of the field of view for longer than 2 seconds. Readouts: The textual content of an item selected by the user is read out in audio.
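The 2-second debouncing of coordination alerts can be sketched as follows; the frame rate and per-frame visibility flags are illustrative assumptions:

```python
# Illustrative sketch of debounced coordination alerts.
def coordination_alerts(visible, fps=30, min_gap_s=2.0):
    """visible: per-frame flag, True when pen and paper are in the field of
    view. Emit an alert (frame index) only once the objects have been out
    of view for more than min_gap_s seconds, ignoring brief dropouts."""
    alerts, missing = [], 0
    threshold = int(min_gap_s * fps) + 1     # frames needed to cross min_gap_s
    for frame, v in enumerate(visible):
        if v:
            missing = 0                      # back in view: reset the counter
        else:
            missing += 1
            if missing == threshold:         # just exceeded the grace period
                alerts.append(frame)
    return alerts
```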
|
| 170 |
+
|
| 171 |
+
Table 1: Participants' demographics and habits regarding braille, writing, and smartphone applications (SG stands for signature guide).
|
| 172 |
+
|
| 173 |
+
| ID | Pilot study | Age (Sex) | Diagnosis (Light perception) | Braille usage (Level) | Braille scenarios | Writing usage (Level) | Writing scenarios | SG Own (Carry) | Smartphone (experience) | Smartphone apps for papers |
|----|----|----|----|----|----|----|----|----|----|----|
| P1 | yes | 34 (F) | retrograde optic atrophy (yes) | daily (advanced) | papers at work, at the library | daily (beginner) | doctor's office, legal forms, banks | yes (always) | iPhone 10S (advanced) | seeingAI, KNFB reader, voice dream reader, voice dream writer |
| P2 | yes | 63 (F) | acute congenital glaucoma (no) | daily (advanced) | taking notes for myself | rarely (beginner) | leaving notes for sighted peers | yes (always) | iPhone 8 (advanced) | be my eyes |
| P3 | no | 32 (F) | medical malpractice (no) | daily (beginner) | elevators, remote control | weekly (advanced) | doctor's office, checks | no | iPhone 10 (advanced) | None |
| P4 | no | 55 (M) | retinal detachment (no) | monthly (advanced) | elevators, mails | weekly (beginner) | timesheet signature, checks, legal documents | no | iPhone (advanced) | KNFB reader |
| P5 | yes | 54 (M) | glaucoma (no) | - | - | weekly (beginner) | shopping receipts, credit card bills | no | iPhone (beginner) | seeingAI, tap tap see |
| P6 | yes | 46 (F) | retinitis pigmentosa (yes) | never (beginner) | elevators, mails | monthly (beginner) | legal documents, doctor's office | no | iPhone 8 (advanced) | seeing AI |
| P7 | no | 38 (M) | optic atrophy, retinitis pigmentosa (yes) | monthly (advanced) | reading documents | daily (advanced) | documents at work, taking notes for sighted peers | yes (often) | iPhone 11 Pro (advanced) | KNFB reader, seeingAI, Aira, voice dream scanner |
| P8 | no | 61 (M) | retinitis pigmentosa (yes) | daily (advanced) | elevator, calendar | daily (advanced) | timesheet signature, checks, legal documents | yes (always) | flip phone (beginner) | None |
|
| 202 |
+
|
| 203 |
+
Writing instructions: While writing is in progress, instructions to keep the pen within the field are given whenever the estimated pen tip falls outside the rectangular boundary of the field. For example, if the user's pen position has strayed above (below) the field, the user is guided back to the field with a "Move down (up)" instruction.
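A minimal sketch of how such a directive could be derived from the pen tip and the field rectangle (the coordinate convention, y growing downward on the paper, is an assumption):

```python
# Illustrative sketch of the out-of-field writing instruction.
def writing_instruction(tip, field):
    """tip = (x, y) in paper coordinates, field = (x0, y0, x1, y1),
    with y growing downward. Returns a corrective instruction when the
    pen tip strays out of the field, or None while it is inside."""
    x, y = tip
    x0, y0, x1, y1 = field
    if y < y0:
        return "Move down"    # strayed above the field
    if y > y1:
        return "Move up"      # strayed below the field
    if x < x0:
        return "Move right"
    if x > x1:
        return "Move left"
    return None               # inside the field: keep writing
```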
|
| 204 |
+
|
| 205 |
+
Navigation instructions: Navigational instructions consist of four basic directives, namely up, down, left, and right. With these four directives the user can be guided to any field on the paper. In [28] a simple navigation algorithm was used to guide the user along a rectilinear path that corresponded to the Manhattan distance between the pen and the field; see Figure 6A. Recall that our pilot study revealed that the user's navigational movements in response to the audio instructions can deviate from the intended axes when the paper is placed on a clipboard. Figure 6B demonstrates how the user's pen tip trajectory can deviate from the expected path along the paper's coordinate system with simple rectilinear navigational instructions.
|
| 206 |
+
|
| 207 |
+
To address this problem we estimate the deviation angle of the pen tip's trajectory w.r.t. the paper's coordinate system. To compensate for this deviation, we rotate the paper's coordinate system by the same angle but in the opposite direction, so that the pen tip's trajectory is aligned with the transformed axes; see Figure 6C. The navigation directives are generated w.r.t. the transformed axes. Observe that the pen tip trajectory now follows a rectilinear path w.r.t. the transformed axes. To estimate the deviation angle, we use the pen tip's estimated location $t$ on the paper and find the angle between $t$ and the intended axis (horizontal when the navigation instruction is left or right, vertical when it is up or down). To avoid noise and jitter in the transformed axes, the deviation angle is averaged over a sliding window of one second.
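One way to realize this compensation is sketched below; estimating the drift from the trajectory endpoints is a simplification of the paper's one-second sliding-window average:

```python
# Illustrative sketch of drift-compensated navigation directives.
import math

def compensated_directive(pen_history, target, window=30):
    """pen_history: recent pen-tip positions in paper coordinates
    (y growing downward); target: the field location. Estimate the user's
    drift angle from the recent trajectory, rotate the frame to align
    with it, and issue the directive in the rotated frame."""
    pts = pen_history[-window:]
    dx = pts[-1][0] - pts[0][0]
    dy = pts[-1][1] - pts[0][1]
    theta = math.atan2(dy, dx)          # deviation from the horizontal axis
    # rotate the pen-to-target vector into the drift-aligned frame
    tx, ty = target[0] - pts[-1][0], target[1] - pts[-1][1]
    rx = tx * math.cos(theta) + ty * math.sin(theta)
    ry = -tx * math.sin(theta) + ty * math.cos(theta)
    if abs(rx) >= abs(ry):
        return "right" if rx > 0 else "left"
    return "down" if ry > 0 else "up"
```

With this scheme, a user who consistently drifts diagonally still hears a single "right" directive while moving along their natural axis toward the field.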
|
| 208 |
+
|
| 209 |
+
§ 5 EVALUATION
|
| 210 |
+
|
| 211 |
+
We conducted an IRB-approved user study of PaperPal to evaluate its effectiveness as a form-filling assistant for blind people. To this end, the study was designed to answer the following questions: (a) How accurately can users fill out forms in terms of writing on the correct location? (b) How long does it take to fill out forms? (c) What is the overall user experience of using PaperPal to fill out paper forms of different sizes, layouts, and texts?
|
| 212 |
+
|
| 213 |
+
§ 5.1 PARTICIPANTS
|
| 214 |
+
|
| 215 |
+
Ten fully blind participants were recruited. However, two participants could not attend, and the study was conducted with the remaining eight, whose ages ranged from 32 to 63 (average = 47.88, std = 12.16; 4 female, 4 male). Note that 4 of the 8 participants were also part of the WoZ pilot study discussed in Section 3. Table 1 presents the demographic data of the participants. The participants were compensated $50 per hour. All the participants were right-handed, and none had any motor impairments that impeded their full participation in the study. All the participants (except P5) were familiar with braille, and all of them affirmed that they knew how to write on paper. All participants stated that in real life they always asked a sighted peer to fill out forms for them, except for affixing their signatures; for that, they were led by the sighted peer to the signature field, where they would sign by themselves.
|
| 216 |
+
|
| 217 |
+
§ 5.2 APPARATUS
|
| 218 |
+
|
| 219 |
+
The PaperPal application ran on an iPhone 8+. The 3D-printed holder, lanyard, paper tracker, pen tracker, a regular ballpoint pen, and a clipboard were provided to the user; see Figure 1. Finally, each participant was given 4 paper forms to fill out.
|
| 220 |
+
|
| 221 |
+
Forms: The forms were selected to have different properties and to reflect realistic scenarios. Specifically, the forms were:
|
| 222 |
+
|
| 223 |
+
* (F1): A regular-size check that consists of six fields, namely pay to the order of, date, $, dollars, memo, and signature.
|
| 224 |
+
|
| 225 |
+
* (F2): A restaurant receipt that consists of three fields namely tip, total, and signature.
|
| 226 |
+
|
| 227 |
+
* (F3): A template for a lease agreement that consists of the following eight fields: landlord's first name, landlord's last name, tenant's first name, tenant's last name, landlord's signature, date, tenant's signature, and date.
|
| 228 |
+
|
| 229 |
+
* (F4): An informed consent form that requires the participant to fill out four fields which are full name, date of birth, participant's signature, and date.
|
| 230 |
+
|
| 231 |
+
The first two forms (F1 and F2) are quite similar to the ones used in the evaluation of WiYG [28]. Two additional forms were selected to evaluate more complex forms in terms of the number of fields (F3) and text items (F4); see Figure 7. These forms have different paper sizes, orientations, and layouts. Specifically, F1 and F2 are smaller than the standard letter-size pages used in F3 and F4. The fields in F2 are vertically aligned and placed below one another, the F3 fields are placed horizontally in a table-like layout, and F1 and F4 have more complex layouts.
|
| 232 |
+
|
| 233 |
+
|
| 234 |
+
|
| 235 |
+
Figure 7: The four forms used in the user study. Note that the scale of the images does not represent their relative sizes (refer to the dimensions in the figure). The participants' handwriting is annotated in blue (brown) when judged correct (incorrect) by human evaluators.
|
| 236 |
+
|
| 237 |
+
§ 5.3 DESIGN
|
| 238 |
+
|
| 239 |
+
The study was designed as a repeated-measures within-subject study. Each participant was required to fill out each of the 4 forms (4 tasks) with PaperPal in a counterbalanced order using a Latin square [18]. The task completion time was the elapsed duration from the moment the pen was detected over the paper for the first time to the moment the user finished writing on the last form field. Accuracy was measured as the percentage of overlap between the ground-truth annotated rectangular region of a given form field (marked a priori) and the rectangle enclosing the participant's written text for that same field; see Figure 7.
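The overlap metric can be made precise with a small sketch; representing rectangles as (x0, y0, x1, y1) tuples is an illustrative convention:

```python
# Illustrative sketch of the overlap-percentage accuracy metric.
def overlap_percentage(written, truth):
    """Percentage of the written bounding box (x0, y0, x1, y1) that falls
    inside the ground-truth field rectangle."""
    wx0, wy0, wx1, wy1 = written
    tx0, ty0, tx1, ty1 = truth
    ix = max(0.0, min(wx1, tx1) - max(wx0, tx0))   # intersection width
    iy = max(0.0, min(wy1, ty1) - max(wy0, ty0))   # intersection height
    area = (wx1 - wx0) * (wy1 - wy0)               # area of the writing box
    return 100.0 * ix * iy / area if area > 0 else 0.0
```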
|
| 240 |
+
|
| 241 |
+
§ 5.4 PROCEDURE
|
| 242 |
+
|
| 243 |
+
To start, we draw attention to the circumstances surrounding the study. It was conducted after the gradual re-opening of businesses shut down due to the COVID-19 pandemic. Consequently, the study procedure was adapted to follow CDC-recommended safety measures. Specifically, both the participant and the experimenter wore face masks and kept the recommended social distance from each other. The experimenter therefore relied on verbal communication instead of physical demonstrations to conduct the study.
|
| 244 |
+
|
| 245 |
+
Each session began with a semi-structured interview to gather demographic data, reading/writing habits, and prior experiences with assistive smartphone apps (≈ 20 minutes); see Table 1.
|
| 246 |
+
|
| 247 |
+
Following this step, the participant was instructed on how to set up the PaperPal apparatus. The participant was asked to pick up each piece of the apparatus while the experimenter provided a verbal description of it. The participant then assembled the pieces, with the experimenter giving step-by-step instructions, until the participant was able to attach the paper tracker to the paper, the pen tracker to the pen, and the L-shaped fork to the phone case, clip the paper to the clipboard, and use the lanyard to wear the phone around the neck. After that, the experimenter described the user interface. The participant was asked to practice reading and writing with PaperPal on a set of test forms that were different from those used in the study. During the practice the experimenter observed the progress of the participant and intervened with instructions and explanations as needed. The entire process of assembling and practicing with the application took about an hour.
|
| 248 |
+
|
| 249 |
+
The participant was next asked to fill out the four forms F1, F2, F3, and F4, each followed by a Single Ease Question. A maximum of 10 minutes per form was allocated. An open-ended discussion with the experimenter took place upon completion of the tasks. Each study session lasted 2.5 hours, with the experimenter taking notes throughout the video-recorded session.
|
| 250 |
+
|
| 251 |
+
§ 5.5 RESULTS: TASK COMPLETION TIME
|
| 252 |
+
|
| 253 |
+
The task completion time is indicative of the efficiency of PaperPal as a form-filling assistant for blind people. On average, the total time spent to fill out forms F1 to F4 was 169.38, 73.91, 229.824, and 164.84 seconds, respectively. The task completion time is divided into: (a) navigation time, the time taken to navigate to the target field; (b) writing time, the time taken by the participant to fill in the field; and (c) coordination time, the time taken by the participant to bring the paper back into the camera's field of view. Figure 8 shows both the navigation time and writing time for each field.
|
| 254 |
+
|
| 255 |
+
For the coordination time, a repeated-measures ANOVA showed a statistically significant difference in the time spent on coordination between the 4 forms ($F = 7.00$, $p = 0.002$). Pairwise analysis with a post-hoc Tukey test showed that F2's coordination time is significantly less than F3's and F4's ($p < 0.01$ for both pairs). This can be explained as follows: working with larger papers such as F3 and F4 increases the likelihood of the paper or the pen straying out of the camera's field of view, which adds to the coordination time, compared to a small-sized form like F2 where straying out is less likely.
|
| 256 |
+
|
| 257 |
+
§ 5.6 RESULTS: ASSEMBLY TIME
|
| 258 |
+
|
| 259 |
+
The holder assembly time includes the time spent attaching the L-shaped half fork to the phone case and wearing the phone around the neck. All participants were able to assemble the holder, with an average time of 16.55 seconds (std = 4.33 seconds). In addition, the average time to attach the paper tracker card to the top-left corner of an A4 page was 18.75 seconds (std = 6.45 seconds). One difficulty was that most participants wore protective gloves, which made it harder for them to attach the paper to the paper tracker card.
|
| 260 |
+
|
| 261 |
+
§ 5.7 RESULTS: FORM FILLING ACCURACY
|
| 262 |
+
|
| 263 |
+
Overlap percentage: Recall that the overlap percentage is defined as the percentage of the rectangular bounding box enclosing the participant's writing that falls inside the ground-truth rectangular region of the field; see Figure 7. The average field overlap percentage for forms F1 to F4 was 61.84%, 66.02%, 87.69%, and 64.23%, respectively.
|
| 264 |
+
|
| 265 |
+
Out of all the form fields in this study (8 participants × 21 fields = 168), participants attempted to fill out 156. Of these 156 fields, 117 (75%) had an overlap region of 50% or higher. Figure 8 (Field Overlap) shows the overlap percentages.
|
| 266 |
+
|
| 267 |
+
|
| 268 |
+
|
| 269 |
+
Figure 8: Left: navigation time and writing time per field. Middle: accuracy in terms of the field overlap percentage. Right: human assessment of the filled-out fields.
|
| 270 |
+
|
| 271 |
+
Human assessment: We asked three human evaluators to assess whether each form field was correctly filled out by the participants. The final verdict was rendered via majority voting. The inter-annotator agreement was high (Fleiss' κ = 0.77). Figure 8 (Human Assessment) shows the percentage of fields deemed correct (81.55%), incorrect (10.71%), or missed (7.74%).
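For illustration, the majority vote and Fleiss' kappa can be computed as follows; this is a textbook implementation, not the authors' evaluation script:

```python
# Illustrative sketch: majority voting over annotator labels and
# Fleiss' kappa for agreement among a fixed-size rater panel.
from collections import Counter

def majority_vote(ratings):
    """Final verdict per item from the annotators' labels (one list per item)."""
    return [Counter(r).most_common(1)[0][0] for r in ratings]

def fleiss_kappa(ratings, categories):
    """Fleiss' kappa: ratings is a list of N items, each a list of the
    k raters' labels; categories lists the possible labels."""
    N, k = len(ratings), len(ratings[0])
    counts = [[r.count(c) for c in categories] for r in ratings]
    p_j = [sum(row[j] for row in counts) / (N * k) for j in range(len(categories))]
    P_i = [(sum(n * n for n in row) - k) / (k * (k - 1)) for row in counts]
    P_bar = sum(P_i) / N              # mean observed agreement per item
    P_e = sum(p * p for p in p_j)     # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```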
|
| 272 |
+
|
| 273 |
+
The human assessment of form-filling accuracy for forms F1 to F4 was 88.09%, 69.56%, 96.77%, and 85.71%, respectively. Note that human assessment shows higher accuracy than the average overlap. As in [28], this is because even when the written content is not perfectly inside the form field, human annotators still counted it as acceptable.
|
| 274 |
+
|
| 275 |
+
§ 5.8 RESULTS: PAPERPAL VS. WIYG
|
| 276 |
+
|
| 277 |
+
The standard check (F1) and receipt (F2) forms were similar to those used in WiYG's user study, which also used a check and a receipt of similar size and layout with an identical set of fields. Although differences in the setup and study participants preclude a rigorous performance comparison between PaperPal and WiYG, the informal comparison in Table 2 still gives a sense of the differences. It suggests that, despite the complexities of writing on a non-stationary clipboard with a wearable, users could fill out forms with PaperPal in a shorter time without compromising accuracy.
|
| 278 |
+
|
| 279 |
+
§ 5.9 RESULTS: SUBJECTIVE FEEDBACK
|
| 280 |
+
|
| 281 |
+
We administered a Single Ease Question to each participant to rate the difficulty of assembling the holder and of completing each form, on a scale of 1 to 7, with 1 being very difficult and 7 being very easy. The median rating for holder assembly was 7, suggesting that the assembly process was viewed as easy. The median ratings for the forms were F1: 6, F2: 5, F3: 4, F4: 3.5. In the open-ended discussion, participants mentioned that forms with long text content (such as F4) or with more fields were more difficult to fill out.
|
| 282 |
+
|
| 283 |
+
All participants liked the fact that PaperPal lets them work with paper documents independently; quoting P5: "I like the ability to fill out my own forms and checks". All of them mentioned that PaperPal fills an unmet need for preserving privacy when filling out forms and affixing signatures. They all appreciated PaperPal's integrated reading and writing feature, as they got to hear what was in the form they were filling out prior to affixing their signatures.
|
| 284 |
+
|
| 285 |
+
§ 5.10 DISCUSSION
|
| 286 |
+
|
| 287 |
+
In the user study participants missed filling out 12 of the 168 fields. We observed that most of the missed cases were associated with F3's two date fields, which were next to each other and had the same label, causing ambiguity. The last two fields in F4 were also missed by some participants because the long text gave them the false impression that no fields were left in the form. One way to address this in future work is to notify the user a priori of the total number of form fields.
|
| 288 |
+
|
| 289 |
+
Table 2: Comparison of PaperPal to WiYG (accuracy: overlap%)
|
| 290 |
+
|
| 291 |
+
|  | Standard Check (F1) |  | Receipt (F2) |  |
|---|---|---|---|---|
|  | average time (s) | average accuracy (%) | average time (s) | average accuracy (%) |
| WiYG [28] | 249.75 | 64.85 | 91.25 | 63.90 |
| PaperPal | 159.38 | 61.84 | 73.91 | 66.02 |
|
| 305 |
+
|
| 306 |
+
All participants mentioned that doing several consecutive rotate gestures required re-adjustments to their grip on the pen. Gestures with subtle finger movements such as finger flicks and taps on the pen are possible alternatives that can address this problem. This will require the use of computer vision recognition algorithms to detect subtle finger movements and is a topic for future research.
|
| 307 |
+
|
| 308 |
+
A unique aspect of PaperPal is that users can simply work with paper and pen without having to hold any other object, such as the phone in reading apps [2,8] or a signature guide while filling out paper forms as in WiYG. Finally, assembly of the 3D-printed attachment, attaching the trackers, and wearing the phone were all done independently by the study participants, affirming that the design of the PaperPal apparatus is highly accessible for blind users.
|
| 309 |
+
|
| 310 |
+
The results of the study showed that blind participants were able to fill out forms in a few minutes (average total times ranged from about 74 to 230 seconds across the four forms) with high accuracy: the average overlap percentage was more than 60% for all the forms, and the accuracy of the filled-out fields as judged by humans was as high as 96.77%.
|
| 311 |
+
|
| 312 |
+
§ 5.11 FUTURE WORK
|
| 313 |
+
|
| 314 |
+
Use of markers: PaperPal's accuracy depends on accurate tracking of the pen and paper, which is why tracking is done with visual markers attached to these objects. Eliminating these markers is a challenging open problem in computer vision research.
|
| 315 |
+
|
| 316 |
+
Document annotation: While the focus of this paper has been the accessible HCI interface for filling out paper forms with a wearable, we envision a front end to PaperPal consisting of an app like KNFB reader or voice dream scanner that acquires an image of the form document, which is subsequently dispatched to the augmented AWS Textract services for automatic annotation of the form elements. We conducted preliminary experiments with this service: we took pictures of the 4 forms used in the study (Section 5) with the voice dream scanner app, rectified the images using the "image to paper" transformation (Section 4.3), and processed the rectified images with AWS Textract augmented with a human-in-the-loop workflow. Out of the 21 fields in the 4 forms, 17 fields and their labels were detected correctly by Textract; 2 were erroneously recognized and were marked as such through intervention via the human-in-the-loop workflow. Integrating this process into PaperPal and evaluating it end to end is a topic of future work.
|
| 317 |
+
|
| 318 |
+
§ 6 CONCLUSION
|
| 319 |
+
|
| 320 |
+
PaperPal is a wearable reading and form-filling assistant for blind people. Wearability is achieved by transforming a smartphone, specifically an iPhone 8+, into a chest-worn wearable using a 3D-printed phone holder that can adjust the phone's viewing angle. PaperPal operates both on stationary flat tables and on non-stationary portable clipboards. A preliminary study with blind participants demonstrated the feasibility and promise of PaperPal: blind users could fill out form fields at the correct locations with human-judged accuracy of up to 96.77%. PaperPal has the potential to enhance their independence at home, at work, at school, and on the go.
|
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/m4WytW0txaS/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,397 @@
| 1 |
+
# Fast Monte Carlo Rendering via Multi-Resolution Sampling

Category: Research

## Abstract

Monte Carlo rendering algorithms are widely used to produce photo-realistic computer graphics images. However, these algorithms need to sample a substantial number of rays per pixel to capture global illumination properly and thus require an immense amount of computation. In this paper, we present a hybrid rendering method to speed up Monte Carlo rendering algorithms. Our method first generates two versions of a rendering: one at a low resolution with a high sample rate (LRHS) and the other at a high resolution with a low sample rate (HRLS). We then develop a deep convolutional neural network to fuse these two renderings into a high-quality image as if it were rendered at a high resolution with a high sample rate. Specifically, we formulate this fusion task as a super resolution problem that generates a high resolution rendering from a low resolution input (LRHS), assisted by the HRLS rendering. The HRLS rendering provides critical high-frequency details that are difficult for any super resolution method to recover from the LRHS input. Our experiments show that our hybrid rendering algorithm is significantly faster than the state-of-the-art Monte Carlo denoising methods while rendering high-quality images when tested on both our own BCR dataset and the Gharbi dataset [14].

Index Terms: Computing methodologies-Computer graphics-Ray tracing

## 1 INTRODUCTION

Physically-based image synthesis has attracted considerable attention due to its wide applications in visual effects, video games, design visualization, and simulation [26]. Among these techniques, ray tracing methods have achieved remarkable success as the most practical realistic image synthesis algorithms. For each pixel, they cast numerous rays that bounce through the environment to collect photons from light sources and integrate them to compute the color of that pixel. In this way, ray tracing methods are able to generate images with a very high degree of visual realism. However, obtaining visually satisfactory renderings with ray tracing algorithms often requires casting a large number of rays and thus takes a vast amount of computation. The extensive computational and memory requirements of ray tracing pose a challenge especially when running these rendering algorithms on resource-constrained platforms, and impede applications that require high resolutions and refresh rates.

To speed up ray tracing, Monte Carlo rendering algorithms are used to reduce the number of ray samples per pixel (spp) that a ray tracing method needs to cast [10]. For instance, adaptive reconstruction methods control sampling densities according to the reconstruction error estimated from existing ray samples [58]. However, when the ray sample rate is not sufficiently high, the rendering results from a Monte Carlo algorithm are often noisy. Therefore, the ray tracing results are usually post-processed to reduce the noise using algorithms like bilateral filtering and guided image filtering [28, 38, 43, 45, 49, 52, 57]. Recently, deep learning-based denoising approaches have been developed to reduce the noise from Monte Carlo rendering algorithms [2, 9, 23, 29]. These methods achieve high-quality results with impressive time reduction, and some of them have been incorporated into commercial tools, such as V-Ray, Corona Renderer, and RenderMan, and open-source renderers like Blender. However, real-time ray tracing is still a challenging problem, especially on devices with limited computing resources.

Our idea to speed up ray tracing is to reduce the number of pixels for which we need to estimate color values. For instance, $\times 2$ super resolution reduces the number of pixels that need ray tracing by ${75}\%$. There are two main challenges in super-resolving a Monte Carlo rendering. First, it is a fundamentally ill-posed problem to recover the high-frequency visual details that are missing from the low-resolution input. Second, a Monte Carlo rendering is subject to sampling noise, especially when it is produced at a low spp rate, and upsampling a noisy image will often amplify the noise as well. To address these challenges, we propose to generate two versions of a rendering: a low-resolution rendering at a reasonably high spp rate (LRHS) and a high-resolution rendering at a lower spp rate (HRLS). The LRHS is less noisy, while the noisier HRLS can potentially provide high-frequency visual details that are inherently difficult to recover from the low resolution image.

We accordingly develop a hybrid rendering method dedicated to images rendered by a Monte Carlo rendering algorithm. Our neural network takes both the LRHS and HRLS renderings as input. We use a de-shuffle layer to downsample the HRLS rendering to the same size as the LRHS and to reduce the computational cost. We then concatenate the features from both the LRHS and HRLS and feed them to the rest of the network to generate the high-quality high resolution rendering. Our experiments show that, given the hybrid input, our method outperforms the state-of-the-art Monte Carlo rendering algorithms significantly.

As there is no large scale ray-tracing dataset available to train our network, we collected the first large scale Blender Cycles Ray-tracing (BCR) dataset, which contains 2449 high-quality images rendered from 1463 models. The dataset covers various factors that affect the Monte Carlo noise distribution, such as depth of field, motion blur, and reflections. We render the images at a range of spp rates, including 1-8, 12, 16, 32, 64, 128, 250, 1000, and 4000 spp. All the images are rendered at 1080p resolution. Each image comes not only with the final rendered result but also with the intermediate render layers, including albedo, normal, diffuse, glossy, and so on.

This paper contributes to the research on photo-realistic image synthesis by integrating Monte Carlo rendering and image super resolution for efficient high-quality image rendering. First, we explore super resolution to reduce the number of pixels that need ray tracing. Second, we use multi-resolution sampling to both reduce noise and create visual details. Third, we develop the first large ray-tracing image dataset, which will be made publicly available.

## 2 RELATED WORK

Monte Carlo rendering is an important technology for photo-realistic rendering. It aims to reduce the number of rays that a ray tracing algorithm needs to cast and integrate while synthesizing a high quality image [10, 22]. Conventional Monte Carlo rendering algorithms investigate various ways to adaptively distribute ray samples [8, 13, 20, 33, 39-42, 47, 48]. When only a small number of rays are cast, the rendered images are often noisy. They are typically filtered using various algorithms [11, 21, 30, 31, 37, 43-45, 50]. Due to the space limit, we refer readers to a recent survey on Monte Carlo rendering [58].

Figure 1: This paper presents a hybrid rendering method to speed up Monte Carlo rendering. Our method takes a low-resolution, high-sample-rate rendering (LRHS) and a high-resolution, low-sample-rate rendering (HRLS) as inputs, and produces a high-resolution, high-quality result.

Our research is more closely related to the recent deep learning approaches to Monte Carlo denoising. Kalantari et al. trained a multilayer perceptron to learn the parameters of filters before applying these filters to the noisy images [23]. Bako et al. extended this method by employing filters with spatially adaptive kernels to denoise Monte Carlo renderings: they developed a convolutional neural network to estimate spatially adaptive filter kernels [2]. Chaitanya et al. developed an encoder-decoder network with recurrent connections to denoise a Monte Carlo image sequence [9]. Recently, Kuznetsov et al. [29] developed a deep convolutional neural network approach that combines adaptive sampling and image denoising to optimize rendering performance. Different from the above methods, Gharbi et al. argued that splatting samples onto relevant pixels is more effective for denoising than gathering relevant samples for each pixel. Accordingly, they developed a novel kernel-splatting architecture that estimates a splatting kernel for each sample, which was shown to be particularly effective when only a small number of samples is used [14]. Compared to these methods, our method improves the speed of Monte Carlo rendering by reducing the number of pixels that we need to cast rays for.

Our work also builds upon the success of deep image super resolution methods [1, 12, 15, 19, 27, 34-36, 51, 53, 56]. Dong et al. developed the first deep learning approach to image super resolution [12]. They designed a three-layer fully convolutional neural network and showed that a neural network could be trained end to end for super resolution. Since then, a variety of neural network architectures, such as residual networks [16], densely connected networks [18], and squeeze-and-excitation networks [17], have been introduced to the task of image super resolution. For instance, Kim et al. developed a deep neural network that employs residual architectures and obtained promising results [27]. Lim et al. further improved super resolution results by removing batch norm layers and increasing the depth of the network [35]. Zhang et al. developed a residual densely connected network that explores intermediate features via local dense connections for better image super resolution [55]. Zhang et al. recently reported that a channel-wise attention network, which learns attention as guidance to model channel-wise features, can more effectively super resolve a low resolution image [54]. While these image super resolution methods achieve promising results, recovering visual details that do not exist in the input image is necessarily an ill-posed problem. Our method addresses this fundamentally challenging problem by leveraging a high-resolution image rendered at a low ray sample rate. Such an auxiliary rendering can be produced quickly and yet provides visual details that do not exist in the low resolution input rendered at a high sample rate.

## 3 THE BLENDER CYCLES RAY-TRACING DATASET

To the best of our knowledge, there is no large scale ray-tracing dataset publicly available for training a deep neural network. Therefore, we develop the Blender Cycles Ray-tracing (BCR) dataset, which consists of a large number of high quality scenes together with the ray-tracing images and the intermediate render layers. We will share BCR with our community.

### 3.1 Source Scenes

Blender's Cycles is a popular ray tracing engine that is capable of high-quality production rendering. It has an open and active community where thousands of artists share their work. Using the Blender community assets, we collected over 8000 scenes under Creative Commons licenses${}^{1,2,3}$, which allow us to share our dataset with the research community. We rendered these scenes at 4000 spp and manually checked the rendered images and all the render layers. We eliminated scenes with missing materials, a lack of high frequency information, or noticeable rendering noise even at 4000 spp. This culling process reduced the total number of source scenes to 1465. The remaining scenes produced 2449 images by rendering from 1 to 10 viewpoints per scene. We split the dataset into 3 subsets: 2126 images from 1283 scenes as the training set, 193 images from 76 scenes as the validation set, and 130 images from 104 scenes as the test set. No scene appears in more than one subset. As shown in Figure 2, our dataset covers various optical phenomena, such as motion blur, depth of field, and complex light transport effects. It covers a variety of scene contents, including indoor scenes, buildings, landscapes, fruits, plants, vehicles, animals, glass, and so on.

### 3.2 Rendering Settings

To generate the high-quality "ground-truth" renderings, we rendered each scene at 4000 spp. As described previously, we noticed that the rendered images for some scenes still contained noticeable noise even at 4000 spp, and we removed them through manual inspection. On average, it took around 20 minutes to render an image on an Nvidia Titan X Pascal GPU. We set the rendering resolution to ${1920} \times {1080}$ or ${1080} \times {1080}$ to cover most of the scene content. For each image, we provide both the final rendered image and the render layers, which are essential for Monte Carlo rendering [2, 3, 9, 23, 25, 29, 32, 33]. In total, each image has 33 render layers, including albedo, normal, depth, diffuse color, diffuse direct, diffuse indirect, glossy color, and so on. All images in the BCR dataset can be reproduced from the render layers as follows [6]:

$$
{I}_{HR} = {I}_{\text{Diff}} + {I}_{\text{Gloss}} + {I}_{\text{Sub}} + {I}_{\text{Trans}} + {I}_{\text{Env}} + {I}_{\text{Emit}}, \tag{1}
$$

where the diffuse, glossy, subsurface, and transmission layers are composed from their color, direct light, and indirect light layers:

$$
{I}_{\text{Diff}} = {I}_{\text{DiffCol}} * \left( {I}_{\text{DiffDir}} + {I}_{\text{DiffInd}}\right),
$$

$$
{I}_{\text{Gloss}} = {I}_{\text{GlossCol}} * \left( {I}_{\text{GlossDir}} + {I}_{\text{GlossInd}}\right), \tag{2}
$$

$$
{I}_{\text{Sub}} = {I}_{\text{SubCol}} * \left( {I}_{\text{SubDir}} + {I}_{\text{SubInd}}\right),
$$

$$
{I}_{\text{Trans}} = {I}_{\text{TransCol}} * \left( {I}_{\text{TransDir}} + {I}_{\text{TransInd}}\right).
$$

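As an illustration of this compositing, here is a minimal NumPy sketch that reproduces Equations 1 and 2 from a dictionary of render layers. The layer key names (e.g. `DiffCol`, `DiffDir`) are shorthand assumptions for this sketch, not necessarily Blender's exact pass names.

```python
import numpy as np

def compose_final_image(layers):
    """Compose the final rendering from render layers (Eqs. 1-2).

    `layers` maps layer names to float32 arrays of shape (H, W, 3)
    in scene linear color space.
    """
    def component(prefix):
        # Each component = color * (direct + indirect), per Eq. 2.
        return layers[prefix + "Col"] * (layers[prefix + "Dir"]
                                         + layers[prefix + "Ind"])

    # Eq. 1: sum of the four composed components plus environment and emission.
    return (component("Diff") + component("Gloss") + component("Sub")
            + component("Trans") + layers["Env"] + layers["Emit"])
```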
---

${}^{1}$ http://www.blendswap.com

${}^{2}$ https://blenderartists.org

${}^{3}$ https://gumroad.com/senad

---

Figure 2: Examples from our BCR dataset.


Figure 3: Pixel value distribution of our BCR dataset. The rendered images use the scene linear color space and the pixel values are represented in float32. We use a logarithmic scale for the $y$ axis. While most pixels are in the range of $\left\lbrack {0,1}\right\rbrack$, the distribution has a long tail.

Besides rendering 4000-spp images as ground truth, we rendered each scene at 1-8, 12, 16, 32, 64, 128, 250, and 1000 spp as input for Monte Carlo rendering enhancement algorithms, including ours. The rendered images and the auxiliary results were saved in the scene linear color space, which closely corresponds to natural colors [5]. These images were rendered with a high dynamic range, with pixel values represented in float32. As shown in Figure 3, most of the pixel values are in the range of $\left\lbrack {0,1}\right\rbrack$; however, the pixel value distribution has a long tail. We also noticed that many of the very large values come from firefly rendering artifacts. Therefore, we removed these outliers by clipping pixel values at 100. An image in the scene linear space can be converted to the sRGB space for visualization in this paper as follows:

$$
s = \left\{ \begin{array}{ll} 0 & \text{ if }l \leq 0, \\ {12.92} \times l & \text{ if }0 < l \leq {0.0031308}, \\ {1.055} \times {l}^{\frac{1}{2.4}} - {0.055} & \text{ if }{0.0031308} < l < 1, \\ 1 & \text{ if }l \geq 1, \end{array}\right. \tag{3}
$$

where $l$ and $s$ denote the pixel value in the scene linear color space and in sRGB, respectively [4].

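The piecewise conversion in Equation 3 can be sketched in vectorized NumPy as follows; the final clipping to $[0, 1]$ implements the first and last cases:

```python
import numpy as np

def linear_to_srgb(l):
    """Convert scene linear values to sRGB per Eq. 3 (element-wise)."""
    l = np.asarray(l, dtype=np.float64)
    # Linear segment for small values, gamma segment otherwise.
    s = np.where(l <= 0.0031308,
                 12.92 * l,
                 1.055 * np.power(np.clip(l, 1e-12, None), 1.0 / 2.4) - 0.055)
    # Clipping covers the l <= 0 and l >= 1 cases of Eq. 3.
    return np.clip(s, 0.0, 1.0)
```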
### 3.3 Low Resolution Image Generation

A straightforward way to generate low resolution images is to change the output resolution in Cycles. However, directly rendering a low resolution image does not always work [7]. For example, some scenes are modelled using subdivision, and changing the rendering resolution disrupts the inherent relationship between the material and geometry settings in the scene files, causing mismatches between images rendered at different resolutions. Therefore, we generate low resolution images by downsampling the corresponding high resolution rendered images via nearest-neighbor degradation. We did not use bilinear or bicubic sampling because nearest-neighbor degradation more accurately simulates a real rendering engine: rays for low resolution renderings are sampled on a sparser grid than for high resolution ones.

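Under this view, the degradation can be sketched as a simple strided slice. `nearest_downsample` is a hypothetical helper name, and we assume the kept sample is the top-left pixel of each block:

```python
import numpy as np

def nearest_downsample(img, s):
    """Downsample an (H, W, C) image by factor s via nearest-neighbor
    degradation: keep one pixel per s x s block, mimicking a renderer
    that casts rays on an s-times sparser pixel grid."""
    return img[::s, ::s, :]
```

Which sample within each block is kept corresponds to the alignment of the sparse ray grid; the top-left choice here is one convention.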
### 3.4 Monte Carlo Rendering Dataset Comparison

<table><tr><td>Dataset</td><td>Images</td><td>Scenes</td><td>SPP</td><td>Layers</td></tr><tr><td>Kalantari [24]</td><td>500</td><td>20</td><td>4, 8, 16, 32, 64, 32000</td><td>5</td></tr><tr><td>KPCN [2]</td><td>600</td><td>-</td><td>32, 128, 1024</td><td>6</td></tr><tr><td>Chaitanya [9]</td><td>-</td><td>3</td><td>1, 4, 8, 16, 32, 256, 2000</td><td>3</td></tr><tr><td>Kuznetsov [29]</td><td>700</td><td>50</td><td>1, 2, 3, 4, 1024</td><td>4</td></tr><tr><td>BCR dataset</td><td>2449</td><td>1463</td><td>1-8, 12, 16, 32, 64, 128, 250, 1000, 4000</td><td>33</td></tr></table>

Table 1: Monte Carlo rendering dataset comparison.

We compare our dataset with those used in recent deep learning-based Monte Carlo denoising algorithms, including [2, 9, 23, 29]. As reported in Table 1, our dataset has over $3 \times$ as many images and over ${25} \times$ as many scenes as the other datasets. Moreover, all these existing datasets are private, while we will make our dataset public.

## 4 METHOD

Our method takes a low-resolution, high-spp image ${I}_{LRHS}$ and its corresponding high-resolution, low-spp image ${I}_{HRLS}$ as input and aims to estimate a corresponding HR image ${I}_{SR}$. ${I}_{LRHS}$ contains the RGB channels, while ${I}_{HRLS}$ is composed of the RGB channels and extra layers, including the albedo, normal, diffuse, specular, and variance layers, as these extra layers can provide high-frequency visual details.

As shown in Figure 4, we design a two-encoder, one-decoder network to estimate the HR image. Given ${I}_{LRHS}$ and ${I}_{HRLS}$, our network first extracts the features ${F}_{LRHS}$ and ${F}_{HRLS}$, respectively. We leverage a downscale module with de-shuffle layers [46] instead of pooling layers to downscale the feature maps, as de-shuffle layers keep the high-frequency information. Compared with upscaling the LRHS features, downsampling the HRLS features to the same size as ${F}_{LRHS}$ reduces the computational complexity of the network significantly. It also enables the features to fuse in earlier layers of the network. We obtain the fused feature ${F}_{0}$ by combining ${F}_{HRLS}$ with ${F}_{LRHS}$ through a fusion module and feed it to a sequence of residual dense groups (RDGs) [54, 55]. We then combine the feature ${F}_{G}$ from the RDGs with ${F}_{LRHS}$ by element-wise addition. Finally, we upscale the resulting dense feature ${F}_{DF}$ and predict the final HR image ${I}_{SR}$ through a convolutional layer. Below we describe the network in detail.


Figure 4: The architecture of our network. Our network takes a low-resolution, high-spp rendering (LRHS) and its corresponding high-resolution, low-spp rendering (HRLS) as input and predicts the final high-resolution, high-quality image.

Figure 5: Deshuffle layer for downscaling feature maps.

LRHS shallow feature ${F}_{LRHS}$. Following [35, 54, 55], we adopt a convolutional layer to extract the shallow feature ${F}_{LRHS}$:

$$
{F}_{LRHS} = {H}_{lrhs}\left( {I}_{LRHS}\right) , \tag{4}
$$

where ${H}_{lrhs}\left( \cdot \right)$ denotes a convolution operation.

HRLS shallow feature ${F}_{HRLS}$. We first extract the shallow feature from ${I}_{HRLS}$ with a convolutional layer:

$$
{F}_{HRLS}^{0} = {H}_{\text{hrls}}\left( {I}_{HRLS}\right) . \tag{5}
$$

Inspired by ESPCN [46], we design a deshuffle layer to downscale the features. As shown in Figure 5, the deshuffle layer downscales a feature map with a stride of $\alpha$; in our network, we set $\alpha = 2$. To downscale further, we stack deshuffle layers. Supposing our network has $D$ deshuffle layers, we obtain the output ${F}_{HRLS}$ as

$$
{F}_{HRLS} = {DSF}^{D}\left( {{DSF}^{D - 1}\left( {\cdots {DSF}^{1}\left( {F}_{HRLS}^{0}\right) \cdots }\right) }\right) , \tag{6}
$$

where ${DSF}\left( \cdot \right)$ denotes the deshuffle operation. By downscaling the auxiliary features, our network can operate at the size of the LRHS image, which significantly reduces the computational complexity of the overall network.

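The de-shuffle layer is a space-to-depth rearrangement: it trades spatial resolution for channels without discarding any values, unlike pooling. A minimal NumPy sketch for CHW feature maps follows (the actual network applies this to learned feature tensors; in PyTorch, `nn.PixelUnshuffle` and `nn.PixelShuffle` provide equivalent operations):

```python
import numpy as np

def deshuffle(x, a=2):
    """Space-to-depth: (C, H, W) -> (C*a*a, H/a, W/a), keeping every value."""
    c, h, w = x.shape
    assert h % a == 0 and w % a == 0
    x = x.reshape(c, h // a, a, w // a, a)
    return x.transpose(0, 2, 4, 1, 3).reshape(c * a * a, h // a, w // a)

def shuffle(x, a=2):
    """Depth-to-space: inverse of deshuffle, as used by the upscale module."""
    c, h, w = x.shape
    x = x.reshape(c // (a * a), a, a, h, w)
    return x.transpose(0, 3, 1, 4, 2).reshape(c // (a * a), h * a, w * a)
```

Because the operation is lossless, stacking $D$ deshuffle layers reduces spatial size by $\alpha^D$ while multiplying the channel count by $\alpha^{2D}$, so high-frequency detail from the HRLS input survives the downscaling.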
We concatenate ${F}_{LRHS}$ from the LRHS image and ${F}_{HRLS}$ from the HRLS image into a combined feature map ${F}_{0}$.

Residual densely connected block. We employ densely connected networks and residual groups to build the backbone of our neural network, as they have been shown effective for image super resolution [54, 55]. In our network, we use 4 convolutional layers in each residual densely connected block (RDB). By stacking $B = 5$ RDBs, we build a residual densely connected group (RDG) as follows:

$$
{F}_{g} = {RD}{B}_{B}\left( {{RD}{B}_{B - 1}\left( {\cdots {RD}{B}_{1}\left( {F}_{g - 1}\right) \cdots }\right) }\right) + {F}_{g - 1}. \tag{7}
$$

We predict the dense feature ${F}_{DF}$ with $G = 3$ RDGs as follows:

$$
{F}_{DF} = {RD}{G}_{G}\left( {{RD}{G}_{G - 1}\left( {\cdots {RD}{G}_{1}\left( {F}_{0}\right) \cdots }\right) }\right) + {F}_{0}. \tag{8}
$$

Upscale. In our network, we adopt the shuffle layer from ESPCN [46] to upscale the features and estimate the high resolution prediction ${I}_{SR}$:

$$
{I}_{SR} = {H}_{Rec}\left( {{UP}\left( {F}_{DF}\right) }\right) , \tag{9}
$$

where ${UP}\left( \cdot \right)$ denotes the upscaling operation [46].

Loss function. The BCR dataset is in the scene linear color space. As shown in Figure 3, the pixel value distribution of the BCR dataset has a long tail. The ${\ell }_{1}$ loss cannot handle this well because it is biased towards the extremely large pixel values. To handle this problem, we adopt the following robust loss:

$$
{\ell }_{r} = \frac{1}{N}\mathop{\sum }\limits_{{p \in {I}_{HR}}}\frac{\left| {I}_{HR}^{p} - {I}_{SR}^{p}\right| }{\beta + \left| {{I}_{HR}^{p} - {I}_{SR}^{p}}\right| }, \tag{10}
$$

where $\beta$ is the robust factor. For small differences, ${\ell }_{r}$ behaves similarly to ${\ell }_{1}$. For extremely large differences, ${\ell }_{r}$ approaches but always stays below 1. This prevents our network from being biased towards rare but extremely large pixel values. We set $\beta = {0.1}$ in our experiments.

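A sketch of Equation 10, assuming NumPy arrays in scene linear space:

```python
import numpy as np

def robust_loss(hr, sr, beta=0.1):
    """Robust loss of Eq. 10: per-pixel |d| / (beta + |d|), averaged."""
    d = np.abs(hr - sr)
    return float(np.mean(d / (beta + d)))
```

Because each per-pixel term is strictly below 1, a handful of firefly pixels with huge errors cannot dominate the average, while small differences are penalized roughly in proportion to their magnitude, as with $\ell_1$.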
Implementation details. We set the kernel size of all convolutional layers to $3 \times 3$, except for the fusion convolutional layer, whose kernel size is $1 \times 1$. Every convolutional layer is followed by a ReLU layer, except for the last one. The shallow, fusion, and dense features have 64 channels. During each training iteration, we randomly select the spp of ${I}_{HRLS}$ from $\{1\text{-}8, 12, 16, 32\}$ and the spp of ${I}_{LRHS}$ from $\{2\text{-}8, 12, 16, 32, 64, 128, 250, 1000, 4000\}$, while making sure that the spp of ${I}_{HRLS}$ is smaller than that of ${I}_{LRHS}$.

We use PyTorch to implement our network. We use a mini-batch size of 16 and train the network for 500 epochs; training takes about one week on a single Nvidia Titan Xp. We use the SGD optimizer with a learning rate of ${10}^{-4}$. We also perform data augmentation on the fly by randomly cropping patches. To save data loading time, we pre-crop the training HR images into ${300} \times {300}$ patches; during training, we further crop smaller patches from them. The final HR patch size is set to 96 for $\times 2$, 192 for $\times 4$, and 256 for $\times 8$. We select the model that works best on the validation set.

## 5 EXPERIMENTS

We evaluate our method by comparing it with representative state-of-the-art denoising methods for Monte Carlo rendering and image super resolution algorithms. We also conduct ablation studies to further examine our method. We use two metrics to evaluate our results. First, we adopt RelMSE (relative mean squared error) to report the results in the scene linear color space, which is defined as

$$
\operatorname{RelMSE} = {\lambda }_{1} * \frac{{\left( {I}_{SR} - {I}_{HR}\right) }^{2}}{{I}_{HR}^{2} + {\lambda }_{2}}, \tag{11}
$$

where ${\lambda }_{1} = {0.5}$ and ${\lambda }_{2} = {0.01}$ when experimenting on our BCR dataset, following KPCN [2]. For the Gharbi dataset, we use the evaluation code from its authors [14], where ${\lambda }_{1} = 1$ and ${\lambda }_{2} = {10}^{-4}$.

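Equation 11 is defined per pixel; a sketch that averages it over an image (our assumption about the reporting protocol) looks like:

```python
import numpy as np

def rel_mse(sr, hr, lam1=0.5, lam2=0.01):
    """RelMSE of Eq. 11, averaged over all pixels.

    lam2 keeps the denominator away from zero for dark pixels.
    """
    return float(np.mean(lam1 * (sr - hr) ** 2 / (hr ** 2 + lam2)))
```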
We also use PSNR to evaluate the results in the sRGB space. For our BCR dataset, we convert images to sRGB to calculate PSNR using Equation 3.

<table><tr><td rowspan="2">Method</td><td colspan="2">2 spp</td><td colspan="2">4 spp</td><td colspan="2">8 spp</td></tr><tr><td>PSNR</td><td>RelMSE</td><td>PSNR</td><td>RelMSE</td><td>PSNR</td><td>RelMSE</td></tr><tr><td>Input</td><td>18.12</td><td>0.2953</td><td>21.51</td><td>0.1400</td><td>24.75</td><td>0.0646</td></tr><tr><td>KPCN [2]</td><td>25.87</td><td>0.0390</td><td>27.31</td><td>0.0299</td><td>28.11</td><td>0.0276</td></tr><tr><td>KPCN-ft [2]</td><td>31.03</td><td>0.0078</td><td>33.69</td><td>0.0043</td><td>35.83</td><td>0.0026</td></tr><tr><td>Bitterli [3]</td><td>26.67</td><td>0.0293</td><td>27.22</td><td>0.0252</td><td>27.45</td><td>0.0226</td></tr><tr><td>Gharbi [14]</td><td>30.73</td><td>0.0068</td><td>31.61</td><td>0.0057</td><td>32.29</td><td>0.0050</td></tr><tr><td rowspan="2">Ours $\times 2$</td><td colspan="2">(4 - 1)</td><td colspan="2">(8 - 2)</td><td colspan="2">(16 - 4)</td></tr><tr><td>33.27</td><td>0.0044</td><td>35.15</td><td>0.0027</td><td>36.74</td><td>0.0019</td></tr><tr><td rowspan="2">Ours $\times 4$</td><td colspan="2">(16 - 1)</td><td colspan="2">(32 - 2)</td><td colspan="2">(64 - 4)</td></tr><tr><td>33.94</td><td>0.0039</td><td>35.21</td><td>0.0028</td><td>36.31</td><td>0.0022</td></tr><tr><td rowspan="2">Ours $\times 8$</td><td colspan="2">(64 - 1)</td><td colspan="2">(128 - 2)</td><td colspan="2">(250 - 4)</td></tr><tr><td>31.37</td><td>0.0075</td><td>32.35</td><td>0.0057</td><td>33.14</td><td>0.0049</td></tr></table>

Table 2: Comparison on our BCR dataset. Ours $\times 2$ indicates that our method performs $\times 2$ super resolution, and (4 - 1) indicates that it takes a 4 spp LRHS and a 1 spp HRLS as input, which is effectively 2 spp on average over all pixels.

<table><tr><td rowspan="2">Method</td><td colspan="2">4 spp</td><td colspan="2">8 spp</td><td colspan="2">16 spp</td></tr><tr><td>PSNR</td><td>RelMSE</td><td>PSNR</td><td>RelMSE</td><td>PSNR</td><td>RelMSE</td></tr><tr><td>Input</td><td>19.58</td><td>17.5358</td><td>21.91</td><td>7.5682</td><td>24.17</td><td>11.2189</td></tr><tr><td>Sen [45]</td><td>28.23</td><td>1.0484</td><td>28.00</td><td>0.5744</td><td>27.64</td><td>0.3396</td></tr><tr><td>Rousselle [42]</td><td>30.01</td><td>1.9407</td><td>32.32</td><td>1.9660</td><td>34.36</td><td>1.9446</td></tr><tr><td>Kalantari [24]</td><td>31.33</td><td>1.5573</td><td>33.00</td><td>1.6635</td><td>34.43</td><td>1.8021</td></tr><tr><td>Bitterli [3]</td><td>28.98</td><td>1.1024</td><td>30.92</td><td>0.9297</td><td>32.40</td><td>0.9640</td></tr><tr><td>KPCN [2]</td><td>29.75</td><td>1.0616</td><td>30.56</td><td>7.0774</td><td>31.00</td><td>20.2309</td></tr><tr><td>KPCN-ft [2]</td><td>29.86</td><td>0.5004</td><td>31.66</td><td>0.8616</td><td>33.39</td><td>0.2981</td></tr><tr><td>Gharbi [14]</td><td>33.11</td><td>0.0486</td><td>34.45</td><td>0.0385</td><td>35.36</td><td>0.0318</td></tr><tr><td rowspan="2">Ours $\times 2$</td><td colspan="2">(8 - 2)</td><td colspan="2">(16 - 4)</td><td colspan="2">(32 - 8)</td></tr><tr><td>34.02</td><td>1.5025</td><td>35.30</td><td>1.4902</td><td>36.43</td><td>1.4748</td></tr><tr><td rowspan="2">Ours $\times 4$</td><td colspan="2">(32 - 2)</td><td colspan="2">(64 - 4)</td><td colspan="2">(128 - 8)</td></tr><tr><td>33.94</td><td>5.5586</td><td>35.22</td><td>5.6781</td><td>35.97</td><td>5.7436</td></tr><tr><td rowspan="2">Ours $\times 8$</td><td colspan="2">(128 - 2)</td><td colspan="2">(16 - 8)</td><td colspan="2">(32 - 16)</td></tr><tr><td>31.56</td><td>3.7228</td><td>32.60</td><td>4.2300</td><td>33.22</td><td>4.5045</td></tr></table>

Table 3: Comparison on the Gharbi dataset [14].

For the Gharbi dataset, we convert images to the sRGB space using the code provided by its authors [14]:

$$
s = \min \left( {1,\max \left( {0, l}\right) }\right), \tag{12}
$$

where $l$ denotes an image in the scene linear space and $s$ the image in the sRGB space.

### 5.1 Comparison with Denoising Methods

We compare our method to both state-of-the-art traditional denoising methods, including Sen et al. [45], Rousselle et al. [42], Kalantari et al. [23], and Bitterli et al. [3], and recent representative deep learning based methods, including KPCN [2] and Gharbi et al. [14]. Unlike the other methods, ours takes both an LRHS rendering and an HRLS rendering as input. Therefore, we compute the average spp for our input as $spp_{avg} = spp_{LRHS}/s^{2} + spp_{HRLS}$, where $s$ denotes the super resolution scale. For instance, in Table 2, "Ours $\times 2$" indicates that our method performs $\times 2$ super resolution, and (4 - 1) indicates that it takes a 4 spp LRHS and a 1 spp HRLS rendering as input, which is effectively 2 spp on average. We conducted the comparisons on both the Gharbi dataset and our BCR dataset.

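The average-spp bookkeeping above is simple but easy to get wrong; the following sketch makes it concrete (`avg_spp` is an illustrative helper, not code from the paper):

```python
def avg_spp(spp_lrhs: float, spp_hrls: float, s: int) -> float:
    """Average samples per output pixel for the hybrid input.

    The LRHS samples are amortized over s*s high-resolution pixels,
    while every HRLS sample is already at the output resolution.
    """
    return spp_lrhs / (s * s) + spp_hrls

# "Ours x2" with (4 - 1): 4 spp LRHS plus 1 spp HRLS is 4/4 + 1 = 2 spp.
print(avg_spp(4, 1, 2))   # -> 2.0
# "Ours x4" with (32 - 2): 32/16 + 2 = 4 spp.
print(avg_spp(32, 2, 4))  # -> 4.0
```

The same formula reproduces the (LRHS - HRLS) pairs listed in the table headers, e.g. (128 - 8) at scale 4 yields 16 spp on average.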
Table 2 compares our method to Bitterli et al. [3], KPCN [2], and Gharbi et al. [14]. We used the code and models shared by their authors in these experiments. For KPCN [2], we provide another version of the results produced by their neural network after fine-tuning on our BCR dataset. This experiment shows that our method, especially ours $\times 2$ and $\times 4$, outperforms the state-of-the-art methods by a large margin. Specifically, our $\times 4$ method wins by 2.91 dB on PSNR and 0.0039 on RelMSE when the spp is 2. When the spp is relatively high, our $\times 2$ method wins by 0.91 dB on PSNR and 0.0007 on RelMSE. Figure 7 shows several visual examples on the BCR dataset. Our results contain fewer artifacts than those of the other methods.


Figure 6: Error map visualization.
<table><tr><td>spp</td><td>4</td><td>8</td><td>16</td><td>32</td><td>64</td><td>128</td></tr><tr><td>Rousselle [42]</td><td/><td/><td/><td/><td/><td>13.3</td></tr><tr><td>Kalantari [24]</td><td/><td/><td/><td/><td/><td>10.4</td></tr><tr><td>Bitterli [3]</td><td/><td/><td/><td/><td/><td>21.9</td></tr><tr><td>KPCN [2]</td><td/><td/><td/><td/><td/><td>14.6</td></tr><tr><td>Sen [45]</td><td>281.2</td><td>638.1</td><td>1603.1</td><td>4847.8</td><td>-</td><td>-</td></tr><tr><td>Gharbi [14]</td><td>6.0</td><td>10.1</td><td>18.9</td><td>35.9</td><td>67.0</td><td>156.5</td></tr><tr><td>Ours $\times 2$</td><td/><td/><td/><td/><td/><td>0.362</td></tr><tr><td>Ours $\times 4$</td><td/><td/><td/><td/><td/><td>0.118</td></tr><tr><td>Ours $\times 8$</td><td/><td/><td/><td/><td/><td>0.052</td></tr></table>
Table 4: Comparison of the runtime cost (seconds) to denoise a $1024 \times 1024$ image. The data is from Gharbi [14]. If the runtime is constant across spp, we report it in the last column. Our $\times 2$, $\times 4$, and $\times 8$ methods are at least $17\times$, $51\times$, and $115\times$ faster than the state-of-the-art methods, respectively.

We were not able to compare to additional methods on our BCR dataset, as those methods work with other rendering engines or use very different input formats. We compare to these methods on the Gharbi dataset, as reported in Table 3. We obtained the results for the compared methods from Gharbi et al. [14]. For our results, we directly used our neural network trained on our BCR dataset without fine-tuning on the Gharbi training dataset. ${}^{4}$ As shown in Table 3, our method outperforms all the other methods in terms of PSNR.

However, the RelMSE of our results is higher than that of some existing methods, such as KPCN [2] and Gharbi [14]. We looked into the discrepancy between the results measured using PSNR and RelMSE and found that the RelMSE metric is heavily affected by a small number of pixels with abnormally large errors in our results. Figure 6 shows the RelMSE error map of one of our results with a much larger error than Gharbi: the errors concentrate in the region around the bright light, with 16 pixels having errors larger than $10^{6}$, which contribute most of the error of the whole image. After excluding these 16 pixels, our error is still larger than Gharbi's, but the difference is much smaller. We would like to point out that our method was trained on our BCR dataset only and was not fine-tuned on the Gharbi dataset, as its training set is not available. Moreover, BCR and the Gharbi dataset were rendered using different engines and thus contain different intermediate layers. To test on the Gharbi examples, we had to set the variance layer to a constant value, which compromises our results.

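The outlier sensitivity described above is easy to reproduce with the usual relative-MSE definition: squared error normalized by the squared reference intensity plus a small constant. The paper does not restate the constant here, so $\epsilon = 0.01$ below is an assumption:

```python
import numpy as np

def relmse(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-2) -> float:
    # Per-pixel squared error normalized by the squared reference
    # intensity; eps guards the division in dark regions.
    return float(np.mean((pred - ref) ** 2 / (ref ** 2 + eps)))

ref = np.full((64, 64), 0.5)
clean = relmse(ref + 0.01, ref)   # small uniform error everywhere
spiked_img = ref + 0.01
spiked_img[0, :16] = 1e3          # 16 pixels with huge error (bright light)
spiked = relmse(spiked_img, ref)
# The 16 outliers dominate the metric for the whole 4096-pixel image.
print(clean < 1e-3, spiked > 1e3)
```

This is why excluding a handful of pixels changes RelMSE by orders of magnitude while PSNR, computed over clamped sRGB values, is barely affected.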
Figure 8 shows visual comparisons between our method and several existing methods. Although our RelMSE is higher than Gharbi [14], our results look more plausible, which is consistent with our higher PSNR values measured in the sRGB space. In the first example, the seat in our result contains far fewer artifacts. In the second example, the highlight in our result is more accurate than in the others. This shows that our model and the BCR dataset generalize well.

---
${}^{4}$ We removed one image from the Gharbi testing set as its source model is also included in our BCR training set.

---

Figure 8: Visual comparison on the Gharbi dataset [14].
<table><tr><td rowspan="2">Methods</td><td colspan="2">$\times 2$</td><td colspan="2">$\times 4$</td><td colspan="2">$\times 8$</td></tr><tr><td>PSNR</td><td>RelMSE</td><td>PSNR</td><td>RelMSE</td><td>PSNR</td><td>RelMSE</td></tr><tr><td>Bicubic</td><td>30.57</td><td>0.0141</td><td>25.39</td><td>0.0858</td><td>22.36</td><td>0.2473</td></tr><tr><td>EDSR [35]</td><td>32.01</td><td>0.0079</td><td>30.70</td><td>0.0119</td><td>27.97</td><td>0.0241</td></tr><tr><td>RCAN [54]</td><td>32.03</td><td>0.0084</td><td>30.73</td><td>0.0117</td><td>27.92</td><td>0.0253</td></tr><tr><td>Ours</td><td>38.40</td><td>0.0015</td><td>34.27</td><td>0.0039</td><td>31.08</td><td>0.0079</td></tr></table>
Table 5: Comparison with super resolution algorithms on the BCR dataset.
<table><tr><td rowspan="2">LRHS \ HRLS</td><td colspan="2">1spp</td><td colspan="2">2spp</td><td colspan="2">4spp</td></tr><tr><td>PSNR</td><td>RelMSE</td><td>PSNR</td><td>RelMSE</td><td>PSNR</td><td>RelMSE</td></tr><tr><td>2spp</td><td>32.14</td><td>0.0056</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>4spp</td><td>32.94</td><td>0.0048</td><td>33.76</td><td>0.0038</td><td>-</td><td>-</td></tr><tr><td>8spp</td><td>33.52</td><td>0.0042</td><td>34.41</td><td>0.0033</td><td>35.20</td><td>0.0027</td></tr><tr><td>16spp</td><td>33.94</td><td>0.0039</td><td>34.88</td><td>0.0030</td><td>35.71</td><td>0.0025</td></tr><tr><td>32spp</td><td>34.22</td><td>0.0037</td><td>35.21</td><td>0.0028</td><td>36.06</td><td>0.0023</td></tr><tr><td>64spp</td><td>34.42</td><td>0.0035</td><td>35.44</td><td>0.0027</td><td>36.31</td><td>0.0022</td></tr><tr><td>128spp</td><td>34.56</td><td>0.0035</td><td>35.60</td><td>0.0026</td><td>36.49</td><td>0.0021</td></tr></table>

Table 6: The effect of spp values on the final rendering results.
Speed. We report the speeds of the above methods in Table 4. We use the same setting as Gharbi [14] and obtain the timing data for the compared methods from them as well. For our method, we report the aggregated spp by combining the samples used to render both HRLS and LRHS. Since all the methods use the same spp, we only include the time needed for denoising. We report the time to process one $1024 \times 1024$ image on one Nvidia Xp GPU. Our $\times 2$, $\times 4$, and $\times 8$ methods are at least $17\times$, $51\times$, and $115\times$ faster than the state-of-the-art method Gharbi [14].

### 5.2 Comparisons with Super Resolution Methods
We also compare our method with several baseline methods that use super resolution to upsample the low-resolution-high-spp renderings to the target size. In this experiment, we used the trained models shared by the authors of these super resolution methods [35, 54] and fine-tuned them on our BCR dataset. As reported in Table 5, our method generates significantly better results than these super resolution methods. While this comparison is unfair to the baselines, it shows the benefit of taking an extra high-resolution-low-spp rendering as input. As shown in Figure 11, our results contain fine details that are missing from the super resolution results.

### 5.3 Ablation Study

We now examine several key components of our method.

Input layers of $I_{LRHS}$ and $I_{HRLS}$. We examine how the rendering layers affect the final results. In this experiment, we use 1 spp for $I_{HRLS}$ and 4000 spp for $I_{LRHS}$, and the upsampling scale is set to $\times 4$. Our neural network contains two input branches, one for $I_{LRHS}$ and the other for $I_{HRLS}$. We fix the input layers of one branch to RGB while changing the input layers of the other. For the model marked "None", we remove that branch entirely. As shown in Figure 9, compared with no input, $I_{HRLS}$ greatly improves the results. Among the various input layers, RGB improves the results by a large margin, and the result can be further improved if $I_{HRLS}$ takes all rendering layers. We believe these improvements come from the high frequency information in $I_{HRLS}$. For $I_{LRHS}$, while all the input layers still help, RGB alone achieves the best result. We conjecture that since $I_{LRHS}$ is rendered with a high spp, its RGB layer is already of very high quality and the other intermediate layers do not further contribute. On the other hand, the intermediate layers of $I_{HRLS}$ provide useful information for denoising, which is consistent with the findings of previous denoising methods [2, 14].


Figure 10: Comparison between ${\ell }_{r}$ and ${\ell }_{1}$ .
Robust loss $\ell_{r}$. We examine the effect of the parameter $\beta$ in our robust loss and also compare it to the standard $\ell_{1}$ loss. In this experiment, we use 4000 spp for $I_{LRHS}$ and 1 spp for $I_{HRLS}$. The upsampling scale is set to $\times 4$, and the input channels of $I_{LRHS}$ and $I_{HRLS}$ are set to RGB. Figure 10 shows that the robust loss $\ell_{r}$ with $\beta = 0.1$ outperforms $\ell_{1}$ by a large margin, as it avoids the bias towards a very small number of pixels with extremely large values. We also find that too large or too small a $\beta$ harms the results: a very large $\beta$ reduces the robust loss to the $\ell_{1}$ loss, while a very small $\beta$ makes the loss always close to 1 regardless of the error between the output and the ground truth.

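The limiting behavior described above (large $\beta$ recovers a scaled $\ell_1$; small $\beta$ saturates near 1) is captured by a bounded ratio loss of the form $|e|/(|e| + \beta)$. The paper's exact definition of $\ell_r$ appears earlier in the manuscript, so treat this as an illustrative stand-in with the same limits, not the authors' formula:

```python
import numpy as np

def robust_loss(err: np.ndarray, beta: float) -> np.ndarray:
    # Bounded per-pixel loss: behaves like |err| / beta (a scaled L1)
    # when beta is large, and saturates near 1 once |err| >> beta.
    e = np.abs(err)
    return e / (e + beta)

errors = np.array([0.01, 1.0, 1e6])        # includes a firefly-like outlier
print(robust_loss(errors, 0.1))            # outlier contributes ~1, not ~1e6
print(robust_loss(errors, 1e4)[:2] * 1e4)  # scaled-L1 regime for small errors
```

With $\beta = 0.1$ the $10^6$ outlier is capped near 1, so it no longer dominates the gradient; with a huge $\beta$ the loss is just $\ell_1$ divided by $\beta$.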
SPP of $I_{LRHS}$ and $I_{HRLS}$. We examine how our method works with different spp values used to render $I_{LRHS}$ and $I_{HRLS}$. In this experiment, we set the upsampling scale to $\times 4$. Table 6 shows that rendering at higher spp values consistently leads to better final results.

## 6 CONCLUSION
This paper presented a hybrid rendering method to speed up Monte Carlo rendering algorithms. We designed a two-encoder-one-decoder network for this task. Our network takes a low resolution image with a high spp and a high resolution image with a low spp as input, and estimates a high resolution, high quality image. We also built a large-scale ray-tracing dataset, the Blender Cycles Ray-tracing (BCR) dataset. Our experiments showed that our method is able to generate high quality, high resolution images quickly, and that both the HRLS input and the robust loss help generate high quality results.

## REFERENCES
[1] N. Ahn, B. Kang, and K.-A. Sohn. Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the European Conference on Computer Vision, pp. 252-268, 2018.

[2] S. Bako, T. Vogels, B. McWilliams, M. Meyer, J. Novák, A. Harvill, P. Sen, T. Derose, and F. Rousselle. Kernel-predicting convolutional networks for denoising monte carlo renderings. ACM Transactions on Graphics (TOG), 36(4):97, 2017.

[3] B. Bitterli, F. Rousselle, B. Moon, J. A. Iglesias-Guitián, D. Adler, K. Mitchell, W. Jarosz, and J. Novák. Nonlinearly weighted first-order regression for denoising monte carlo renderings. In Computer Graphics Forum, vol. 35, pp. 107-117. Wiley Online Library, 2016.

[4] Blender. Blender color linear to srgb. https://github.com/blender/blender/blob/6c9178b183f5267e@7a6c55497b6d496e468a709/intern/cycles/util/util_color.h#L77. Accessed: 2021-04-01.

[5] Blender. Blender color management. https://docs.blender.org/manual/en/dev/render/color_management.html, 2020. Accessed: 2020-03-05.

[6] Blender. Blender passes. https://docs.blender.org/manual/en/latest/render/layers/passes.html, 2020. Accessed: 2020-03-06.

[7] Blender. Cycles design goals. https://wiki.blender.org/wiki/Source/Render/Cycles/DesignGoals, 2020. Accessed: 2021-04-01.

[8] M. R. Bolin and G. W. Meyer. A perceptually based adaptive sampling algorithm. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pp. 299-309. ACM, 1998.

[9] C. R. A. Chaitanya, A. S. Kaplanyan, C. Schied, M. Salvi, A. Lefohn, D. Nowrouzezahrai, and T. Aila. Interactive reconstruction of monte carlo image sequences using a recurrent denoising autoencoder. ACM Transactions on Graphics (TOG), 36(4):98, 2017.

[10] R. L. Cook, T. Porter, and L. Carpenter. Distributed ray tracing. In ACM SIGGRAPH computer graphics, vol. 18, pp. 137-145, 1984.

[11] H. Dammertz, D. Sewtz, J. Hanika, and H. Lensch. Edge-avoiding à-trous wavelet transform for fast global illumination filtering. In Proceedings of the Conference on High Performance Graphics, pp. 67-75. Eurographics Association, 2010.

[12] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In European conference on computer vision, pp. 184-199. Springer, 2014.

[13] K. Egan, Y.-T. Tseng, N. Holzschuch, F. Durand, and R. Ramamoorthi. Frequency analysis and sheared reconstruction for rendering motion blur. In ACM Transactions on Graphics, vol. 28, p. 93, 2009.

[14] M. Gharbi, T.-M. Li, M. Aittala, J. Lehtinen, and F. Durand. Sample-based monte carlo denoising using a kernel-splatting network. ACM Transactions on Graphics (TOG), 38(4):1-12, 2019.

[15] M. Haris, G. Shakhnarovich, and N. Ukita. Recurrent back-projection network for video super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3897-3906, 2019.

[16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.

[17] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7132-7141, 2018.

[18] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700-4708, 2017.

[19] Z. Hui, X. Wang, and X. Gao. Fast and accurate single image super-resolution via information distillation network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 723-731, 2018.

[20] H. W. Jensen. Realistic image synthesis using photon mapping. AK Peters/CRC Press, 2001.

[21] H. W. Jensen and N. J. Christensen. Optimizing path tracing using noise reduction filters. 1995.

[22] J. T. Kajiya. The rendering equation. In ACM SIGGRAPH computer graphics, vol. 20, pp. 143-150. ACM, 1986.

[23] N. K. Kalantari, S. Bako, and P. Sen. A machine learning approach for filtering monte carlo noise. ACM Trans. Graph., 34(4):122-1, 2015.

[24] N. K. Kalantari and P. Sen. Removing the noise in monte carlo rendering with general image denoising algorithms. In Computer Graphics Forum, vol. 32, pp. 93-102. Wiley Online Library, 2013.

[25] S. Kallweit, T. Müller, B. Mcwilliams, M. Gross, and J. Novák. Deep scattering: Rendering atmospheric clouds with radiance-predicting neural networks. ACM Transactions on Graphics, 36(6):1-11, 2017.

[26] A. Keller, L. Fascione, M. Fajardo, I. Georgiev, P. H. Christensen, J. Hanika, C. Eisenacher, and G. Nichols. The path tracing revolution in the movie industry. In SIGGRAPH Courses, pp. 24-1, 2015.

[27] J. Kim, J. Kwon Lee, and K. Mu Lee. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1637-1645, 2016.

[28] M. Koskela, K. Immonen, M. Mäkitalo, A. Foi, T. Viitanen, P. Jääskeläinen, H. Kultala, and J. Takala. Blockwise multi-order feature regression for real-time path-tracing reconstruction. ACM Transactions on Graphics (TOG), 38(5):138, 2019.

[29] A. Kuznetsov, N. K. Kalantari, and R. Ramamoorthi. Deep adaptive sampling for low sample count rendering. In Computer Graphics Forum, vol. 37, pp. 35-44, 2018.

[30] S. Laine, H. Saransaari, J. Kontkanen, J. Lehtinen, and T. Aila. Incremental instant radiosity for real-time indirect illumination. In Proceedings of the 18th Eurographics conference on Rendering Techniques, pp. 277-286. Eurographics Association, 2007.

[31] M. E. Lee and R. A. Redner. A note on the use of nonlinear filtering in computer graphics. IEEE Computer Graphics and Applications, 10(3):23-29, 1990.

[32] T. Leimkühler, H.-P. Seidel, and T. Ritschel. Laplacian kernel splatting for efficient depth-of-field and motion blur synthesis or reconstruction. ACM Transactions on Graphics, 37(4), 2018.

[33] T.-M. Li, Y.-T. Wu, and Y.-Y. Chuang. Sure-based optimization for adaptive sampling and reconstruction. ACM Transactions on Graphics (TOG), 31(6):194, 2012.

[34] Y. Li, V. Tsiminaki, R. Timofte, M. Pollefeys, and L. V. Gool. 3d appearance super-resolution with deep learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9671-9680, 2019.

[35] B. Lim, S. Son, H. Kim, S. Nah, and K. Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 136-144, 2017.

[36] Z.-S. Liu, L.-W. Wang, C.-T. Li, and W.-C. Siu. Hierarchical back projection network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0-0, 2019.

[37] M. D. McCool. Anisotropic diffusion for monte carlo noise reduction. ACM Transactions on Graphics (TOG), 18(2):171-194, 1999.

[38] S. U. Mehta, J. Yao, R. Ramamoorthi, and F. Durand. Factored axis-aligned filtering for rendering multiple distribution effects. ACM Transactions on Graphics (TOG), 33(4):57, 2014.

[39] M. Meyer and J. Anderson. Statistical acceleration for animated global illumination. In ACM Transactions on Graphics (TOG), vol. 25, pp. 1075-1080. ACM, 2006.

[40] B. Moon, S. McDonagh, K. Mitchell, and M. Gross. Adaptive polynomial rendering. ACM Transactions on Graphics (TOG), 35(4):40, 2016.

[41] R. S. Overbeck, C. Donner, and R. Ramamoorthi. Adaptive wavelet rendering. ACM Trans. Graph., 28(5):140, 2009.

[42] F. Rousselle, C. Knaus, and M. Zwicker. Adaptive sampling and reconstruction using greedy error minimization. In ACM Transactions on Graphics (TOG), vol. 30, p. 159. ACM, 2011.

[43] H. E. Rushmeier and G. J. Ward. Energy preserving non-linear filters. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pp. 131-138. ACM, 1994.

[44] B. Segovia, J. C. Iehl, R. Mitanchey, and B. Péroche. Non-interleaved deferred shading of interleaved sample patterns. In Graphics Hardware, pp. 53-60, 2006.

[45] P. Sen and S. Darabi. On filtering the noise from the random parameters in monte carlo rendering. ACM Trans. Graph., 31(3):18-1, 2012.

[46] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1874-1883, 2016.

[47] B. Walter, A. Arbree, K. Bala, and D. P. Greenberg. Multidimensional lightcuts. ACM Transactions on graphics, 25(3):1081-1088, 2006.

[48] G. J. Ward, F. M. Rubinstein, and R. D. Clear. A ray tracing solution for diffuse interreflection. ACM SIGGRAPH Computer Graphics, 22(4):85-92, 1988.

[49] L. Wu, L.-Q. Yan, A. Kuznetsov, and R. Ramamoorthi. Multiple axis-aligned filters for rendering of combined distribution effects. In Computer Graphics Forum, vol. 36, pp. 155-166, 2017.

[50] R. Xu and S. N. Pattanaik. A novel monte carlo noise reduction operator. IEEE Computer Graphics and Applications, 25(2):31-35, 2005.

[51] X. Xu, Y. Ma, and W. Sun. Towards real scene super-resolution with raw images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1723-1731, 2019.

[52] L.-Q. Yan, S. U. Mehta, R. Ramamoorthi, and F. Durand. Fast 4d sheared filtering for interactive rendering of distribution effects. ACM Transactions on Graphics (TOG), 35(1):7, 2015.

[53] K. Zhang, W. Zuo, and L. Zhang. Learning a single convolutional super-resolution network for multiple degradations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3262-3271, 2018.

[54] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 286-301, 2018.

[55] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2472-2481, 2018.

[56] Z. Zhang, Z. Wang, Z. Lin, and H. Qi. Image super-resolution by neural texture transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7982-7991, 2019.

[57] H. Zimmer, F. Rousselle, W. Jakob, O. Wang, D. Adler, W. Jarosz, O. Sorkine-Hornung, and A. Sorkine-Hornung. Path-space motion estimation and decomposition for robust animation filtering. In Computer Graphics Forum, vol. 34, pp. 131-142, 2015.

[58] M. Zwicker, W. Jarosz, J. Lehtinen, B. Moon, R. Ramamoorthi, F. Rousselle, P. Sen, C. Soler, and S.-E. Yoon. Recent advances in adaptive sampling and reconstruction for monte carlo rendering. In Computer Graphics Forum, vol. 34, pp. 667-681, 2015.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/m4WytW0txaS/Initial_manuscript_tex/Initial_manuscript.tex
§ FAST MONTE CARLO RENDERING VIA MULTI-RESOLUTION SAMPLING
Category: Research
§ ABSTRACT
Monte Carlo rendering algorithms are widely used to produce photo-realistic computer graphics images. However, these algorithms need to sample a substantial number of rays per pixel to enable proper global illumination and thus require an immense amount of computation. In this paper, we present a hybrid rendering method to speed up Monte Carlo rendering algorithms. Our method first generates two versions of a rendering: one at a low resolution with a high sample rate (LRHS) and the other at a high resolution with a low sample rate (HRLS). We then develop a deep convolutional neural network to fuse these two renderings into a high-quality image as if it were rendered at a high resolution with a high sample rate. Specifically, we formulate this fusion task as a super resolution problem that generates a high resolution rendering from a low resolution input (LRHS), assisted with the HRLS rendering. The HRLS rendering provides critical high frequency details which are difficult to recover from the LRHS for any super resolution method. Our experiments show that our hybrid rendering algorithm is significantly faster than the state-of-the-art Monte Carlo denoising methods while rendering high-quality images when tested on both our own BCR dataset and the Gharbi dataset [14].

Index Terms: Computing methodologies-Computer graphics-Ray tracing
§ 1 INTRODUCTION
Physically-based image synthesis has attracted considerable attention due to its wide applications in visual effects, video games, design visualization, and simulation [26]. Among these, ray tracing methods have achieved remarkable success as the most practical realistic image synthesis algorithms. For each pixel, they cast numerous rays that bounce through the environment to collect photons from light sources and integrate them to compute the color of that pixel. In this way, ray tracing methods are able to generate images with a very high degree of visual realism. However, obtaining visually satisfactory renderings with ray tracing algorithms often requires casting a large number of rays and thus takes a vast amount of computation. The extensive computational and memory requirements of ray tracing pose a challenge, especially on resource-constrained platforms, and impede applications that require high resolutions and refresh rates.

To speed up ray tracing, Monte Carlo rendering algorithms are used to reduce the ray samples per pixel (spp) that a ray tracing method needs to cast [10]. For instance, adaptive reconstruction methods control sampling densities according to reconstruction error estimates from existing ray samples [58]. However, when the ray sample rate is not sufficiently high, the rendering results from a Monte Carlo algorithm are often noisy. Therefore, the ray tracing results are usually post-processed to reduce the noise using algorithms like bilateral filtering and guided image filtering [28, 38, 43, 45, 49, 52, 57]. Recently, deep learning-based denoising approaches have been developed to reduce the noise from Monte Carlo rendering algorithms [2, 9, 23, 29]. These methods achieve high-quality results with impressive time reduction, and some of them have been incorporated into commercial tools, such as VRay Renderer, Corona Renderer, and RenderMan, and open source renderers like Blender. However, real-time ray tracing is still a challenging problem, especially on devices with limited computing resources.

Our idea to speed up ray tracing is to reduce the number of pixels for which we need to estimate color values. For instance, upsampling by $2 \times 2$ removes the need to estimate colors by ray tracing at 75% of the pixels. There are two main challenges in super-resolving a Monte Carlo rendering. First, it is a fundamentally ill-posed problem to recover the high-frequency visual details that are missing from the low-resolution input. Second, a Monte Carlo rendering is subject to sampling noise, especially when it is produced at a low spp rate, and upsampling a noisy image will often amplify the noise. To address these challenges, we propose to generate two versions of a rendering: a low-resolution rendering at a reasonably high spp rate (LRHS) and a high-resolution rendering at a lower spp rate (HRLS). The LRHS is less noisy, while the noisier HRLS can provide high-frequency visual details that are inherently difficult to recover from the low resolution image.

We accordingly develop a hybrid rendering method dedicated for images rendered by a Monte Carlo rendering algorithm. Our neural network takes both LRHS and HRLS renderings as input. We use a de-shuffle layer to downsample the HRLS rendering to make it the same size as LRHS and to reduce the computational cost. Then we concatenate the features from both LRHS and HRLS and feed them to the rest of the network to generate the high-quality high resolution rendering. Our experiments show that given the hybrid input, our method outperforms the state-of-the-art Monte-Carlo rendering algorithms significantly.
As there is no large-scale ray-tracing dataset available to train our network, we collected the first large-scale Blender Cycles Ray-tracing (BCR) dataset, which contains 2449 high-quality images rendered from 1463 models. The dataset covers various factors that affect the Monte Carlo noise distribution, such as depth of field, motion blur, and reflections. We render the images at a range of spp rates, including 1-8, 12, 16, 32, 64, 128, 250, 1000, and 4000 spp. All the images are rendered at 1080p resolution. Each image contains not only the final rendered result but also the intermediate render layers, including albedo, normal, diffuse, glossy, and so on.
This paper contributes to the research on photo-realistic image synthesis by integrating Monte Carlo rendering and image super resolution for efficient high-quality image rendering. First, we explore super resolution to reduce the number of pixels that need ray tracing. Second, we use multi-resolution sampling to both reduce noise and recover visual details. Third, we develop the first large ray-tracing image dataset, which will be made publicly available.
§ 2 RELATED WORK
Monte Carlo rendering is an important technology for photo-realistic rendering. It aims to reduce the number of rays that a ray tracing algorithm needs to cast and integrate while synthesizing a high-quality image [10, 22]. Conventional Monte Carlo rendering algorithms investigate various ways to adaptively distribute ray samples [8, 13, 20, 33, 39-42, 47, 48]. When only a small number of rays are cast, the rendered images are often noisy. They are typically filtered using various algorithms [11, 21, 30, 31, 37, 43-45, 50]. Due to the space limit, we refer readers to a recent survey on Monte Carlo rendering [58].
Our research is more related to the recent deep learning approaches to Monte Carlo rendering denoising. Kalantari et al. trained a multilayer perceptron neural network to learn the parameters of filters before applying these filters to the noisy images [23]. Bako et al. extended this method by employing filters with spatially
Figure 1: This paper presents a hybrid rendering method to speed up Monte Carlo rendering. Our method takes a low-resolution, high-sample-rate rendering (LRHS) and a high-resolution, low-sample-rate rendering (HRLS) as input, and produces a high-resolution, high-quality result.
adaptive kernels to denoise Monte Carlo renderings [2]. They developed a convolutional neural network method to estimate spatially adaptive filter kernels. Chaitanya et al. developed an encoder-decoder network with recurrent connections to denoise a Monte Carlo image sequence [9]. Recently, Kuznetsov et al. [29] developed a deep convolutional neural network approach that combines adaptive sampling and image denoising to optimize the rendering performance. Different from the above methods, Gharbi et al. argued that splatting samples to relevant pixels is more effective than gathering relevant samples for each pixel for denoising. Accordingly they developed a novel kernel-splatting architecture that estimates the splatting kernel for each sample, which was shown particularly effective when only a small number of samples were used [14]. Compared to these methods, our method improves the speed of Monte Carlo rendering by reducing the number of pixels that we need to cast rays for.
Our work also builds upon the success of deep image super resolution methods [1, 12, 15, 19, 27, 34-36, 51, 53, 56]. Dong et al. developed the first deep learning approach to image super resolution [12]. They designed a three-layer fully convolutional neural network and showed that a neural network could be trained end to end for super resolution. Since then, a variety of neural network architectures, such as residual networks [16], densely connected networks [18], and squeeze-and-excitation networks [17], have been introduced to the task of image super resolution. For instance, Kim et al. developed a deep neural network that employs residual architectures and obtained promising results [27]. Lim et al. further improved super resolution results by removing batch norm layers and increasing the depth of networks [35]. Zhang et al. developed a residual densely connected network that is able to explore intermediate features via local dense connections for better image super resolution [55]. Zhang et al. recently reported that a channel-wise attention network, which learns attention as guidance to model channel-wise features, can more effectively super resolve a low-resolution image [54]. While these image super resolution methods achieved promising results, recovering visual details that do not exist in the input image is necessarily an ill-posed problem. Our method addresses this fundamentally challenging problem by leveraging a high-resolution image rendered at a low ray sample rate. Such an auxiliary rendering can be produced quickly and yet provides visual details that do not exist in the low-resolution input rendered at a high sample rate.
§ 3 THE BLENDER CYCLES RAY-TRACING DATASET
To the best of our knowledge, there is no large-scale ray-tracing dataset publicly available for training a deep neural network. Therefore, we develop the Blender Cycles Ray-tracing (BCR) dataset, which consists of a large number of high-quality scenes together with the ray-tracing images and the intermediate render layers. We will share BCR with our community.
§ 3.1 SOURCE SCENES
Blender's Cycles is a popular ray tracing engine that is capable of high-quality production rendering. It has an open and active community where thousands of artists share their work. Using the Blender community assets, we collected over 8000 scenes under Creative Commons licenses ${}^{123}$, which allow us to share our dataset with the research community. We rendered these scenes at 4000 spp and manually checked the rendered images and all the render layers. We eliminated scenes with missing materials, a lack of high-frequency information, or noticeable rendering noise even at 4000 spp. This culling process reduced the total number of source scenes to 1465. The remaining scenes produced 2449 images by rendering from 1 to 10 viewpoints per scene. We split the dataset into 3 subsets: 2126 images from 1283 scenes as the training set, 193 images from 76 scenes as the validation set, and 130 images from 104 scenes as the test set. No scene overlaps between the subsets. As shown in Figure 2, our dataset covers various optical phenomena, such as motion blur, depth of field, and complex light transport effects. It also covers a variety of scene contents, including indoor scenes, buildings, landscapes, fruits, plants, vehicles, animals, glass, and so on.
§ 3.2 RENDERING SETTINGS
To generate the high-quality "ground-truth" renderings, we rendered each scene at 4000 spp. As described previously, the rendered images of some scenes still contained noticeable noise even at 4000 spp, and we removed them through manual inspection. On average, it took around 20 minutes to render an image on an Nvidia Titan X Pascal GPU. We set the rendering resolution to ${1920} \times {1080}$ or ${1080} \times {1080}$ to cover most of the scene content. For each image, we provide both the final rendered image and the render layers, which are essential for Monte Carlo rendering [2, 3, 9, 23, 25, 29, 32, 33]. In total, each image has 33 render layers, including albedo, normals, depth, diffuse color, diffuse direct, diffuse indirect, glossy color, and so on. All images in the BCR dataset can be reproduced from the render layers as follows [6]:
$$
{I}_{HR} = {I}_{\text{Diff}} + {I}_{\text{Gloss}} + {I}_{\text{Sub}} + {I}_{\text{Trans}} + {I}_{\text{Env}} + {I}_{\text{Emit}}, \tag{1}
$$
where the diffuse, glossy, subsurface, and transmission layers can be generated from their color, direct light, and indirect light layers:
$$
{I}_{\text{Diff}} = {I}_{\text{DiffCol}} * \left( {I}_{\text{DiffDir}} + {I}_{\text{DiffInd}} \right),
$$

$$
{I}_{\text{Gloss}} = {I}_{\text{GlossCol}} * \left( {I}_{\text{GlossDir}} + {I}_{\text{GlossInd}} \right), \tag{2}
$$

$$
{I}_{\text{Sub}} = {I}_{\text{SubCol}} * \left( {I}_{\text{SubDir}} + {I}_{\text{SubInd}} \right),
$$

$$
{I}_{\text{Trans}} = {I}_{\text{TransCol}} * \left( {I}_{\text{TransDir}} + {I}_{\text{TransInd}} \right).
$$
${}^{1}$ http://www.blendswap.com
${}^{2}$ https://blenderartists.org
${}^{3}$ https://gumroad.com/senad
Figure 2: Examples from our BCR dataset.
Figure 3: Pixel value distribution of our BCR dataset. The rendered images use the scene linear color space and the pixel value is represented in float32. We use the logarithmic scale for the $y$ axis. While most pixels are in the range of $\left\lbrack {0,1}\right\rbrack$ , the distribution has a long tail.
Besides rendering 4000-spp images as ground truth, we rendered each scene at 1-8, 12, 16, 32, 64, 128, 250, and 1000 spp as input for Monte Carlo rendering enhancement algorithms, including ours. The rendered images and the auxiliary results were saved in the scene linear color space, which closely corresponds to natural colors [5]. These images were rendered with a high dynamic range, with pixel values represented in float32. As shown in Figure 3, most of the pixel values are in the range of $\left\lbrack {0,1}\right\rbrack$; however, the distribution has a long tail. We also noticed that many of the very large values come from firefly rendering artifacts. Therefore, we removed these outliers by clipping at 100. An image in the scene linear space can be converted to the sRGB space for visualization in this paper as follows:
$$
s = \left\{ \begin{array}{ll} 0 & \text{if } l \leq 0, \\ {12.92} \times l & \text{if } 0 < l \leq {0.0031308}, \\ {1.055} \times {l}^{\frac{1}{2.4}} - {0.055} & \text{if } {0.0031308} < l < 1, \\ 1 & \text{if } l \geq 1, \end{array}\right. \tag{3}
$$
where $l$ and $s$ indicate the pixel value in the scene linear color space and the sRGB space, respectively [4].
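As a sanity check, the piecewise mapping of Equation 3 can be sketched in NumPy. This is a minimal illustration; the function name and the vectorized formulation are ours, not from the paper's code:

```python
import numpy as np

def linear_to_srgb(l):
    """Convert scene-linear values to sRGB per Equation 3 (output clipped to [0, 1])."""
    l = np.asarray(l, dtype=np.float32)
    # Linear segment for small values; gamma segment otherwise.
    # The inner clip avoids fractional powers of negative numbers.
    s = np.where(l <= 0.0031308,
                 12.92 * l,
                 1.055 * np.clip(l, 1e-8, None) ** (1 / 2.4) - 0.055)
    # The outer clip implements the s = 0 (l <= 0) and s = 1 (l >= 1) cases.
    return np.clip(s, 0.0, 1.0)
```

Clipping handles the two boundary cases of the piecewise definition, so the long-tailed HDR values above 1 simply saturate to white in the visualization.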
§ 3.3 LOW RESOLUTION IMAGE GENERATION
A straightforward way to generate low-resolution images is to change the output resolution in Cycles. However, directly rendering a low-resolution image does not always work [7]. For example, some scenes are modeled using subdivision technology, and changing the rendering resolution disrupts the inherent relationship between the material and geometry settings in the scene files, causing a mismatch between images rendered at different resolutions. Therefore, we generate low-resolution images by downsampling the corresponding high-resolution rendered images via nearest-neighbour degradation. We did not use bilinear or bicubic sampling because nearest-neighbour degradation more accurately simulates a real-world rendering engine: rays for low-resolution renderings are sampled on a sparser grid than for high-resolution ones.
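The nearest-neighbour degradation above amounts to keeping one sample per $s \times s$ block of the high-resolution image. A minimal sketch (the choice of the top-left sample in each block is our assumption):

```python
import numpy as np

def nearest_downsample(img, scale):
    """Nearest-neighbour degradation: keep one pixel per scale x scale block,
    mimicking rays cast on a sparser grid. img is an (H, W) or (H, W, C) array."""
    return img[::scale, ::scale]
```

Unlike bilinear or bicubic filtering, no pixels are averaged, so the result matches what a renderer would produce by casting rays only at the sparse grid positions.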
§ 3.4 MONTE CARLO RENDERING DATASET COMPARISON
| Dataset | Images | Scenes | SPP | Layers |
| --- | --- | --- | --- | --- |
| Kalantari [24] | 500 | 20 | 4, 8, 16, 32, 64, 32000 | 5 |
| KPCN [2] | 600 | - | 32, 128, 1024 | 6 |
| Chaitanya [9] | - | 3 | 1, 4, 8, 16, 32, 256, 2000 | 3 |
| Kuznetsov [29] | 700 | 50 | 1, 2, 3, 4, 1024 | 4 |
| BCR dataset | 2449 | 1463 | 1-8, 12, 16, 32, 64, 128, 250, 1000, 4000 | 33 |

Table 1: Monte Carlo rendering dataset comparison.
We compare our dataset with those used in recent deep learning-based Monte Carlo rendering denoising algorithms [2, 9, 23, 29]. As reported in Table 1, our dataset has over $3 \times$ as many images and over ${25} \times$ as many scenes as the other datasets. Moreover, all these existing datasets are private, while we will make our dataset public.
§ 4 METHOD
Our method takes a low-resolution-high-spp image ${I}_{LRHS}$ and its corresponding high-resolution-low-spp image ${I}_{HRLS}$ as input and aims to estimate the corresponding HR image ${I}_{SR}$. ${I}_{LRHS}$ contains only the RGB channels, while ${I}_{HRLS}$ consists of the RGB channels and extra layers, including the albedo, normal, diffuse, specular, and variance layers, as these extra layers can provide high-frequency visual details.
As shown in Figure 4, we design a two-encoder-one-decoder network to estimate the HR image. Given ${I}_{LRHS}$ and ${I}_{HRLS}$, our network first extracts the features ${F}_{LRHS}$ and ${F}_{HRLS}$, respectively. We leverage a downscale module with de-shuffle layers [46] instead of pooling layers to downscale the feature maps, as de-shuffle layers keep the high-frequency information. Compared with upscaling the LRHS features, downsampling the HRLS features to the same size as ${F}_{LRHS}$ reduces the computational complexity of the network significantly. It also enables the features to fuse in earlier layers of the network. We obtain the fused feature ${F}_{0}$ by combining ${F}_{HRLS}$ with ${F}_{LRHS}$ through a fusion module and feed it to a sequence of residual dense groups (RDGs) [54, 55]. We then combine the feature ${F}_{G}$ from the RDGs with ${F}_{LRHS}$ by element-wise addition. Finally, we upscale the resulting dense feature ${F}_{DF}$ and predict the final HR image ${I}_{SR}$ through a convolutional layer. Below we describe the network in detail.
Figure 4: The architecture of our network. Our network takes a low-resolution-high-spp rendering (LRHS) and its corresponding high-resolution-low-spp rendering (HRLS) as input and predicts the final high-resolution, high-quality image.
Figure 5: Deshuffle layer for downscaling feature maps. A $C \times \left( {\alpha H}\right) \times \left( {\alpha W}\right)$ feature map ${F}_{HRLS}^{d - 1}$ is rearranged into an $\left( {{\alpha }^{2}C}\right) \times H \times W$ map ${F}_{HRLS}^{d}$.
LRHS shallow feature ${F}_{LRHS}$. Following [35, 54, 55], we adopt a convolutional layer to extract the shallow feature ${F}_{LRHS}$:
$$
{F}_{LRHS} = {H}_{lrhs}\left( {I}_{LRHS}\right) , \tag{4}
$$
where ${H}_{lrhs}\left( \cdot \right)$ indicates the convolution operation.
HRLS shallow feature ${F}_{HRLS}$ . We first extract the shallow feature from ${I}_{HRLS}$ with a convolutional layer,
$$
{F}_{HRLS}^{0} = {H}_{hrls}\left( {I}_{HRLS}\right) . \tag{5}
$$
Inspired by ESPCN [46], we design a deshuffle layer to downscale the features. As shown in Figure 5, we downscale the feature map with a stride of $\alpha$. In our network, we set $\alpha = 2$. To downscale the feature map further, we stack deshuffle layers together. Supposing our network has $D$ deshuffle layers, we obtain the output ${F}_{HRLS}$:
$$
{F}_{HRLS} = {DSF}^{D}\left( {{DSF}^{D - 1}\left( {\cdots {DSF}^{1}\left( {F}_{HRLS}^{0}\right) \cdots }\right) }\right) , \tag{6}
$$
where ${DSF}\left( \cdot \right)$ indicates the operation of the deshuffle layer. By downscaling the auxiliary features, our network can work at the size of the LRHS image, which significantly reduces the computational complexity of the overall network.
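The de-shuffle operation is a lossless rearrangement of spatial blocks into channels. The following NumPy sketch illustrates the $C \times \left( {\alpha H}\right) \times \left( {\alpha W}\right) \rightarrow \left( {{\alpha }^{2}C}\right) \times H \times W$ mapping; the paper's PyTorch layer and its exact channel ordering may differ:

```python
import numpy as np

def deshuffle(x, a=2):
    """Pixel de-shuffle: rearrange a (C, a*H, a*W) feature map into (a*a*C, H, W).

    Each a x a spatial block is moved into the channel dimension, so no
    information is lost (this is the inverse of ESPCN's sub-pixel shuffle).
    """
    C, H, W = x.shape
    assert H % a == 0 and W % a == 0
    # Split each spatial axis into (block index, within-block offset).
    x = x.reshape(C, H // a, a, W // a, a)
    # Move the within-block offsets next to the channel axis, then flatten them in.
    return x.transpose(0, 2, 4, 1, 3).reshape(C * a * a, H // a, W // a)
```

In contrast to average pooling, every input value survives in some output channel, which is why the layer preserves the high-frequency content of the HRLS rendering.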
We concatenate ${F}_{LRHS}$ from LRHS image and ${F}_{HRLS}$ from HRLS into a combined feature map ${F}_{0}$ .
Residual densely connected block. We employ densely connected networks and residual groups to build the backbone of our neural network, as they have been shown effective for image super resolution [54, 55]. In our network, we use 4 convolutional layers in each residual densely connected block (RDB). By stacking $B = 5$ RDBs, we build a residual densely connected group (RDG) as follows,
$$
{F}_{g} = {RDB}_{B}\left( {{RDB}_{B - 1}\left( {\cdots {RDB}_{1}\left( {F}_{g - 1}\right) \cdots }\right) }\right) + {F}_{g - 1} \tag{7}
$$
We predict the dense feature ${F}_{DF}$ with $G = 3$ RDGs as follows,
$$
{F}_{DF} = {RDG}_{G}\left( {{RDG}_{G - 1}\left( {\cdots {RDG}_{1}\left( {F}_{0}\right) \cdots }\right) }\right) + {F}_{0} \tag{8}
$$
Upscale. In our network, we adopt the shuffle layer from ESPCN [46] to upscale the features and estimate the high-resolution prediction ${I}_{SR}$,
$$
{I}_{SR} = {H}_{Rec}\left( {{UP}\left( {F}_{DF}\right) }\right) , \tag{9}
$$
where ${UP}\left( \cdot \right)$ indicates the upscaling operation [46].
Loss function. The BCR dataset is in the scene linear color space. As shown in Figure 3, the pixel value distribution of the BCR dataset has a long tail. The ${\ell }_{1}$ loss cannot handle it well because it is biased toward the extremely large pixel values. To handle this problem, we adopt the following robust loss
$$
{\ell }_{r} = \frac{1}{N}\mathop{\sum }\limits_{{p \in {I}_{HR}}}\frac{\left| {I}_{HR}^{p} - {I}_{SR}^{p}\right| }{\beta + \left| {{I}_{HR}^{p} - {I}_{SR}^{p}}\right| }, \tag{10}
$$
where $\beta$ indicates the robust factor. For small differences, ${\ell }_{r}$ behaves similarly to ${\ell }_{1}$. For extremely large differences, ${\ell }_{r}$ approaches but always stays below 1. This prevents our network from being biased toward rare but extremely large pixel values. We set $\beta = {0.1}$ in our experiments.
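Equation 10 can be sketched as follows. This is a NumPy illustration of the per-pixel loss; the training code presumably uses the PyTorch equivalent:

```python
import numpy as np

def robust_loss(pred, gt, beta=0.1):
    """Robust loss of Equation 10: near-L1 for small residuals, but each
    pixel's contribution saturates below 1 for outliers such as fireflies.
    The result is averaged over all pixels."""
    d = np.abs(gt - pred)
    return np.mean(d / (beta + d))
```

Each term is bounded by 1, so a handful of firefly pixels with huge residuals cannot dominate the gradient the way they would under a plain L1 or L2 loss.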
Implementation details. We set the kernel size of all convolutional layers to $3 \times 3$, except for the fusion convolutional layer, whose kernel size is $1 \times 1$. Every convolutional layer is followed by a ReLU layer, except for the last convolutional layer. The shallow features, fusion features, and dense features have 64 channels. During each training iteration, we randomly select the spp of ${I}_{HRLS}$ from $\{1\text{-}8, 12, 16, 32\}$ and the spp of ${I}_{LRHS}$ from $\{2\text{-}8, 12, 16, 32, 64, 128, 250, 1000, 4000\}$ while making sure that the spp of ${I}_{HRLS}$ is smaller than that of ${I}_{LRHS}$.
We use PyTorch to implement our network. We use a mini-batch size of 16 and train the network for 500 epochs, which takes about 1 week on one Nvidia Titan Xp. We use the SGD optimizer with a learning rate of ${10}^{-4}$. We also perform data augmentation on-the-fly by randomly cropping patches. To save data loading time, we pre-crop the training HR images into ${300} \times {300}$ large patches. During training, we further crop smaller patches from those large patches. The final HR patch size is set to 96 for $\times 2$, 192 for $\times 4$, and 256 for $\times 8$. We select the model that works best on the validation set.
§ 5 EXPERIMENTS
We evaluate our method by comparing it with representative state-of-the-art denoising methods for Monte Carlo rendering and image super resolution algorithms. We also conduct ablation studies to further examine our method. We use two metrics to evaluate our results. First, we adopt RelMSE (Relative Mean Square Error) to report the results in the scene linear color space, which is defined as
$$
\operatorname{RelMSE} = {\lambda }_{1} * \frac{{\left( {I}_{SR} - {I}_{HR}\right) }^{2}}{{I}_{HR}^{2} + {\lambda }_{2}}, \tag{11}
$$
where ${\lambda }_{1} = {0.5}$ and ${\lambda }_{2} = {0.01}$ for experiments on our BCR dataset, following KPCN [2]. For the Gharbi dataset, we use the evaluation code from its authors [14], where ${\lambda }_{1} = 1$ and ${\lambda }_{2} = {10}^{-4}$.
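A sketch of the RelMSE metric of Equation 11, with the per-pixel values averaged over the image (the averaging convention and function name are our assumptions):

```python
import numpy as np

def rel_mse(sr, hr, lam1=0.5, lam2=0.01):
    """Relative MSE of Equation 11, averaged over all pixels.

    lam1=0.5, lam2=0.01 follow the KPCN setting used on the BCR dataset;
    the Gharbi evaluation uses lam1=1, lam2=1e-4.
    """
    return np.mean(lam1 * (sr - hr) ** 2 / (hr ** 2 + lam2))
```

Dividing by the squared reference brightness makes the metric scale-aware, so errors in dark regions weigh as much as errors in bright ones, while ${\lambda }_{2}$ stabilizes the denominator near black pixels.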
We also use PSNR to evaluate the results in the sRGB space. For our BCR dataset, we convert images to sRGB using Equation 3 before calculating PSNR.
| Method | 2 spp PSNR | 2 spp RelMSE | 4 spp PSNR | 4 spp RelMSE | 8 spp PSNR | 8 spp RelMSE |
| --- | --- | --- | --- | --- | --- | --- |
| Input | 18.12 | 0.2953 | 21.51 | 0.1400 | 24.75 | 0.0646 |
| KPCN [2] | 25.87 | 0.0390 | 27.31 | 0.0299 | 28.11 | 0.0276 |
| KPCN-ft [2] | 31.03 | 0.0078 | 33.69 | 0.0043 | 35.83 | 0.0026 |
| Bitterli [3] | 26.67 | 0.0293 | 27.22 | 0.0252 | 27.45 | 0.0226 |
| Gharbi [14] | 30.73 | 0.0068 | 31.61 | 0.0057 | 32.29 | 0.0050 |
| Ours $\times 2$ (4-1 / 8-2 / 16-4) | 33.27 | 0.0044 | 35.15 | 0.0027 | 36.74 | 0.0019 |
| Ours $\times 4$ (16-1 / 32-2 / 64-4) | 33.94 | 0.0039 | 35.21 | 0.0028 | 36.31 | 0.0022 |
| Ours $\times 8$ (64-1 / 128-2 / 250-4) | 31.37 | 0.0075 | 32.35 | 0.0057 | 33.14 | 0.0049 |

Table 2: Comparison on our BCR dataset. Ours $\times 2$ indicates that our method performs $\times 2$ super resolution; the (LRHS spp - HRLS spp) pairs give the inputs for the 2, 4, and 8 spp columns, e.g., (4-1) indicates that our method takes 4 spp LRHS and 1 spp HRLS as input, which is effectively 2 spp on average over all the pixels.
| Method | 4 spp PSNR | 4 spp RelMSE | 8 spp PSNR | 8 spp RelMSE | 16 spp PSNR | 16 spp RelMSE |
| --- | --- | --- | --- | --- | --- | --- |
| Input | 19.58 | 17.5358 | 21.91 | 7.5682 | 24.17 | 11.2189 |
| Sen [45] | 28.23 | 1.0484 | 28.00 | 0.5744 | 27.64 | 0.3396 |
| Rousselle [42] | 30.01 | 1.9407 | 32.32 | 1.9660 | 34.36 | 1.9446 |
| Kalantari [24] | 31.33 | 1.5573 | 33.00 | 1.6635 | 34.43 | 1.8021 |
| Bitterli [3] | 28.98 | 1.1024 | 30.92 | 0.9297 | 32.40 | 0.9640 |
| KPCN [2] | 29.75 | 1.0616 | 30.56 | 7.0774 | 31.00 | 20.2309 |
| KPCN-ft [2] | 29.86 | 0.5004 | 31.66 | 0.8616 | 33.39 | 0.2981 |
| Gharbi [14] | 33.11 | 0.0486 | 34.45 | 0.0385 | 35.36 | 0.0318 |
| Ours $\times 2$ (8-2 / 16-4 / 32-8) | 34.02 | 1.5025 | 35.30 | 1.4902 | 36.43 | 1.4748 |
| Ours $\times 4$ (32-2 / 64-4 / 128-8) | 33.94 | 5.5586 | 35.22 | 5.6781 | 35.97 | 5.7436 |
| Ours $\times 8$ (128-2 / 16-8 / 32-16) | 31.56 | 3.7228 | 32.60 | 4.2300 | 33.22 | 4.5045 |

Table 3: Comparison on the Gharbi dataset [14].
For the Gharbi dataset, we convert images to the sRGB space using the code provided by its authors [14],
$$
s = \min \left( {1,\max \left( {0,l}\right) }\right) , \tag{12}
$$
where $l$ indicates images in the scene linear space and $s$ indicates images in the sRGB space.
§ 5.1 COMPARISON WITH DENOISING METHODS
We compare our method to both state-of-the-art traditional denoising methods, including Sen et al. [45], Rousselle et al. [42], Kalantari et al. [23], and Bitterli et al. [3], and recent representative deep learning-based methods, including KPCN [2] and Gharbi et al. [14]. Unlike the other methods, our method takes both an LRHS rendering and an HRLS rendering as input. Therefore, we compute the average spp for our input as ${spp}_{avg} = {spp}_{LRHS}/{s}^{2} + {spp}_{HRLS}$, where $s$ indicates the super resolution scale. For instance, in Table 2, "Ours $\times 2$" indicates that our method performs $\times 2$ super resolution, and (4 - 1) indicates that our method takes 4 spp LRHS and 1 spp HRLS as input, which is effectively 2 spp on average. We conducted the comparisons on both the Gharbi dataset and our BCR dataset.
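The effective-spp bookkeeping above can be written as a one-liner (the function and parameter names are ours):

```python
def avg_spp(spp_lrhs, spp_hrls, scale):
    """Effective average spp of the hybrid input: LRHS samples are amortized
    over scale*scale HR pixels, while HRLS samples are paid at every HR pixel."""
    return spp_lrhs / scale ** 2 + spp_hrls
```

For example, the (4 - 1) configuration at $\times 2$ gives $4/4 + 1 = 2$ spp, matching the 2 spp column it is compared under in Table 2.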
Table 2 compares our method to Bitterli et al. [3], KPCN [2], and Gharbi et al. [14]. We used the code / models shared by their authors in these experiments. For KPCN [2], we provide another version of the results produced by their neural network fine-tuned on our BCR dataset. This experiment shows that our method, especially ours $\times 2$ and $\times 4$, outperforms the state-of-the-art methods by a large margin. Specifically, our $\times 4$ method wins by 2.91 dB in PSNR and 0.0039 in RelMSE at 2 spp. When the spp is relatively high, our $\times 2$ method wins by 0.91 dB in PSNR and 0.0007 in RelMSE. Figure 7 shows several visual examples on the BCR dataset. Our results contain fewer artifacts than those of the other methods.
Figure 6: Error map visualization.
| Method | 4 spp | 8 spp | 16 spp | 32 spp | 64 spp | 128 spp |
| --- | --- | --- | --- | --- | --- | --- |
| Rousselle [42] | – | – | – | – | – | 13.3 |
| Kalantari [24] | – | – | – | – | – | 10.4 |
| Bitterli [3] | – | – | – | – | – | 21.9 |
| KPCN [2] | – | – | – | – | – | 14.6 |
| Sen [45] | 281.2 | 638.1 | 1603.1 | 4847.8 | - | - |
| Gharbi [14] | 6.0 | 10.1 | 18.9 | 35.9 | 67.0 | 156.5 |
| Ours $\times 2$ | – | – | – | – | – | 0.362 |
| Ours $\times 4$ | – | – | – | – | – | 0.118 |
| Ours $\times 8$ | – | – | – | – | – | 0.052 |

Table 4: Comparison of runtime cost (seconds) to denoise a ${1024} \times {1024}$ image. The data is from Gharbi [14]. If the runtime is constant across spp, we report it in the last column. Our $\times 2$, $\times 4$, and $\times 8$ methods are at least ${17} \times$, ${51} \times$, and ${115} \times$ faster than the state-of-the-art methods, respectively.
We were not able to compare to additional methods on our BCR dataset, as these methods work with other rendering engines or use very different input formats. We compare to these methods on the Gharbi dataset instead, as reported in Table 3. We obtained the results of the comparing methods from Gharbi et al. [14]. For our results, we directly used our neural network trained on our BCR dataset without fine-tuning on the Gharbi training dataset. ${}^{4}$ As shown in Table 3, our method outperforms all the other methods in terms of PSNR.
However, the RelMSE of our results is higher than that of some existing methods, such as KPCN [2] and Gharbi [14]. We looked into the discrepancy between the results measured using PSNR and RelMSE and found that the RelMSE metric is heavily affected by a small number of pixels with abnormally large errors in our results. Figure 6 shows the RelMSE error map of one of our results with a much larger error than Gharbi: the errors concentrate in the region around the bright light, with 16 pixels having errors larger than ${10}^{6}$, which contribute most of the error of the whole image. After excluding these 16 pixels, our error is still larger than Gharbi's, but the difference is much smaller. We would like to point out that our method was trained on our BCR dataset only and was not fine-tuned on the Gharbi dataset, as its training set is not available. Moreover, BCR and the Gharbi dataset were rendered using different engines and thus contain different intermediate layers. To test on the Gharbi examples, we had to set the variance layer to a constant value, which compromises our results.
Figure 8 shows visual comparisons between our method and several existing methods. Although our RelMSE is higher than Gharbi [14], our results look more plausible, which is consistent with our higher PSNR values measured in the sRGB space. In the first example, the seat in our result contains far fewer artifacts. In the second example, the highlight in our result is more accurate than in the others. This shows that our model and the BCR dataset generalize well.
${}^{4}$ We removed one image from the Gharbi testing set as its source model is also included in our BCR training set.
Figure 7: Visual comparison on the BCR dataset.
Figure 8: Visual comparison on the Gharbi dataset [14].
| Methods | ×2 PSNR | ×2 RelMSE | ×4 PSNR | ×4 RelMSE | ×8 PSNR | ×8 RelMSE |
| --- | --- | --- | --- | --- | --- | --- |
| Bicubic | 30.57 | 0.0141 | 25.39 | 0.0858 | 22.36 | 0.2473 |
| EDSR [35] | 32.01 | 0.0079 | 30.70 | 0.0119 | 27.97 | 0.0241 |
| RCAN [54] | 32.03 | 0.0084 | 30.73 | 0.0117 | 27.92 | 0.0253 |
| Ours | 38.40 | 0.0015 | 34.27 | 0.0039 | 31.08 | 0.0079 |

Table 5: Comparison with super resolution algorithms on the BCR dataset.
| LRHS spp | HRLS 1spp PSNR | HRLS 1spp RelMSE | HRLS 2spp PSNR | HRLS 2spp RelMSE | HRLS 4spp PSNR | HRLS 4spp RelMSE |
| --- | --- | --- | --- | --- | --- | --- |
| 2spp | 32.14 | 0.0056 | - | - | - | - |
| 4spp | 32.94 | 0.0048 | 33.76 | 0.0038 | - | - |
| 8spp | 33.52 | 0.0042 | 34.41 | 0.0033 | 35.20 | 0.0027 |
| 16spp | 33.94 | 0.0039 | 34.88 | 0.0030 | 35.71 | 0.0025 |
| 32spp | 34.22 | 0.0037 | 35.21 | 0.0028 | 36.06 | 0.0023 |
| 64spp | 34.42 | 0.0035 | 35.44 | 0.0027 | 36.31 | 0.0022 |
| 128spp | 34.56 | 0.0035 | 35.60 | 0.0026 | 36.49 | 0.0021 |

Table 6: The effect of spp values on the final rendering results.
Speed. We report the speeds of the above methods in Table 4. We use the same setting as Gharbi [14] and obtain the timing data for the compared methods from them as well. For our method, we report the aggregated spp, combining the samples used to render both HRLS and LRHS. Since all the methods use the same spp, we only include the time needed for denoising. We report the time for processing one ${1024} \times {1024}$ image on one Nvidia Xp GPU. Our $\times 2$, $\times 4$, and $\times 8$ models are at least ${17} \times$, ${51} \times$, and ${115} \times$ faster, respectively, than the state-of-the-art method of Gharbi [14].
§ 5.2 COMPARISONS WITH SUPER RESOLUTION METHODS
We also compare our method with several baseline methods that use super resolution to upsample the low-resolution-high-spp renderings to the target size. In this experiment, we used the trained models shared by the authors of these super resolution methods [35, 54] and fine-tuned them on our BCR dataset. As reported in Table 5, our method generates significantly better results than these super resolution methods. While this comparison is unfair to the baselines, it demonstrates the benefit of taking an extra high-resolution-low-spp rendering as input. As shown in Figure 11, our results contain fine details that are missing from the super resolution results.
§ 5.3 ABLATION STUDY
We now examine several key components of our method.
Input layers of ${I}_{LRHS}$ and ${I}_{HRLS}$. We examine how the rendering layers affect the final results. In this experiment, we use 1 spp for ${I}_{HRLS}$ and 4000 spp for ${I}_{LRHS}$, with the upsampling scale set to $\times 4$. Our neural network contains two input branches, one for ${I}_{LRHS}$ and the other for ${I}_{HRLS}$. We fix the input layer of one branch to RGB while changing the input layers of the other; for the model labeled "None", we remove that branch entirely. As shown in Figure 9, compared with no input, ${I}_{HRLS}$ greatly improves the results. Among the various input layers, RGB improves the results by a large margin, and the result improves further when ${I}_{HRLS}$ takes all rendering layers. We believe these improvements come from the high-frequency information in ${I}_{HRLS}$. For ${I}_{LRHS}$, while all the input layers still help, RGB alone achieves the best result. We conjecture that since ${I}_{LRHS}$ is rendered with a high spp, its RGB layer is already of very high quality and the other intermediate layers do not contribute further. On the other hand, the intermediate layers of ${I}_{HRLS}$ provide useful information for denoising, which is consistent with the findings of previous denoising methods [2, 14].
Figure 9: The effect of input rendering layers.
Figure 10: Comparison between ${\ell }_{r}$ and ${\ell }_{1}$.
Robust loss ${\ell }_{r}$. We examine the effect of the parameter $\beta$ in our robust loss and compare it to the standard ${\ell }_{1}$ loss. In this experiment, we use 4000 spp for ${I}_{LRHS}$ and 1 spp for ${I}_{HRLS}$, with the upsampling scale set to $\times 4$ and the input channels of both ${I}_{LRHS}$ and ${I}_{HRLS}$ set to RGB. Figure 10 shows that the robust loss ${\ell }_{r}$ with $\beta = {0.1}$ outperforms ${\ell }_{1}$ by a large margin, as it avoids biasing the training towards a very small number of pixels with extremely large values. We also find that an overly large or small $\beta$ harms the results: a very large $\beta$ reduces the robust loss to the ${\ell }_{1}$ loss, while a very small $\beta$ makes the loss stay close to 1 regardless of the error between the output and the ground truth.
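The limiting behavior described above is exhibited by a saturating loss of the form $|e| / (|e| + \beta)$. We stress that this exact formula is an assumption for illustration; the paper's precise definition of ${\ell }_{r}$ may differ:

```python
import numpy as np

def robust_loss(pred, target, beta):
    # Candidate robust loss: |e| / (|e| + beta). For beta much larger than
    # the error it behaves like a scaled l1 loss; for beta much smaller it
    # saturates near 1 regardless of the error magnitude.
    e = np.abs(pred - target)
    return e / (e + beta)

errors = np.array([1e-3, 1.0, 1e6])    # ordinary pixels vs. a "fireball" pixel
print(robust_loss(errors, 0.0, 0.1))   # beta = 0.1 caps the huge error near 1
print(robust_loss(errors, 0.0, 1e6))   # very large beta: behaves like l1 / beta
```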
SPP of ${I}_{LRHS}$ and ${I}_{HRLS}$. We examine how our method works with different spp values used to render ${I}_{LRHS}$ and ${I}_{HRLS}$, with the upsampling scale set to $\times 4$. Table 6 shows that rendering at higher spp values consistently leads to better final results.
§ 6 CONCLUSION
This paper presented a hybrid rendering method to speed up Monte Carlo rendering algorithms. We designed a two-encoder-one-decoder network for this task. Our network takes a low-resolution image with a high spp and a high-resolution image with a low spp as inputs, and estimates the high-resolution, high-quality image. We also built a large-scale ray-tracing dataset, the Blender Cycles Ray-tracing (BCR) dataset. Our experiments showed that our method generates high-quality, high-resolution images quickly, and that both the HRLS input and the robust loss contribute to the quality of the results.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/n2FDcSYcGkR/Initial_manuscript_md/Initial_manuscript.md
# Perspective Charts
Category: Research
## Abstract
We introduce three novel data visualizations, called perspective charts, based on the concept of size constancy in linear perspective projection. Bar charts are a popular and commonly used tool for the interpretation of datasets; however, representing datasets with multi-scale variation in a bar chart is challenging due to limitations in viewing space. Each of our designs focuses on the static representation of datasets with large ranges with respect to important variations in the data. Through a user study, we measure the effectiveness of our designs for representing these datasets in comparison to traditional methods, such as a standard bar chart or a broken-axis bar chart, and state-of-the-art methods, such as a scale-stack bar chart. The evaluation reveals that our designs allow pieces of data to be visually compared at a level of accuracy similar to traditional visualizations. Our designs demonstrate advantages when compared to state-of-the-art visualizations designed to represent datasets with large outliers.
Index Terms: Human-centered computing-Visualization-Visualization techniques-Information Visualization;
## 1 INTRODUCTION
Today we are faced with large amounts of data with varying complexity [24]. This makes the visualization of large datasets challenging, especially when viewing space is limited. Different tools and charts are suited to different types of data [16]. Bar charts are one of the most commonly used data visualizations as they are simple and easy to interpret. However, datasets with a large range, with important variation at multiple scales, present unique visualization challenges. Examples can commonly be found in population data, as illustrated in Figure 1, which shows population data for several Canadian cities. A vertical limitation of the viewing space may require that a large amount of compression be applied to the data, which makes differences between values less readable. For example, in Figure 1, the largest value is the population of Toronto; the scale of the chart needs to be set to accommodate such large values. Showing Toronto's population in the same chart as smaller cities such as Guelph and Kingston makes it difficult to measure the population of the smaller cities. When the scaling factor increases, or when data becomes more compressed, it becomes more difficult to make comparisons between pieces of data with close values.
The limitation that we focus on is the readability of charts with multi-scale variation in the dataset. A linear mapping between the range of the data and the height of the viewing space may result in undesirable compression of the charts. One potential solution is to use a non-linear mapping, such as a logarithmic function (see Figure 1, right). However, this type of mapping is difficult to read and understand in comparison to simple linear mappings [9].
How do we find a more natural solution to mapping datasets with large outliers onto a small viewing space? We propose a new technique for visualizing data with important variation at multiple scales using perspective projection. Humans naturally perceive perspective, and are able to estimate the size of distant objects through a property known as size constancy [2]. Using simple linear perspective, geometric proportions can be used to measure the size and relative differences of objects [4].
Our first design, which we call the slanted perspective chart, shows a bar chart that is slanted backwards from the viewing plane, such that it is viewed in perspective (See Figure 1, bottom). As the lower part of the graph appears closer to the reader, small values in the dataset become larger in comparison to a traditional bar chart.

Figure 1: Canadian cities with a population of more than 150,000, in a traditional bar chart (left), a bar chart with a logarithmic scale (right), and a slanted perspective chart (bottom).
The main problem with the solution of slanting a traditional bar chart is that larger values in the dataset become compressed due to the perspective projection. This may make large values more difficult to read and compare.
Our next chart, the stepped perspective chart, is designed to address the issue of scaling large values in our slanted perspective chart, while also improving the readability of small values. In bar charts, space in some parts of the chart is often wasted due to large differences in values or outliers in the dataset. We can reduce the amount of wasted space by visualizing this area at an extreme slant. This puts only the less important range of the data at an extreme angle; each bar's value is still measurable in an area that is perpendicular to the view (see Figure 2, left).
This design is intended to resemble a staircase; we can insert multiple bends in the axis in a single chart to compress multiple areas of the chart and eliminate multiple areas of unused space (see Figure 2, right). Since the tops of the bars are not slanted or foreshortened in the stepped perspective chart, the values are emphasized more strongly than in the slanted perspective chart.
The stepped perspective chart is conceptually similar to a traditional broken-axis bar chart (see Figure 3), which also addresses the issue of wasted space in areas where there are large gaps in the data. However, since a broken-axis bar chart essentially cuts out a portion of the graph, the ability to visually estimate and compare data is lost, unlike in our stepped perspective chart.
Both our slanted perspective chart and stepped perspective chart contain some wasted space around the upper corners of the viewing space. To eliminate areas of unused space wherever possible, we introduce a third type of perspective chart, called the circular perspective chart (see Figure 4). Our design for this chart is inspired by the impression of looking up at tall buildings and skyscrapers from a low vantage point. The horizontal axis of the chart is mapped to a circle, with the vertical axis extending away from the reader's view. This chart occupies a consistent viewing space regardless of the scale of the data or the number of entries in the dataset.

Figure 2: Left: A stepped perspective chart. Right: A stepped perspective chart with multiple bends in the axis.

Figure 3: A broken-axis bar chart.
The data visualization challenges that we discuss related to readability in datasets with multi-scale variation can be addressed using dynamic visualization methods, such as focus-plus-context; however, we focus on a static method of addressing these issues. We introduce a new class of charts comparable to traditional static bar charts, and note that commonly used interactive techniques for bar charts can also be used with our perspective charts.
To evaluate our visualizations, we conducted a user study with twenty-four participants. The study quantitatively measured the speed and accuracy with which users could read data from our charts in comparison to traditional methods, such as a standard bar chart or a broken-axis bar chart, and state-of-the-art methods, such as a scale-stack bar chart. We also performed a qualitative evaluation of our three designs. Participants generally responded favorably to our visualizations, and were able to read data from them as accurately as with the traditional methods in fifteen out of seventeen task types, and performed more strongly than a recent method, scale-stack bar charts [9], in three out of four tasks.

Figure 4: Population of cities in the state of Wisconsin in the United States, in a circular perspective chart. Each line marks an increment of 50,000.
## 2 BACKGROUND AND RELATED WORK
Given the increasing size and complexity of available datasets, finding clear and readable methods of visualization is becoming more challenging [12]. Traditional methods such as bar charts are not always a practical choice when visualizing datasets with a large range with respect to important variations in the data [11]. In this section, we first provide a short review of research on visualizing complex datasets using variations of bar charts. Since we use perspective projection in our charts, we provide a short review of literature on the ways that perspective affects human perception.
### 2.1 Visualizing Data with Multi-Scale Variation
When a range of data is mapped to a bar chart, a scaling factor is applied such that all of the data can be represented in the viewing space. In datasets with large outliers, it may not be possible to fit the chart into a limited viewing space without applying scaling that decreases legibility. To address this problem, alternatives to bar charts are used in some applications. Karduni et al. wrap large bars over a certain threshold back over the y-axis in their Du Bois wrapped bar chart [10]; a similar technique is described by Reijner's horizon graphs [19], evaluated by Heer et al. [8] as an effective technique. Hlawatsch et al. compare their scale-stack bar charts with logarithmic and broken bar charts for the visualization of datasets with a large scale [9]. An example of a scale-stack bar chart is shown in Figure 6.

Figure 5: A radial bar chart.
One traditional alternative to bar charts for this use case is the broken-axis bar chart. Broken-axis bar charts eliminate areas of unused space between values in a bar chart, visualized as a discrete jump in values in the y-axis. However, truncating the y-axis of a chart in this manner has been shown to negatively affect the perception of scale in datasets [3]. We compare broken-axis bar charts to our stepped perspective chart in our evaluation.
Charts scaled with a logarithmic function are also sometimes used to represent datasets with a large range. However, this type of scale is not typically used in bar charts, as it may be difficult to interpret given that it is non-linear [9].
### 2.2 Variations of Bar Charts
There exist several proposed solutions to common problems with bar charts. In cases where a guaranteed $1 : 1$ aspect ratio may be desirable for a visualization, a circular chart such as a radial bar chart may be suitable (see Figure 5). A radial bar chart occupies a fixed viewing space regardless of the scale of its data. Luboschik's work on particle-based map label placement [14] highlights the use of circular charts in geospatial data visualization, where point-based icons are useful. Despite their popularity, circular chart types are generally discouraged by visualization experts, as they tend to be more difficult to read than a traditional bar chart [7].
Skau et al. evaluate the impact of visual embellishments in bar charts [21], taking into account human perception and aesthetic factors in their analysis. The results of their evaluation show that simple embellishments like rounded or triangular bars have strong effects on human perception, and in some tasks will negatively affect performance. Their evaluation found that humans rely on strong lines at the ends of the bars to accurately estimate values. In our stepped perspective charts, the tops of the bars remain visible and perpendicular to the view in each cluster of data.

Figure 6: A scale-stack bar chart. The chart represents one dataset at three different scales stacked on top of one another.
### 2.3 Human Perception
Perspective, in combination with lighting, distance, and angle, contributes to human perception of information [6]. The visual shape and size of an object change as its distance and orientation change relative to the viewer [20]. However, the concept of size constancy explains that the perception of an object's size does not change with the object's distance from the viewer [18]. This is true even for two-dimensional representations of three-dimensional scenes (see Figure 7), due to humans' natural ability to account for perspective and the reduction of the projected size of an object when estimating its true size [23]. Size constancy is one of the types of natural constancy in human perception of the distance and scale of objects. We use this feature of perception in the design of our perspective charts.
Mackinlay et al. [15] use perspective projection in their technique called the Perspective Wall. This interactive technique addresses data with "wide aspect ratios" by placing the area of focus on a flat plane, with surrounding contextual data placed on planes slanted away from the viewer. Other aspects of human perception are used in the design of various hierarchical data visualizations, such as those described by Gestalt psychology principles [13] of closure [17] and continuity [5].
## 3 Methodology
We propose the use of perspective as a mapping of bar charts in three different designs: the slanted, stepped, and circular perspective charts. In this section, we present design rationale and methods for creating our three different perspective charts.
### 3.1 Slanted Perspective Charts
Slanted perspective charts, as shown in Figure 1, are similar to traditional bar charts, but have the vertical axis of the chart slanting away from the viewer. This design is inspired by drawings and images that portray one-point perspective, i.e. images with a single vanishing point, as shown in the photograph in Figure 7. We use a simple 3D environment with a predefined camera setup, avoiding the need for user input for 3D interaction. Slanting the chart brings smaller values closer to the viewer, while moving larger values away.
The slant in the vertical axis of the chart can be achieved either by viewing the chart from a lower angle, or by maintaining the same viewpoint and instead slanting the chart plane backwards from the viewer in three-dimensional space. We choose the latter option in order to maintain a consistent viewing space. Slanting the chart moves large values in the chart away from the viewer, and decreases the space between scale lines.

Figure 7: An example of single-point perspective. Due to size constancy, we perceive the height of the statues in the red and blue boxes to be equal. In image space, the statue in the red box is half the height of the statue in the blue box.
We restrict the foreshortening ratio in order to limit the compression of large values as they move away from the viewer. The foreshortening ratio measures how objects viewed at an angle appear to be shorter than their true measurement. We slant the y-axis of the chart at a fixed angle $\theta$ , which controls the foreshortening ratio ${f}_{r}$ of the slanted line $L$ compared to the viewing plane $V$ :
$$
{f}_{r} = \frac{L}{V} = \sec \left( \theta \right).
$$
For example, when $\theta = {60}^{ \circ }$, the foreshortening ratio ${f}_{r}$ is 2. In general, $1 \leq {f}_{r} < \infty$ for $0 \leq \theta < \frac{\pi }{2}$.
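The relationship between the slant angle and the foreshortening ratio follows directly from the formula above:

```python
import math

def foreshortening_ratio(theta_deg):
    # f_r = L / V = sec(theta): the ratio between a length L on the slanted
    # chart plane and its projection V on the viewing plane.
    return 1.0 / math.cos(math.radians(theta_deg))

print(foreshortening_ratio(0))   # 1.0: no slant, no foreshortening
print(foreshortening_ratio(60))  # 2.0: projected bars appear half as long
```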
We avoid slanting the chart at extreme angles in order to control the foreshortening ratio ${f}_{r}$ and maintain readability of large values. Figure 9 demonstrates how a change in $\theta$ impacts foreshortening.
### 3.2 Stepped Perspective Charts
Our stepped perspective chart, as seen in Figure 2, resembles a staircase showing multiple "tiers" of data. According to these tiers, we divide the range of the data into subranges ${R}_{1},{T}_{1},{R}_{2},{T}_{2},\ldots ,{R}_{n}$ (see Figure 10), where each ${R}_{i}$ is a cluster of the data and each ${T}_{i}$ is a transition. Each of these subranges represents a rectangular region of the chart. To create the stepped perspective chart, we use a vertical view plane with a view angle ${\theta }_{v} = {0}^{ \circ }$ for the ${R}_{i}$, and an extreme slant (${\theta }_{v} = {60}^{ \circ }$ in Figure 10) for the transitive regions ${T}_{i}$.
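The paper does not prescribe how the clusters ${R}_{i}$ are found; one simple, purely illustrative heuristic is to split the sorted values wherever the gap between consecutive values greatly exceeds the typical gap, treating the skipped intervals as the transitions ${T}_{i}$:

```python
def split_into_clusters(values, gap_factor=3.0):
    # Split sorted bar values into clusters R_i wherever the gap between
    # consecutive values exceeds gap_factor times the median gap; the
    # skipped intervals become the transitive regions T_i.
    vs = sorted(values)
    gaps = [b - a for a, b in zip(vs, vs[1:])]
    median_gap = sorted(gaps)[len(gaps) // 2]
    clusters, current = [], [vs[0]]
    for prev, nxt in zip(vs, vs[1:]):
        if median_gap > 0 and nxt - prev > gap_factor * median_gap:
            clusters.append(current)
            current = []
        current.append(nxt)
    clusters.append(current)
    return clusters

print(split_into_clusters([1, 2, 3, 100, 101]))  # [[1, 2, 3], [100, 101]]
```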
The stepped perspective chart is comparable to traditional broken-axis bar charts, which are also intended to address issues associated with large gaps between values in a dataset. However, broken-axis bar charts have been shown to negatively affect the perception of scale in datasets [3]. In a traditional broken-axis bar chart, without the use of labels, it is impossible to visually compare values on opposing sides of the break in the axis. In the stepped perspective chart, the area within the gap is still visible, as it runs at a different angle rather than being cut out of the chart entirely (see Figure 10). This way, visual estimation is still possible, and the reader is able to perceive the approximate size of the gap.
The amount of the transitional region of the chart that is visible can be adjusted. Figure 8 shows varying heights from which the chart can be viewed. While the axes are always bent at an angle of ${90}^{ \circ }$, the height of the camera affects the view angle. A height that is too low results in a high view angle, with scale lines positioned so closely together that they are no longer readable, while a low view angle lessens the impact of the separate regions of the chart. We choose a height that is just high enough to allow the viewer to distinguish between scale lines. The exact appropriate height depends on the resolution and size at which the chart is viewed.
### 3.3 Circular Perspective Charts
As seen in Figure 4, our circular perspective chart is inspired by the perception of tall buildings as viewed from a low vantage point, converging on a singular vanishing point. In this chart, the bars are placed in a closed polygon and extend away from the viewer, converging at a vanishing point at the center of the polygon.
We create our circular perspective chart by bending the horizontal axis to remove areas of unused space and accommodate a larger number of values in a limited viewing space. We can imagine that the entire bar chart is divided into multiple smaller sub-charts that are slanted individually (see Figure 11). The slanted sub-charts are then rotated to form a closed polygon. In the extreme case, each sub-chart is allocated to only one single value (see Figure 4). When there is no preferred clustering to create sub-charts, we use this extreme case as the main design of our circular perspective chart. By wrapping the horizontal axis of the chart to form a closed polygon, the chart is contained within a consistent view.
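As a geometric sketch (our own construction under a simple pinhole-camera assumption, not necessarily the authors' exact projection), each bar's tip can be placed by shrinking its polygon vertex toward the central vanishing point in proportion to the bar's depth:

```python
import math

def circular_bar_tip(value, n_bars, index, radius=1.0, viewer_dist=5.0):
    # Bars start on a regular n-gon of circumradius `radius` and extend away
    # from the viewer toward the center. Under a pinhole projection with the
    # viewer at distance `viewer_dist`, a point at depth `value` shrinks
    # toward the vanishing point by viewer_dist / (viewer_dist + value).
    angle = 2.0 * math.pi * index / n_bars
    scale = viewer_dist / (viewer_dist + value)
    tip_radius = radius * scale
    return (tip_radius * math.cos(angle), tip_radius * math.sin(angle))

print(circular_bar_tip(0.0, 4, 0))  # zero-height bar: tip stays on the polygon
```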
The vantage point of the viewer is a potential variable for interactive visualization with the circular perspective chart. Figure 12 shows an example of a low and a high viewing height for the chart shown in Figure 4. In the left chart, each scale line represents an increment of 50,000 for a total of twenty-four scale lines. The right chart's scale lines represent an increment of 150,000 for a total of eight scale lines. To make efficient use of space in the circular perspective chart, the scale should be chosen such that bars are compressed as little as possible while avoiding the issue of closely converging scale lines. The occurrence of this issue depends on the size and resolution at which the chart is displayed.
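Picking the increment involves exactly this trade-off between compressed bars and converging scale lines. A hypothetical helper (`scale_increment` is our own illustration, not part of the paper) could choose the smallest "nice" increment that keeps the line count within a budget:

```python
import math

def scale_increment(max_value, max_lines):
    # Return the smallest 1/2/5 x 10^k increment such that at most
    # max_lines scale lines are needed to reach max_value.
    magnitude = 1
    while True:
        for base in (1, 2, 5):
            inc = base * magnitude
            if math.ceil(max_value / inc) <= max_lines:
                return inc
        magnitude *= 10

print(scale_increment(1_150_000, 24))  # 50000 for a ~1.15M maximum (assumed data range)
```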
The use of circular chart types is generally discouraged by visualization professionals [7]. However, they are frequently used in practice due to their known 1:1 aspect ratio, independent of the number of bars represented. This property is also present in our circular perspective chart. Luboschik demonstrates that circular chart types are useful in geospatial data visualization [14]. The use of bar charts in spatial data visualization may present limitations in the available viewing space, hence it is worth exploring a circular chart type that has a consistent aspect ratio.
## 4 EVALUATION
We conducted a within-subjects user study to evaluate the readability and novelty of perspective charts. The study quantitatively measured the accuracy and speed with which users answered a series of questions based on data shown in various charts, and collected participants' opinions of the three types of perspective charts in a qualitative study.
In our user study, we evaluate the following hypotheses:
H1 For data with important variation at multiple scales, small values are more easily readable in a slanted perspective chart than in a traditional bar chart.
H2 The stepped perspective chart allows for faster and easier estimation and comparison of values than broken-axis bar charts and scale-stack bar charts, two other axis-breaking methods for visualizing datasets with large outliers.

Figure 8: A stepped perspective chart viewed at three different heights, resulting in varying view angles and amounts of viewing space occupied by the transitional region of the chart. Left: ${\theta }_{v} = {80}^{ \circ }$. Center: ${\theta }_{v} = {60}^{ \circ }$. Right: ${\theta }_{v} = {40}^{ \circ }$.

Figure 9: A comparison of methods for slanting a traditional bar chart. The left chart's vertical axis is slanted away from the viewer at an angle of ${30}^{ \circ }$ , stretching the axis in the process. The chart on the right is slanted at an angle of ${60}^{ \circ }$ .

Figure 10: An example of a dataset with two clear clusters ${R}_{1}$ and ${R}_{2}$, and a transitive range ${T}_{1}$.
H3 In a dataset with important variation at multiple scales, small values are more easily readable in a circular perspective chart than in a traditional bar chart, while occupying a consistent viewing space. The ease of use of the chart should be comparable to existing fixed-viewing-space visualizations like the radial bar chart.
We describe one hypothesis per perspective chart design. The hypotheses are based on the concept of size constancy: in each block of tasks, smaller values, which appear "closer" to the user, should be more easily readable due to the increased scale, without sacrificing readability of larger values, which appear farther away from the user and therefore visually smaller.
H1 compares a traditional bar chart to the slanted perspective chart, a simple modification of a traditional visualization that introduces three-dimensional perspective. H2 compares the stepped perspective chart to other chart types that use axis-breaking methods for showing important variation at multiple scales. Since the stepped perspective chart uses a bend in the axis to show a large difference in scale between values, we choose to compare this design to existing chart designs that feature breaks in the y-axis. We evaluate whether the ability to visualize the area within the axis break allows values on either side to be more easily compared. H3 compares the circular perspective chart to a traditional bar chart and a radial bar chart. This is to evaluate the circular perspective chart's performance compared to an existing circular chart type as well as a traditional chart type.
### 4.1 Study Design
We performed studies on an individual basis over the course of approximately 60 minutes per participant. Each participant was shown a series of charts and answered a list of questions based on the data in these charts, performing tasks drawn from common visualization task taxonomies [1, 22]. Participants answered questions based on a traditional bar chart, a radial bar chart, a broken-axis bar chart, a scale-stack bar chart [9], as shown in Figure 6, and our slanted, stepped, and circular perspective charts.
#### 4.1.1 Tasks
We designed six types of tasks based on visualization taxonomies [1, 22]. The task types used are:
- Retrieve Value - "What is the population of Franklin?"
- Determine Magnitude Difference - "How much larger is Milwaukee than Oak Creek? (For example, 2x larger, 3.5x larger)"
Figure 11: The circular perspective chart is created by grouping sections of a large chart into several smaller sub-charts. In this example we have five sub-charts. Each of the sub-charts is individually slanted, then rotated to form a closed polygon.
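The grouping-and-rotation step described in the caption can be sketched as follows. This is a minimal sketch with hypothetical function names: it only computes the sub-chart partition and the rotation angle assigned to each sub-chart; the per-sub-chart slanting (the same transform used for the slanted perspective chart) is omitted.

```python
import math

def circular_layout(values, k):
    """Illustrative construction: split `values` into k sub-charts, then
    assign each sub-chart a rotation of 360/k degrees so that the slanted
    sub-charts close into a polygon."""
    per = math.ceil(len(values) / k)
    sub_charts = [values[i * per:(i + 1) * per] for i in range(k)]
    rotations = [i * 360.0 / k for i in range(k)]
    return list(zip(rotations, sub_charts))

# Twenty bars grouped into five sub-charts of four bars each,
# rotated in 72-degree increments, as in the figure's example.
for angle, bars in circular_layout(list(range(20)), 5):
    print(angle, bars)
```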
Figure 12: A circular perspective chart showing the population of cities in Alberta, with two different scaling factors. In the top chart, each line marks an increment of 50,000. In the bottom chart, each line marks an increment of 150,000.
- Determine Range - "What is the range of the data? (Smallest population to largest population)"
- Find Extremum - "Which city has the smallest population?"
- Filter - "List all cities with a population of less than 100,000."
- Sort - "Sort the cities by population from smallest to largest."
#### 4.1.2 Methodology
Each participant completed five task blocks (B1 - B5), during which they completed a set of tasks using one chart type followed by a matching set of tasks using a second chart type:
B1: traditional bar chart / slanted perspective chart
B2: scale-stack bar chart / stepped perspective chart
B3: broken-axis bar chart / stepped perspective chart
B4: radial bar chart / circular perspective chart
B5: traditional bar chart / circular perspective chart
Within each block, each participant first performed a series of either four or five different tasks (see Section 4.1.1) using a chart of the first type, then completed a matching set of tasks using a chart of the second type that visualized a different data set. We maintained the same block, task, and dataset order for all participants, but varied the order of the chart types within each block. For example, in B1 half of the participants completed their first five tasks using a traditional bar chart. The other half started by completing the same set of tasks using a slanted perspective chart that visualized the exact same data.
We chose this blocking scheme for the evaluation in order to maintain a pairing between our designs and existing chart types, and tailor task types within the blocks based on the charts used.
Each participant completed a total of forty-two tasks: five tasks (determine magnitude difference between a small and a large value, determine magnitude difference between two small values, filter, find extremum, and retrieve small value) for each chart type in B1, and four tasks for each chart type in B2–B5. B2–B5 had three tasks of the same types (determine magnitude difference between a small and a large value, determine magnitude difference between two small values, and retrieve small value), plus one additional task of a varying type: a find extremum task for B2, a retrieve large value task for B3, a sort task for B4, and a filter task for B5.
After completing the main set of tasks, participants answered a short post-study questionnaire evaluating each type of chart used in the study. This was followed by a short verbal interview where we further gathered their opinions on the charts used in the study.
#### 4.1.3 Participants
We recruited 24 participants (12 female) using posters distributed across our local campus as well as via word of mouth. We based this cohort size on those used in similar evaluations [8, 9]. Twenty-two participants were students at the time of the study; 16 participants studied in STEM fields, while the remaining participants worked or studied in arts, business, or social sciences.
A post-hoc power analysis was performed on the results of our evaluation for each task type in each block with our sample size of 24 participants, using G*Power 3.1. Among these analyses, the lowest reported power was 0.74. Thus for each task type in our evaluation we have at least a 74% probability of detecting a true effect, given our sample size of 24.
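The G*Power configuration is not reported beyond the tool version, but the shape of such a power computation can be sketched with a two-sided normal approximation for a two-sample comparison. This is an idealized stand-in, not a reproduction of G*Power's routine (G*Power's Mann-Whitney calculation additionally applies an efficiency correction relative to the t-test, omitted here).

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate two-sided power of a two-sample comparison with
    standardized effect size d and equal group sizes, via the normal
    approximation of the test statistic's distribution."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality parameter
    return (1 - nd.cdf(z - ncp)) + nd.cdf(-z - ncp)

# With 24 responses per chart, a fairly large effect (d around 0.75) is
# needed before approximate power under this model reaches about 0.74.
print(approx_power(0.75, 24))
```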
#### 4.1.4 Datasets
Since tasks were divided into five different blocks for each participant, we used two unique datasets in each block for a total of ten unique datasets. We chose real datasets that satisfied our use case of clusters of data across a large range. Six datasets showed population data across various regions, two represented pollution data, and two represented precipitation data. We used data that was likely to be unfamiliar to the participants to reduce potential bias resulting from preexisting knowledge about the data. We did this by using data from regions that were not geographically close to the location where the study was performed, or by obscuring the names of the locations in the datasets ("City 1" instead of "Vancouver", etc.).
#### 4.1.5 Test Environment
Before beginning the study, participants were given an explanation of the consent process, monetary compensation and risks associated with the study, as approved by the Conjoint Faculties Research Ethics Board of the University of Calgary. After indicating their consent, participants completed a short pre-questionnaire, then proceeded to the main portion of the study. Tasks were completed on paper in a prepared booklet provided to participants.
Figure 13: Results showing the total time for participants to complete all tasks in a block of the evaluation. For this and subsequent charts, the bar in the grey box represents the median. The p-value and effect size (r) of each task type are shown in the table. Each box denotes the 95% confidence interval of the median. Each dot represents the task completion time of an individual participant.
Each chart presented to participants included scale and axis labels in a consistent font style and size across charts. For the circular perspective chart, the chart's scale was labeled in the top-left corner of the visualization, as in Figure 4. Bars were unshaded to maintain simplicity in the chart designs. The appearance of charts used in the evaluation is comparable to the charts in Figures 1, 5 and 6.
### 4.2 Results
We compare task completion time and percentage of error between sets of tasks. The Shapiro-Wilk test indicates that our data do not follow a normal distribution, so we compare methods using a Mann-Whitney U test and examine the median error rate for each task. For each test we report effect sizes (r) and p-values. All data is shared on the Open Science Framework (https://osf.io/w3fce/?view_only=23ff1dded@b74363a68ec86419c9c373).
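The reported pipeline (normality check, then a nonparametric comparison with an effect size) can be sketched as follows. The function name, the data layout, and the tie-ignoring normal approximation used to recover $r = |Z|/\sqrt{N}$ from the U statistic are our assumptions, not the authors' code.

```python
import math
from scipy.stats import shapiro, mannwhitneyu

def compare_conditions(a, b, alpha=0.05):
    """Shapiro-Wilk on each sample, then a two-sided Mann-Whitney U test.
    The effect size r = |Z| / sqrt(N) is recovered from U via the normal
    approximation of the U distribution (tie correction omitted)."""
    n1, n2 = len(a), len(b)
    normal = shapiro(a)[1] > alpha and shapiro(b)[1] > alpha
    u, p = mannwhitneyu(a, b, alternative="two-sided")
    mu = n1 * n2 / 2                                # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12) # sd of U under H0
    r = abs(u - mu) / sigma / math.sqrt(n1 + n2)
    return normal, p, r
```

For two fully separated samples of 12 values each, this yields a very small p and an r near the maximum achievable for that sample size.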
#### 4.2.1 Time
Timing data was measured as the total amount of time each participant took to complete all the tasks for one block. Participants had the same number and type of tasks for each chart in a pairwise comparison. This data is shown in Figure 13.
Slanted perspective chart: We did not observe a significant difference in task completion time ($p = 0.687$, $r = 0.06$) between the slanted perspective chart and a traditional bar chart.
Stepped perspective chart: We observed that participants' task completion time was significantly faster ($p < 0.001$, $r = 0.53$) using our stepped perspective chart (med $= 186.5$ s) than a scale-stack bar chart (med $= 313.5$ s). We did not observe a difference ($p = 0.988$, $r = 0.00$) between our stepped perspective chart and a broken-axis bar chart.
Circular perspective chart: We did not observe a significant difference in task completion time ($p = 0.140$, $r = 0.21$) between our circular perspective chart and a traditional bar chart. There was also no significant difference ($p = 0.702$, $r = 0.06$) between our circular perspective chart and a radial bar chart.
#### 4.2.2 Accuracy
We examine the absolute percentage of error when determining statistical significance of accuracy results; we compute this for each task by comparing each participant's numerical response with the true correct value.
Figure 14: Accuracy results for block one of our evaluation, comparing the slanted perspective chart (orange) to a traditional bar chart (blue). Results are grouped by task type. Each dot represents the percentage of error of an individual response. For this and subsequent charts, each box denotes the 95% confidence interval of the median.
Figure 15: Accuracy results for block two, evaluating the stepped perspective chart compared to the scale-stack bar chart.
For sorting tasks and filtering tasks, which had non-numerical responses, error rate was determined by counting the number of mistakes made compared to the true answer, deducting "points" for each incorrect response. For example, in a filtering task to identify the cities with population below 100,000, if ten cities meet this criterion, a response that gives only nine correct cities would result in an error rate of $1/10 = 10\%$.
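The two scoring rules (absolute percentage of error for numerical answers, point deduction for list answers) can be written down directly. The helper names are illustrative; the set-based rule is our reading of the paper's filtering example.

```python
def numeric_error(response, truth):
    """Absolute percentage of error for tasks with numerical answers."""
    return abs(response - truth) / truth * 100

def filter_error(response, truth):
    """Error rate for filter-style tasks: each missing or spurious item
    deducts one 'point' out of the size of the true answer set."""
    response, truth = set(response), set(truth)
    mistakes = len(truth - response) + len(response - truth)
    return mistakes / len(truth) * 100

# The paper's example: ten qualifying cities, only nine listed correctly.
cities = ["city%d" % i for i in range(10)]
print(filter_error(cities[:9], cities))   # one mistake out of ten
print(numeric_error(110000, 100000))      # reading 110,000 for a true 100,000
```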
We did not observe a significant trend in the directionality of error for any task block. For each block of tasks, results were corrected for multiple comparisons to control the false discovery rate. Median results for each block of the evaluation are shown in Figs. 14 to 18.
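The specific correction procedure is not named; the Benjamini-Hochberg step-up procedure is the standard way to control the false discovery rate, and works as sketched below (a generic sketch, not necessarily the authors' exact choice).

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns which hypotheses are
    rejected while controlling the false discovery rate at level q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank i such that p_(i) <= (i / m) * q
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q:
            k = rank
    rejected = [False] * m
    for i in order[:k]:          # reject all hypotheses up to rank k
        rejected[i] = True
    return rejected

# The first three p-values survive correction at q = 0.05; the last does not.
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5]))
```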
Slanted perspective chart: Between this and a traditional bar chart, no significant difference was shown in the median error rate for any task type ($p > 0.4$, $r < 0.3$ for all tasks; see Figure 14).
Stepped perspective chart: Between this and a broken-axis bar chart, there was no significant difference in accuracy demonstrated for any task type ($p > 0.6$, $r < 0.3$ for all tasks; see Figure 16).
The stepped perspective chart significantly outperformed the scale-stack bar chart for tasks related to determining magnitude difference between large and small values (stepped med $= 11.87$, scale-stack med $= 48.72$, $p = 0.047$, $r = 0.32$), determining magnitude difference between two small values (stepped med $= 10.04$, scale-stack med $= 29.64$, $p = 0.035$, $r = 0.34$), and finding extremum (stepped med $= 0.53$, scale-stack med $= 12.12$, $p < 0.001$, $r = 0.76$). We did not observe a significant difference in error rate for tasks related to retrieving small values from the chart ($p = 0.263$, $r = 0.17$). Results are shown in Figure 15.
Figure 16: Accuracy results for block three, evaluating the stepped perspective chart compared to a broken-axis bar chart.
Figure 17: Accuracy results for block four, evaluating the circular perspective chart compared to a radial bar chart.
Circular perspective chart: No significant difference was shown in the median error rate for tasks performed with a traditional bar chart and our circular perspective chart ($p > 0.7$, $r < 0.2$ for all task types; see Figure 18).
Compared to the circular perspective chart, participants were significantly more accurate using a radial bar chart for tasks related to retrieving values (circular med $= 5.56$, radial med $= 0.00$, $p = 0.031$, $r = 0.35$) and sorting (circular med $= 8.33$, radial med $= 0.00$, $p = 0.031$, $r = 0.36$). No significant difference was demonstrated for tasks determining magnitude difference, either between small and large values ($p = 0.361$, $r = 0.13$) or between two small values ($p = 0.361$, $r = 0.16$). Results are shown in Figure 17.
#### 4.2.3 Readability and Novelty
We gathered participants' opinions on the different types of visualizations in a post-study questionnaire. Responses are visible in Figure 19. The two most well-known types of visualizations, the traditional bar chart and the broken-axis bar chart, were the most well-received. The other types of visualizations were less familiar to participants and received lower scores, with the circular perspective chart receiving slightly less favourable scores.
Slanted perspective chart: According to the post-study questionnaire, the slanted perspective chart was the most well-received of our three designs. Two participants described the chart as "straightforward" (P1, P8). P12 stated that they preferred the slanted perspective chart because it allows the user to "see the whole range of data, unchanged, and still read the smaller values."
Stepped perspective chart: Several participants indicated that they felt the design of the scale-stack bar chart was complicated, which made it more difficult to use. P10 felt it was "hard to go back and forth" between scales when using this chart.
Figure 18: Accuracy results for block five, evaluating the circular perspective chart compared to a traditional bar chart.
Circular perspective chart: Two participants felt that the circular perspective chart was more suitable as an artistic representation of data. P6 felt it was an effective chart type for "making an impact."
The circular perspective chart received the most "very hard to use" scores out of all the types of visualizations; nine out of twenty-four participants found it difficult to retrieve large values from the circular perspective chart. P10 noted that they would often "lose count when the perspective got smaller for the higher numbers." Nine out of twenty-four participants indicated that the effect of the perspective was too extreme in the circular perspective chart.
## 5 DISCUSSION
When compared to traditional chart types, we saw similar results for our three different designs; none of these comparisons showed a significant difference in accuracy rate for any of the evaluated task types. This suggests that due to size constancy, the use of perspective did not impact participants' ability to visually interpret values.
Slanted perspective chart: While we initially hypothesized that reading small values would be easier for participants using a slanted perspective chart than in a traditional bar chart, the results did not demonstrate a significant difference in the accuracy or completion time of the two charts. Since our introduced charts are unfamiliar visualizations for the general public, it is unsurprising that participants were more comfortable with traditional methods, as indicated by the post-study evaluation and interview.
Participants performed similarly with both types of charts. This demonstrates that the use of perspective did not hinder their ability to perform tasks, due to size constancy. However, there was no evidence that participants retrieved small values more accurately with our designs as we hypothesized they would. In fact, median error rate was exactly the same for tasks of this type with both the slanted perspective chart and a traditional bar chart.
Stepped perspective chart: As with the slanted perspective chart, the stepped perspective chart showed no significant difference in accuracy or timing compared to the traditional comparison method, in this case the broken-axis bar chart.
Participants were able to complete tasks significantly more quickly and accurately with the stepped perspective chart than the scale-stack bar chart, another unfamiliar visualization. This reinforces that axis-breaking methods have a negative effect on visual perception, as suggested by Correll et al. [3]. These results also suggest that the use of perspective allowed for a more intuitive understanding of the visualization than the scale-stack bar chart for representing datasets with important variation at different scales. Charts that utilize perspective and size constancy may be a viable alternative to axis-breaking techniques for data visualization.
Circular perspective chart: The circular perspective chart did not show a difference in accuracy compared to a traditional bar chart. However, the comparison fixed-viewing-space visualization, the radial bar chart, showed significantly higher accuracy than the circular perspective chart for sorting and value-retrieving tasks.
Figure 19: Qualitative results of our user study evaluation.
A previous evaluation performed by Goldberg and Helfman suggested that task completion speed was significantly lower in circular chart types than in traditional bar charts, due in part to the placement of labels relative to the chart's data [7]. One of the evaluated charts was a radial area graph, which has similar label placement features to the circular perspective chart. However, we have not observed a significant difference in task completion time between the circular perspective chart and either the radial bar chart or a traditional bar chart.
Participant feedback about the circular perspective chart was mixed but generally more negative than the other charts shown in the study; some participants felt that the chart was visually interesting but perhaps not suitable for retrieving data in the same way as the other evaluated charts. In the interview portion of the evaluation, several participants indicated that for larger values, the bars became too severely compressed in the circular perspective chart.
Based on error rates and participant feedback, it seems that the circular perspective chart design is not often suitable for accurate reading of values, as some participants stated the design was disorienting or confusing. However, participants also felt that the design was impactful, and may be appropriate for artistic visualizations.
While it is promising that participants had similar accuracy between our designs and traditional chart types, there were limitations to our evaluation. Further studies could evaluate the potential of size constancy applied to data visualization. Our evaluation has demonstrated that it is a viable solution to certain types of visualization challenges.
## 6 CONCLUSION
We have introduced three novel chart designs, called perspective charts, to address limitations of traditional bar charts caused by undesirable scaling factors in a fixed viewing space. Our designs can open up new possibilities for visualizing datasets with multi-scale variation using the natural perception of size constancy. We provide design rationale for our three chart designs and evaluate their usability in a user study.
Evaluation showed no significant difference in performance between traditional visualizations (a traditional bar chart and a broken-axis bar chart) and our slanted and stepped perspective charts. This suggests that the use of perspective did not affect participants' ability to perform tasks in these types of charts.
Participants performed tasks significantly more quickly with the stepped perspective chart than with the scale-stack bar chart, another recent visualization design intended to represent datasets with important data at multiple scales, and more accurately in three out of four task types. The circular perspective chart showed less accurate results than a radial bar chart for some tasks.
Our circular perspective chart was generally not well-received by participants. Some participants raised concerns about the compression applied to large values in the chart, and others expressed that it may be more suitable as an artistic data visualization.
Results for H1 and H2 indicate that the use of perspective projection in data visualization is worth further examination. The designs of the slanted perspective chart and stepped perspective chart were positively received by participants and performed comparably to traditional methods in our evaluation. The stepped perspective chart in particular performed well compared to an existing visualization design, the scale-stack bar chart, for our use case of visualizing data with important variation at multiple scales.
Further evaluation could reinforce the results observed here. Based on our evaluation results, the slanted and stepped perspective charts particularly merit more in-depth evaluation as alternatives to existing chart types.
## 7 ACKNOWLEDGMENTS
The authors would like to thank Lora Oehlberg for her guidance with the analysis of our evaluation results.
## REFERENCES

[1] R. Amar, J. Eagan, and J. Stasko. Low-level components of analytic activity in information visualization. In Proceedings of the 2005 IEEE Symposium on Information Visualization, INFOVIS '05, pp. 15–. IEEE Computer Society, Washington, DC, USA, 2005. doi: 10.1109/INFOVIS.2005.24

[2] N. Carlson. Psychology: The Science of Behaviour. Pearson Canada Inc., 4th ed., 2010.

[3] M. Correll, E. Bertini, and S. L. Franconeri. Truncating the y-axis: Threat or menace? In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20. Association for Computing Machinery, New York, NY, USA, 2020.

[4] C. J. Erkelens. Computation and measurement of slant specified by linear perspective. Journal of Vision, 13(13):16–27, Nov. 2013.

[5] K. Etemad, D. Baur, J. Brosz, S. Carpendale, and F. F. Samavati. PaisleyTrees: A size-invariant tree visualization. EAI Endorsed Transactions on Creative Technologies, 1(1):e2, 2014.

[6] J. J. Gibson. Pictures, perspective, and perception. Daedalus, 89(1):216–227, 1960.

[7] J. Goldberg and J. Helfman. Eye tracking for visualization evaluation: Reading values on linear versus radial graphs. Information Visualization, 10(3):182–195, 2011. doi: 10.1177/1473871611406623

[8] J. Heer, N. Kong, and M. Agrawala. Sizing the horizon: The effects of chart size and layering on the graphical perception of time series visualizations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pp. 1303–1312. ACM, New York, NY, USA, 2009. doi: 10.1145/1518701.1518897

[9] M. Hlawatsch, F. Sadlo, M. Burch, and D. Weiskopf. Scale-stack bar charts. Computer Graphics Forum, 32(3):181–190, 2013.

[10] A. Karduni, R. Wesslen, I.-S. Cho, and W. Dou. Du Bois wrapped bar chart: Visualizing categorical data with disproportionate values. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20. Association for Computing Machinery, New York, NY, USA, 2020.

[11] D. A. Keim, M. C. Hao, U. Dayal, and M. Hsu. Pixel bar charts: A visualization technique for very large multi-attribute data sets. Information Visualization, 1(1):20–34, 2002.

[12] D. A. Keim, F. Mansmann, J. Schneidewind, and H. Ziegler. Challenges in visual data analysis. In Tenth International Conference on Information Visualisation (IV'06), pp. 9–16. IEEE, 2006. doi: 10.1109/IV.2006.31

[13] K. Koffka. Principles of Gestalt Psychology. Routledge, 1935.

[14] M. Luboschik, H. Schumann, and H. Cords. Particle-based labeling: Fast point-feature labeling without obscuring other visual features. IEEE Transactions on Visualization and Computer Graphics, 14(6):1237–1244, Nov. 2008.

[15] J. D. Mackinlay, G. G. Robertson, and S. K. Card. The perspective wall: Detail and context smoothly integrated. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '91, pp. 173–176. ACM, New York, NY, USA, 1991.

[16] T. Munzner. Visualization Analysis and Design. AK Peters/CRC Press, 2014.

[17] P. Neumann, M. S. T. Carpendale, and A. Agarawala. PhylloTrees: Phyllotactic patterns for tree layout. In EuroVis, vol. 6, pp. 59–66. Citeseer, 2006.

[18] J. A. Polack, L. A. Piegl, and M. L. Carter. Perception of images using cylindrical mapping. The Visual Computer, 13(4):155–167, 1997.

[19] H. Reijner. The development of the horizon graph. In Electronic Proceedings of the VisWeek Workshop From Theory to Practice: Design, Vision and Visualization, 2008.

[20] A. Shlahova. Problems in the perception of perspective in drawing. Journal of Art & Design Education, 19(1):102–109, 2000.

[21] D. Skau, L. Harrison, and R. Kosara. An evaluation of the impact of visual embellishments in bar charts. Computer Graphics Forum, 34(3):221–230, 2015.

[22] J. Talbot, V. Setlur, and A. Anand. Four experiments on the perception of bar charts. IEEE Transactions on Visualization and Computer Graphics, 20:2152–2160, 2014.

[23] C. Tyler and M. Kubovy. The rise of Renaissance perspective. Science and Art of Perspective, 2004.

[24] C. Ware. Information Visualization: Perception for Design. Elsevier, 2012.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/n2FDcSYcGkR/Initial_manuscript_tex/Initial_manuscript.tex
§ PERSPECTIVE CHARTS
Category: Research
§ ABSTRACT
We introduce three novel data visualizations, called perspective charts, based on the concept of size constancy in linear perspective projection. Bar charts are a popular and commonly used tool for the interpretation of datasets; however, representing datasets with multi-scale variation is challenging in a bar chart due to limitations in viewing space. Each of our designs focuses on the static representation of datasets with large ranges with respect to important variations in the data. Through a user study, we measure the effectiveness of our designs for representing these datasets in comparison to traditional methods, such as a standard bar chart or a broken-axis bar chart, and state-of-the-art methods, such as a scale-stack bar chart. The evaluation reveals that our designs allow pieces of data to be visually compared at a level of accuracy similar to traditional visualizations. Our designs demonstrate advantages when compared to state-of-the-art visualizations designed to represent datasets with large outliers.
Index Terms: Human-centered computing, Visualization, Visualization techniques, Information visualization
§ 1 INTRODUCTION
Today we are faced with large amounts of data with varying complexity [24]. This makes the visualization of large datasets challenging, especially when viewing space is limited. Different tools and charts are suited to different types of data [16]. Bar charts are one of the most commonly used data visualizations as they are simple and easy to interpret. However, datasets with a large range, with important variation at multiple scales, present unique visualization challenges. Examples can commonly be found in population data, as illustrated in Figure 1, which shows population data for several Canadian cities. A vertical limitation of the viewing space may require that a large amount of compression be applied to the data, which makes differences between values less readable. For example, in Figure 1, the largest value is the population of Toronto; the scale of the chart needs to be set to accommodate such large values. Showing Toronto's population in the same chart as smaller cities such as Guelph and Kingston makes it difficult to measure the population of the smaller cities. When the scaling factor increases, or when data becomes more compressed, it becomes more difficult to make comparisons between pieces of data with close values.
The limitation that we focus on is the readability of charts with multi-scale variation in the dataset. A linear mapping between the range of the data and the height of the viewing space may result in undesirable compression of the charts. One potential solution is to use a non-linear mapping, such as a logarithmic function (see Figure 1, right). However, this type of mapping is difficult to read and understand in comparison to simple linear mappings [9].
How do we find a more natural solution to mapping datasets with large outliers onto a small viewing space? We propose a new technique for visualizing data with important variation at multiple scales using perspective projection. Humans naturally perceive perspective, and are able to estimate the size of distant objects through a property known as size constancy [2]. Using simple linear perspective, geometric proportions can be used to measure the size and relative differences of objects [4].
Our first design, which we call the slanted perspective chart, shows a bar chart that is slanted backwards from the viewing plane, such that it is viewed in perspective (see Figure 1, bottom). As the lower part of the graph appears closer to the reader, small values in the dataset become larger in comparison to a traditional bar chart.
Figure 1: Canadian cities with a population of more than 150,000, in a traditional bar chart (left), a bar chart with a logarithmic scale (right), and a slanted perspective chart (bottom).
The main problem with the solution of slanting a traditional bar chart is that larger values in the dataset become compressed due to the perspective projection. This may make large values more difficult to read and compare.
Our next chart, the stepped perspective chart, is designed to address the issue of scaling large values in our slanted perspective chart, while also improving the readability of small values. In bar charts, space in some parts of the chart is often wasted due to large differences in values or outliers in the dataset. We can reduce the amount of wasted space by visualizing this area at an extreme slant. This puts only the less important range of the data at an extreme angle; each bar's value is still measurable in an area that is perpendicular to the view (see Figure 2, left).
This design is intended to resemble a staircase; we can insert multiple bends in the axis in a single chart to compress multiple areas of the chart and eliminate multiple areas of unused space (see Figure 2, right). Since the tops of the bars are not slanted or foreshortened in the stepped perspective chart, the values are emphasized more strongly than in the slanted perspective chart.
The stepped perspective chart is conceptually similar to a traditional broken-axis bar chart (see Figure 3), which also addresses the issue of wasted space in areas where there are large gaps in the data. However, since a broken-axis bar chart essentially cuts out a portion of the graph, the ability to visually estimate and compare data is lost, unlike in our stepped perspective chart.
Both our slanted perspective chart and stepped perspective chart contain some wasted space around the upper corners of the viewing space. To eliminate areas of unused space wherever possible, we introduce a third type of perspective chart, called the circular perspective chart (see Figure 4). Our design for this chart is inspired by the impression of looking up at tall buildings and skyscrapers from a low vantage point. The horizontal axis of the chart is mapped to a circle, with the vertical axis extending away from the reader's view. This chart occupies a consistent viewing space regardless of the scale of the data or the number of entries in the dataset.
Figure 2: Left: A stepped perspective chart. Right: A stepped perspective chart with multiple bends in the axis.
Figure 3: A broken-axis bar chart.
The data visualization challenges that we discuss related to readability in datasets with multi-scale variation can be addressed using dynamic visualization methods, such as focus-plus-context; however, we focus on a static method of addressing these issues. We introduce a new class of charts comparable to traditional static bar charts, and note that commonly used interactive techniques for bar charts can also be used with our perspective charts.
To evaluate our visualizations, we conducted a user study with twenty-four participants. The study quantitatively measured the speed and accuracy with which users could read data from our charts in comparison to traditional methods, such as a standard bar chart or a broken-axis bar chart, and state-of-the-art methods, such as a scale-stack bar chart. We also performed a qualitative evaluation of our three designs. Participants generally responded favorably to our visualizations, and were able to read data from them as accurately as with the traditional methods in fifteen out of seventeen task types, and performed more strongly than a recent method, scale-stack bar charts [9], in three out of four tasks.
Figure 4: Population of cities in the state of Wisconsin in the United States, in a circular perspective chart. Each line marks an increment of 50,000.
§ 2 BACKGROUND AND RELATED WORK
Given the increasing size and complexity of available datasets, finding clear and readable methods of visualization is becoming more challenging [12]. Traditional methods such as bar charts are not always a practical choice when visualizing datasets with a large range with respect to important variations in the data [11]. In this section, we first provide a short review of research on visualizing complex datasets using variations of bar charts. Since we use perspective projection in our charts, we provide a short review of literature on the ways that perspective affects human perception.
§ 2.1 VISUALIZING DATA WITH MULTI-SCALE VARIATION
When a range of data is mapped to a bar chart, a scaling factor is applied such that all of the data can be represented in the viewing space. In datasets with large outliers, it may not be possible to fit the chart into a limited viewing space without applying scaling that decreases legibility. To address this problem, alternatives to bar charts are used in some applications. Karduni et al. wrap large bars over a certain threshold back over the y-axis in their Du Bois wrapped bar chart [10]; a similar technique is described by Reijner's horizon graphs [19], evaluated by Heer et al. [8] as an effective technique. Hlawatsch et al. compare their scale-stack bar charts with logarithmic and broken bar charts for the visualization of datasets with a large scale [9]. An example of a scale-stack bar chart is shown in Figure 6.
Figure 5: A radial bar chart.
One traditional alternative to bar charts for this use case is the broken-axis bar chart. Broken-axis bar charts eliminate areas of unused space between values in a bar chart, visualized as a discrete jump in values in the y-axis. However, truncating the y-axis of a chart in this manner has been shown to negatively affect the perception of scale in datasets [3]. We compare broken-axis bar charts to our stepped perspective chart in our evaluation.
Charts scaled with a logarithmic function are also sometimes used to represent datasets with a large range. However, this type of scale is not typically used in bar charts, as it may be difficult to interpret given that it is non-linear [9].
§ 2.2 VARIATIONS OF BAR CHARTS
There exist several proposed solutions to common problems with bar charts. In cases where a guaranteed $1 : 1$ aspect ratio may be desirable for a visualization, a circular chart such as a radial bar chart may be suitable (see Figure 5). A radial bar chart occupies a fixed viewing space regardless of the scale of its data. Luboschik's work on particle-based map label placement [14] highlights the use of circular charts in geospatial data visualization, where point-based icons are useful. Despite their popularity, circular chart types are generally discouraged by visualization experts, as they tend to be more difficult to read than a traditional bar chart [7].
Skau et al. evaluate the impact of visual embellishments in bar charts [21], taking into account human perception and aesthetic factors in their analysis. The results of their evaluation show that simple embellishments like rounded or triangular bars have strong effects on human perception, and in some tasks will negatively affect performance. Their evaluation found that humans rely on strong lines at the ends of the bars to accurately estimate values. In our stepped perspective charts, the tops of the bars remain visible and perpendicular to the view in each cluster of data.
Figure 6: A scale-stack bar chart. The chart represents one dataset at three different scales stacked on top of one another.
§ 2.3 HUMAN PERCEPTION
Perspective, in combination with lighting, distance, and angle, contributes to human perception of information [6]. The visual shape and size of an object change as the object's distance and orientation change relative to the viewer [20]. However, the concept of size constancy explains that the perception of an object's size does not change with the object's distance from the viewer [18]. This is true even for two-dimensional representations of three-dimensional scenes (see Figure 7). This is due to humans' natural ability to account for perspective and the reduction of the projected size of an object when estimating its true size [23]. Size constancy is one of the types of natural constancy in human perception of the distance and scale of objects. We use this feature of perception in the design of our perspective charts.
Mackinlay et al. [15] use perspective projection in their technique called the Perspective Wall. This interactive technique addresses data with "wide aspect ratios" by placing the area of focus on a flat plane, with surrounding contextual data placed on planes slanted away from the viewer. Other aspects of human perception are used in the design of various hierarchical data visualizations, such as those described by Gestalt psychology principles [13] of closure [17] and continuity [5].
§ 3 METHODOLOGY
We propose the use of perspective projection as a mapping for bar charts in three different designs: the slanted, stepped, and circular perspective charts. In this section, we present the design rationale and methods for creating each of the three perspective charts.
§ 3.1 SLANTED PERSPECTIVE CHARTS
Slanted perspective charts, as shown in Figure 1, are similar to traditional bar charts, but have the vertical axis of the chart slanting away from the viewer. This design is inspired by drawings and images that portray one-point perspective, i.e. images with a single vanishing point, as shown in the photograph in Figure 7. We use a simple 3D environment and set a predefined camera setup to avoid user input for 3D interaction. Slanting the chart brings smaller values closer to the viewer, while moving larger values away.
The slant in the vertical axis of the chart can be achieved either by viewing the chart from a lower angle, or by maintaining the same viewpoint and instead slanting the chart plane backwards from the viewer in three-dimensional space. We choose the latter option in order to maintain a consistent viewing space. Slanting the chart moves large values in the chart away from the viewer, and decreases the space between scale lines.
Figure 7: An example of single-point perspective. Due to size constancy, we perceive the height of the statues in the red and blue boxes to be equal. In image space, the statue in the red box is half the height of the statue in the blue box.
We restrict the foreshortening ratio in order to limit the compression of large values as they move away from the viewer. The foreshortening ratio measures how objects viewed at an angle appear to be shorter than their true measurement. We slant the y-axis of the chart at a fixed angle $\theta$ , which controls the foreshortening ratio ${f}_{r}$ of the slanted line $L$ compared to the viewing plane $V$ :
$$
{f}_{r} = \frac{L}{V} = \sec \left( \theta \right) .
$$
For example, when $\theta = {60}^{ \circ }$, the foreshortening ratio ${f}_{r}$ is 2. In general, $1 \leq {f}_{r} < \infty$ where $0 \leq \theta < \frac{\pi }{2}$.
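
To make the relationship concrete, the foreshortening ratio and the resulting on-screen height of a bar can be sketched in a few lines. This is a minimal illustration of the formula above; the function names are ours, not part of any charting library:

```python
import math

def foreshortening_ratio(theta_deg):
    """f_r = sec(theta): how much longer the slanted axis L is
    than its projection V onto the viewing plane."""
    return 1.0 / math.cos(math.radians(theta_deg))

def projected_height(bar_height, theta_deg):
    """Apparent height of a bar on the slanted plane, as seen in the
    viewing plane (a simple orthographic approximation of the slant)."""
    return bar_height / foreshortening_ratio(theta_deg)

# At theta = 60 degrees, f_r = 2, so a bar appears at half its true height.
```
As the text notes, $\theta$ should be kept well below $90^{\circ}$: the ratio grows without bound as the plane approaches edge-on.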
We avoid slanting the chart at extreme angles in order to control the foreshortening ratio ${f}_{r}$ and maintain readability of large values. Figure 9 demonstrates how a change in $\theta$ impacts foreshortening.
§ 3.2 STEPPED PERSPECTIVE CHARTS
Our stepped perspective chart, as seen in Figure 2, resembles a staircase showing multiple "tiers" of data. According to these tiers, we divide the range of the data into subranges ${R}_{1},{T}_{1},{R}_{2},{T}_{2},\ldots ,{R}_{n}$ (see Figure 10), where each ${R}_{i}$ is a cluster of the data and the ${T}_{i}$ are transitions. Each of these subranges represents a rectangular region of the chart. To create the stepped perspective chart we use a vertical view plane with a view angle ${\theta }_{v} = {0}^{ \circ }$ for the ${R}_{i}$, and an extreme slant (${\theta }_{v} = {60}^{ \circ }$ for Figure 10) for the transitive regions ${T}_{i}$.
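
One simple way to derive the subranges ${R}_{i}$ and ${T}_{i}$ automatically is to treat any unusually large gap between consecutive sorted values as a transition. The heuristic below, including the `gap_factor` threshold, is our own illustrative assumption, not a procedure prescribed by the paper:

```python
def partition_ranges(values, gap_factor=5.0):
    """Split the data range into clusters R_i and transitions T_i.
    A gap between consecutive sorted values becomes a transition
    when it exceeds gap_factor times the median gap."""
    vs = sorted(values)
    gaps = [b - a for a, b in zip(vs, vs[1:])]
    median_gap = sorted(gaps)[len(gaps) // 2]
    clusters, transitions = [], []
    start = vs[0]
    for lo, hi, g in zip(vs, vs[1:], gaps):
        if g > gap_factor * median_gap:
            clusters.append((start, lo))   # close the current cluster R_i
            transitions.append((lo, hi))   # the large gap becomes T_i
            start = hi
    clusters.append((start, vs[-1]))       # final cluster R_n
    return clusters, transitions
```
For the two-cluster dataset of Figure 10, this would yield one transition spanning the unused range between the small and large values.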
The stepped perspective chart is comparable to traditional broken-axis bar charts, which are also intended to address issues associated with large gaps between values in a dataset. However, broken-axis bar charts have been shown to negatively affect the perception of scale in datasets [3]. In a traditional broken-axis bar chart, without the use of labels, it is impossible to visually compare values on opposing sides of the break in the axis. In the stepped perspective chart, the area within the gap is still visible, as it runs at a different angle rather than being cut out of the chart entirely (see Figure 10). This way, visual estimation is still possible, and the reader is able to perceive the approximate size of the gap.
The amount of the transitional region of the chart that is visible can be adjusted. Figure 8 shows varying heights from which the chart can be viewed. While the axes are always bent at an angle of ${90}^{ \circ }$, the height of the camera affects the view angle. A height that is too low results in a high view angle, with scale lines positioned so closely together that they are no longer readable, while a low view angle lessens the impact of the separate regions of the chart. We choose a height that is just high enough to allow the viewer to distinguish between scale lines. The exact appropriate height depends on the resolution and size at which the chart is viewed.
§ 3.3 CIRCULAR PERSPECTIVE CHARTS
As seen in Figure 4, our circular perspective chart is inspired by the perception of tall buildings as viewed from a low vantage point, converging on a singular vanishing point. In this chart, the bars are placed in a closed polygon and extend away from the viewer, converging at a vanishing point at the center of the polygon.
We create our circular perspective chart by bending the horizontal axis to remove areas of unused space and accommodate a larger number of values in a limited viewing space. We can imagine that the entire bar chart is divided into multiple smaller sub-charts that are slanted individually (see Figure 11). The slanted sub-charts are then rotated to form a closed polygon. In the extreme case, each sub-chart is allocated to only one single value (see Figure 4). When there is no preferred clustering to create sub-charts, we use this extreme case as the main design of our circular perspective chart. By wrapping the horizontal axis of the chart to form a closed polygon, the chart is contained within a consistent view.
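
The resulting geometry can be sketched as a one-point perspective projection toward the polygon's center. The camera model below (a viewpoint at distance `cam_dist` above the chart, vanishing point at the center, and the names of all parameters) is a simplified assumption of ours for illustration, not the exact camera setup used in our renderings:

```python
import math

def bar_screen_points(i, n, value, ring_radius=1.0, cam_dist=2.0, unit=1.0):
    """Bar i of n sits on a regular n-gon of radius ring_radius and
    extends away from the viewer toward the central vanishing point.
    A point at data height `value` lies at depth value * unit, so
    one-point perspective scales its screen radius by
    cam_dist / (cam_dist + depth)."""
    angle = 2.0 * math.pi * i / n
    dx, dy = math.cos(angle), math.sin(angle)
    scale = cam_dist / (cam_dist + value * unit)
    base = (ring_radius * dx, ring_radius * dy)
    tip = (ring_radius * scale * dx, ring_radius * scale * dy)
    return base, tip

# Taller bars reach closer to the center but never cross it, which is
# why the chart occupies the same viewing space for any data range.
```
The `scale` factor is exactly the size-constancy cue discussed in Section 2.3: equal increments of data height map to progressively smaller screen increments near the vanishing point.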
The vantage point of the viewer is a potential variable to use in interactive visualization using the circular perspective chart. Figure 12 shows an example of a low and a high viewing height for the chart shown in Figure 4. In the left chart, each scale line represents an increment of 50,000 for a total of twenty-four scale lines. The right chart’s scale lines represent an increment of 150,000 for a total of eight scale lines. To make efficient use of space in the circular perspective chart, the scale should be chosen such that bars are compressed as little as possible while avoiding the issue of closely converging scale lines. The occurrence of this issue is dependent on the size and resolution at which the chart is displayed.
The use of circular chart types is generally discouraged by visualization professionals [7]. However, they are frequently used in practice due to their known 1:1 aspect ratio, independent of the number of bars represented. This property is also present in our circular perspective chart. Luboschik demonstrates that circular chart types are useful in geospatial data visualization [14]. The use of bar charts in spatial data visualization may present limitations in the available viewing space, hence it is worth exploring a circular chart type that has a consistent aspect ratio.
§ 4 EVALUATION
We conducted a within-subjects user study to evaluate the readability and novelty of perspective charts. The study quantitatively measured the accuracy and speed with which users answered a series of questions based on data shown in various charts, and collected participants' opinions of the three types of perspective charts in a qualitative study.
In our user study, we evaluate the following hypotheses:
H1 For data with important variation at multiple scales, small values are more easily readable in a slanted perspective chart than in a traditional bar chart.
H2 The stepped perspective chart allows for faster and easier estimation and comparison of values than broken-axis bar charts and scale-stack bar charts, two other axis-breaking methods for visualizing datasets with large outliers.
Figure 8: A stepped perspective chart viewed at three different heights, resulting in varying view angles and amounts of viewing space occupied by the transitional region of the chart. Left: ${\theta }_{v} = {80}^{ \circ }$. Center: ${\theta }_{v} = {60}^{ \circ }$. Right: ${\theta }_{v} = {40}^{ \circ }$.
Figure 9: A comparison of methods for slanting a traditional bar chart. The left chart's vertical axis is slanted away from the viewer at an angle of ${30}^{ \circ }$ , stretching the axis in the process. The chart on the right is slanted at an angle of ${60}^{ \circ }$ .
Figure 10: An example of a dataset with two clear clusters ${R}_{1}$ and ${R}_{2}$ , and a transitive range ${T}_{1}$ .
H3 In a dataset with important variation at multiple scales, small values are more easily readable in a circular perspective chart than in a traditional bar chart, while occupying a consistent viewing space. The ease-of-use of the chart should be comparable to existing fixed-viewing-space visualizations like the radial bar chart.
We describe one hypothesis per perspective chart design. The hypotheses are designed based on the concept of size constancy. In each block of tasks, smaller values, or values that appear "closer" to the user should be more easily readable due to the increased scale, without sacrificing readability of larger values, which appear farther away from the user and therefore visually smaller.
H1 compares a traditional bar chart to the slanted perspective chart, a simple modification of a traditional visualization that introduces three-dimensional perspective. H2 compares the stepped perspective chart to other chart types that use axis-breaking methods for showing important variation at multiple scales. Since the stepped perspective chart uses a bend in the axis to show a large difference in scale between values, we choose to compare this design to existing chart designs that feature breaks in the y-axis. We evaluate whether the ability to visualize the area within the axis break allows values on either side to be more easily compared. H3 compares the circular perspective chart to a traditional bar chart and a radial bar chart. This is to evaluate the circular perspective chart's performance compared to an existing circular chart type as well as a traditional chart type.
§ 4.1 STUDY DESIGN
We performed studies on an individual basis over the course of approximately 60 minutes per participant. Each participant was shown a series of various types of charts and answered a list of questions based on the data in these charts. Each participant performed tasks based on common visualization task taxonomies [1, 22]. Participants answered questions based on a traditional bar chart, a radial bar chart, a broken-axis bar chart, a scale-stack bar chart [9], as shown in Figure 6, and our slanted, stepped, and circular perspective charts.
§ 4.1.1 TASKS
We designed six types of tasks based on visualization taxonomies [1, 22]. The task types used are:
* Retrieve Value - "What is the population of Franklin?"
* Determine Magnitude Difference - "How much larger is Milwaukee than Oak Creek? (For example, 2x larger, 3.5x larger)"
Figure 11: The circular perspective chart is created by grouping sections of a large chart into several smaller sub-charts. In this example we have five sub-charts. Each of the sub-charts is individually slanted, then rotated to form a closed polygon.
Figure 12: A circular perspective chart showing population of cities in Alberta, with two different scaling factors. In the top chart, each line marks an increment of 50,000. In the bottom chart, each line marks an increment of 150,000.
* Determine Range - "What is the range of the data? (Smallest population to largest population)"
* Find Extremum - "Which city has the smallest population?"
* Filter - "List all cities with a population of less than 100,000."
* Sort - "Sort the cities by population from smallest to largest."
§ 4.1.2 METHODOLOGY
Each participant completed five task blocks (B1 - B5), during which they completed a set of tasks using one chart type followed by a matching set of tasks using a second chart type:
B1: traditional bar chart / slanted perspective chart
B2: scale-stack bar chart / stepped perspective chart
B3: broken-axis bar chart / stepped perspective chart
B4: radial bar chart / circular perspective chart
B5: traditional bar chart / circular perspective chart
Within each block, each participant first performed a series of either four or five different tasks (see Section 4.1.1) using a chart of the first type, then completed a matching set of tasks using a chart of the second type that visualized a different data set. We maintained the same block, task, and dataset order for all participants, but varied the order of the chart types within each block. For example, in B1 half of the participants completed their first five tasks using a traditional bar chart. The other half started by completing the same set of tasks using a slanted perspective chart that visualized the exact same data.
We chose this blocking scheme for the evaluation in order to maintain a pairing between our designs and existing chart types, and tailor task types within the blocks based on the charts used.
Each participant completed a total of forty-two tasks - five tasks (determine magnitude difference between a small and a large value, determine magnitude difference between two small values, filter, find extremum, and retrieve small value) for each chart type in B1, and four tasks for each chart type in B2-5. B2-5 had three tasks of the same type (determine magnitude difference between a small and a large value, determine magnitude difference between two small values, and retrieve small value), and one additional task of a varying type. Participants completed a find extremum task for B2, a retrieving large value task for B3, a sorting task for B4 and a filtering task for B5.
After completing the main set of tasks, participants answered a short post-study questionnaire evaluating each type of chart used in the study. This was followed by a short verbal interview where we further gathered their opinions on the charts used in the study.
§ 4.1.3 PARTICIPANTS
We recruited 24 participants (12 female) using posters distributed across our local campus as well as via word of mouth. We based this cohort size on those used in similar evaluations [8, 9]. Twenty-two participants were students at the time of the study; sixteen participants studied in STEM fields, while the remaining participants worked or studied in arts, business, or social sciences.
A post-hoc power analysis was performed on the results of our evaluation for each task type in each block, using G*Power 3.1 with our sample size of 24 participants. Among these analyses, the lowest reported power was 0.74. Thus, for each task type in our evaluation, we had at least a 74% probability of detecting a true effect, given our sample size of 24.
§ 4.1.4 DATASETS
Since tasks were divided into five different blocks for each participant, we used two unique datasets in each block for a total of ten unique datasets. We chose real datasets that satisfied our use case of clusters of data across a large range. Six datasets showed population data across various regions, two represented pollution data, and two represented precipitation data. We used data that was likely to be unfamiliar to the participants to reduce potential bias resulting from preexisting knowledge about the data. We did this by using data from regions that were not geographically close to the location where the study was performed, or by obscuring the names of the locations in the datasets ("City 1" instead of "Vancouver", etc.).
§ 4.1.5 TEST ENVIRONMENT
Before beginning the study, participants were given an explanation of the consent process, monetary compensation and risks associated with the study, as approved by the Conjoint Faculties Research Ethics Board of the University of Calgary. After indicating their consent, participants completed a short pre-questionnaire, then proceeded to the main portion of the study. Tasks were completed on paper in a prepared booklet provided to participants.
Figure 13: Results showing the total time for participants to complete all tasks in a block of the evaluation. For this and subsequent charts, the bar in the grey box represents the median. The p-value and effect size (r) of each task type are shown in the table. Each box denotes the 95% confidence interval of the median. Each dot represents the task completion time of an individual participant.
Each chart presented to participants included scale and axis labels in a consistent font style and size across charts. For the circular perspective chart, the chart's scale was labeled in the top-left corner of the visualization, as in Figure 4. Bars were unshaded to maintain simplicity in the chart designs. The appearance of charts used in the evaluation is comparable to the charts in Figures 1, 5 and 6.
§ 4.2 RESULTS
We compare task completion time and percentage of error between sets of tasks. The Shapiro-Wilk test indicates that our data does not follow a normal distribution, so we compare methods using a Mann-Whitney U test. As a result, we examine the median error rate for each task. For each test we report effect sizes (r) and p-values. All data is shared on the Open Science Framework (https://osf.io/w3fce/?view_only=23ff1dded@b74363a68ec86419c9c373).
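
A Mann-Whitney U comparison of two conditions can be reproduced as below. This is a self-contained sketch using the normal approximation without tie correction (in practice scipy.stats.mannwhitneyu provides an exact implementation); the effect size uses $r = |z| / \sqrt{n_1 + n_2}$, one common convention for this test, which we assume here for illustration:

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U test via the normal approximation (no tie
    correction). Returns (U, two_sided_p, r) with r = |z|/sqrt(n1+n2)."""
    n1, n2 = len(x), len(y)
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    # rank sum of the first sample, ranks starting at 1
    rank_sum_x = sum(rank for rank, (_, grp) in enumerate(pooled, start=1)
                     if grp == 0)
    u = rank_sum_x - n1 * (n1 + 1) / 2
    mean_u = n1 * n2 / 2
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mean_u) / sd_u
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal CDF
    return u, p, abs(z) / math.sqrt(n1 + n2)
```
Applied to two lists of per-participant completion times, this yields the U statistic, the two-sided p-value, and the effect size r reported in the figures.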
§ 4.2.1 TIME
Timing data was measured as the total time each participant took to complete all the tasks in one block. Participants completed the same number and types of tasks for each chart in a pairwise comparison. This data is shown in Figure 13.
Slanted perspective chart: We did not observe a significant difference in task completion time $(p = 0.687, r = 0.06)$ between the slanted perspective chart and a traditional bar chart.
Stepped perspective chart: We observed that participants' task completion time was significantly faster $(p < 0.001, r = 0.53)$ using our stepped perspective chart (med $= 186.5$ s) than a scale-stack bar chart (med $= 313.5$ s). We did not observe a difference $(p = 0.988, r = 0.00)$ between our stepped perspective chart and a broken-axis bar chart.
Circular perspective chart: We did not observe a significant difference in task completion time $(p = 0.140, r = 0.21)$ between our circular perspective chart and a traditional bar chart. There was also no significant difference $(p = 0.702, r = 0.06)$ between our circular perspective chart and a radial bar chart.
§ 4.2.2 ACCURACY
We examine the absolute percentage of error when determining the statistical significance of accuracy results; we compute this for each task by comparing each participant's numerical response with the true value.
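As a sketch of this scoring, the absolute percentage of error and the per-task median might be computed as follows. Normalizing by the true value is our assumption; the text only states that responses are compared with the true value.

```python
def absolute_percent_error(response, true_value):
    """Absolute percentage of error for a numerical task response:
    |response - true| / true * 100 (normalization by the true value
    is an assumption, not stated explicitly in the paper)."""
    return abs(response - true_value) / true_value * 100.0

def median(xs):
    """Median of a list of error rates, as reported per task in the paper."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0
```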
Figure 14: Accuracy results for block one of our evaluation, comparing the slanted perspective chart (orange) to a traditional bar chart (blue). Results are grouped by task type. Each dot represents the percentage of error of an individual response. For this and subsequent charts, each box denotes the 95% confidence interval of the median.
Figure 15: Accuracy results for block two, evaluating the stepped perspective chart compared to the scale-stack bar chart.
For sorting tasks and filtering tasks, which had non-numerical responses, the error rate was determined by counting the number of mistakes made compared to the true answer and deducting "points" for each incorrect response. For example, in a filtering task to identify the number of cities with population below 100,000, if ten cities fulfill this criterion, a response that gives only nine correct cities would result in an error rate of $1/10 = 10\%$.
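The worked example above can be reproduced with a small helper. Treating a wrongly included item as a mistake, in addition to a missed one, is our assumption; the example in the text only covers missed items.

```python
def filtering_error_rate(response, truth):
    """Error rate for a filtering task with set-valued answers: deduct
    one "point" per mistake, relative to the size of the true answer.

    A mistake is an item that is missed or (by our assumption) wrongly
    included; the symmetric difference counts both kinds at once."""
    truth, response = set(truth), set(response)
    mistakes = len(truth ^ response)
    return 100.0 * mistakes / len(truth)
```

With ten true cities and a response naming nine of them, this gives the 10% error rate from the example.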
We did not observe a significant trend in the directionality of error for any task block. For each block of tasks, results were corrected for multiple comparisons to control the false discovery rate. Median results for each block of the evaluation are shown in Figs. 14 to 18.
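The paper does not name its correction procedure; the Benjamini-Hochberg step-up procedure is a standard way to control the false discovery rate within a block of comparisons, and a sketch under that assumption looks like this:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure at FDR level q.

    A standard sketch (the paper does not state which FDR procedure was
    used). Returns a list of booleans, parallel to pvals, marking which
    hypotheses are rejected."""
    m = len(pvals)
    # Indices of the p-values in ascending order.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k whose p-value is <= q * k / m.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    # Reject every hypothesis ranked at or below k.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject
```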
Slanted perspective chart: Between this and a traditional bar chart, no significant difference was shown in the median error rate for any task type ($p > 0.4$, $r < 0.3$ for all tasks; see Figure 14).
Stepped perspective chart: Between this and a broken-axis bar chart, there was no significant difference in accuracy demonstrated for any task type ($p > 0.6$, $r < 0.3$ for all tasks; see Figure 16).
The stepped perspective chart significantly outperformed the scale-stack bar chart for tasks related to determining the magnitude difference between large and small values (stepped med $= 11.87$, scale-stack med $= 48.72$, $p = 0.047$, $r = 0.32$), determining the magnitude difference between two small values (stepped med $= 10.04$, scale-stack med $= 29.64$, $p = 0.035$, $r = 0.34$), and finding extremum (stepped med $= 0.53$, scale-stack med $= 12.12$, $p < 0.001$, $r = 0.76$). We did not observe a significant difference in error rate for tasks related to retrieving small values from the chart ($p = 0.263$, $r = 0.17$). Results are shown in Figure 15.
Figure 16: Accuracy results for block three, evaluating the stepped perspective chart compared to a broken-axis bar chart.
Figure 17: Accuracy results for block four, evaluating the circular perspective chart compared to a radial bar chart.
Circular perspective chart: No significant difference was shown in the median error rate for tasks performed with a traditional bar chart and our circular perspective chart ($p > 0.7$, $r < 0.2$ for all task types; see Figure 18).
Compared to the circular perspective chart, participants were significantly more accurate using a radial bar chart for tasks related to retrieving values (circular med $= 5.56$, radial med $= 0.00$, $p = 0.031$, $r = 0.35$) and sorting (circular med $= 8.33$, radial med $= 0.00$, $p = 0.031$, $r = 0.36$). No significant difference was demonstrated for tasks determining magnitude difference, either between small and large values ($p = 0.361$, $r = 0.13$) or between two small values ($p = 0.361$, $r = 0.16$). Results are shown in Figure 17.
§ 4.2.3 READABILITY AND NOVELTY
We gathered participants' opinions on the different types of visualizations in a post-study questionnaire. Responses are visible in Figure 19. The two most well-known types of visualizations, the traditional bar chart and the broken-axis bar chart, were the most well-received. The other types of visualizations were less familiar to participants and received lower scores, with the circular perspective chart receiving slightly less favourable scores.
Slanted perspective chart: According to the post-study questionnaire, the slanted perspective chart was the most well-received of our three designs. Two participants described the chart as "straightforward" (P1, P8). P12 stated that they preferred the slanted perspective chart because it allows the user to "see the whole range of data, unchanged, and still read the smaller values."
Stepped perspective chart: Several participants indicated that they felt the design of the scale-stack bar chart was complicated, which made it more difficult to use. P10 felt it was "hard to go back and forth" between scales when using this chart.
Figure 18: Accuracy results for block five, evaluating the circular perspective chart compared to a traditional bar chart.
Circular perspective chart: Two participants felt that the circular perspective chart was more suitable as an artistic representation of data. P6 felt it was an effective chart type for "making an impact."
The circular perspective chart received the most "very hard to use" scores out of all the types of visualizations; nine out of twenty-four participants found it difficult to retrieve large values from the circular perspective chart. P10 noted that they would often "lose count when the perspective got smaller for the higher numbers." Nine out of twenty-four participants indicated that the effect of the perspective was too extreme in the circular perspective chart.
§ 5 DISCUSSION
When compared to traditional chart types, we saw similar results for our three different designs; none of these comparisons showed a significant difference in accuracy rate for any of the evaluated task types. This suggests that due to size constancy, the use of perspective did not impact participants' ability to visually interpret values.
Slanted perspective chart: While we initially hypothesized that reading small values would be easier for participants using a slanted perspective chart than in a traditional bar chart, the results did not demonstrate a significant difference in the accuracy or completion time of the two charts. Since our introduced charts are unfamiliar visualizations for the general public, it is unsurprising that participants were more comfortable with traditional methods, as indicated by the post-study evaluation and interview.
Participants performed similarly with both types of charts. This demonstrates that the use of perspective did not hinder their ability to perform tasks, due to size constancy. However, there was no evidence that participants retrieved small values more accurately with our designs as we hypothesized they would. In fact, median error rate was exactly the same for tasks of this type with both the slanted perspective chart and a traditional bar chart.
Stepped perspective chart: As with the slanted perspective chart, the stepped perspective chart showed no significant difference in accuracy or timing compared to the traditional comparison method, in this case the broken-axis bar chart.
Participants were able to complete tasks significantly more quickly and accurately with the stepped perspective chart than the scale-stack bar chart, another unfamiliar visualization. This reinforces that axis-breaking methods have a negative effect on visual perception, as suggested by Correll et al. [3]. These results also suggest that the use of perspective allowed for a more intuitive understanding of the visualization than the scale-stack bar chart for representing datasets with important variation at different scales. Charts that utilize perspective and size constancy may be a viable alternative to axis-breaking techniques for data visualization.
Circular perspective chart: The circular perspective chart did not show a difference in accuracy compared to a traditional bar chart. However, the comparison fixed-viewing-space visualization, the radial bar chart, showed significantly higher accuracy than the circular perspective chart for sorting and value-retrieving tasks.
Figure 19: Qualitative results of our user study evaluation.
A previous evaluation performed by Goldberg and Helfman suggested that task completion speed was significantly lower in circular chart types than in traditional bar charts, due in part to the placement of labels relative to the chart's data [7]. One of the evaluated charts was a radial area graph, which has similar label placement features to the circular perspective chart. However, we have not observed a significant difference in task completion time between the circular perspective chart and either the radial bar chart or a traditional bar chart.
Participant feedback about the circular perspective chart was mixed but generally more negative than the other charts shown in the study; some participants felt that the chart was visually interesting but perhaps not suitable for retrieving data in the same way as the other evaluated charts. In the interview portion of the evaluation, several participants indicated that for larger values, the bars became too severely compressed in the circular perspective chart.
Based on error rates and participant feedback, it seems that the circular perspective chart design is not often suitable for accurate reading of values, as some participants stated the design was disorienting or confusing. However, participants also felt that the design was impactful, and may be appropriate for artistic visualizations.
While it is promising that participants achieved similar accuracy with our designs and with traditional chart types, our evaluation had limitations, and further studies could evaluate the potential of size constancy applied to data visualization. Our evaluation has demonstrated that it is a viable solution to certain types of visualization challenges.
§ 6 CONCLUSION
We have introduced three novel chart designs, called perspective charts, to address limitations of traditional bar charts caused by undesirable scaling factors in a fixed viewing space. Our designs can open up new possibilities for visualizing datasets with multi-scale variation using the natural perception of size constancy. We provide design rationale for our three chart designs and evaluate their usability in a user study.
Evaluation showed no significant difference in performance between traditional visualizations (a traditional bar chart and a broken-axis bar chart) and our slanted and stepped perspective charts. This suggests that the use of perspective did not affect participants' ability to perform tasks in these types of charts.
Participants performed tasks significantly more quickly and accurately with the stepped perspective chart than with the scale-stack bar chart, another recent visualization design intended to represent datasets with important data at multiple scales, in three out of four task types. The circular perspective chart showed less accurate results than a radial bar chart for some tasks.
Our circular perspective chart was generally not well-received by participants. Some participants raised concerns about the compression applied to large values in the chart, and others expressed that it may be more suitable as an artistic data visualization.
Results for H1 and H2 indicate that the use of perspective projection in data visualization is worth further examination. The designs of the slanted perspective chart and stepped perspective chart were positively received by participants and performed comparably to traditional methods in our evaluation. The stepped perspective chart in particular performed well compared to an existing visualization design, the scale-stack bar chart, for our use case of visualizing data with important variation at multiple scales.
Further evaluation could reinforce the results observed here. Based on our evaluation results, the slanted and stepped perspective charts particularly merit more in-depth evaluation as alternatives to existing chart types.
§ 7 ACKNOWLEDGMENTS
The authors would like to thank Lora Oehlberg for her guidance with the analysis of our evaluation results.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/o18PAn04GD/Initial_manuscript_md/Initial_manuscript.md
# "I want to recycle batteries, but it's inconvenient": A Study of Non-rechargeable Battery Recycle Practices and Challenges
Category: Research
## Abstract
Non-rechargeable batteries are widely used in electronic devices and can cause environmental issues if not recycled properly. However, little is known about the challenges that people might encounter when they recycle non-rechargeable batteries. We first conducted an online survey with 106 participants to understand their practices and challenges of reusing and recycling non-rechargeable batteries. We then interviewed 12 participants to understand the potential reasons behind their behaviors. Our results show that although it is common to store used batteries temporarily, many people eventually do not recycle them for reasons such as the inconvenience of recycling, not knowing how to recycle batteries, and the high perceived effort of recycling. Moreover, we highlight the challenges associated with their common battery reuse and recycle strategies. We present design considerations and potential solutions for both individuals and communities to promote sustainable battery recycle behaviors.
Index Terms: Human-centered computing-Human-computer interaction-Empirical studies in HCI;
## 1 INTRODUCTION
Non-rechargeable batteries, also known as primary batteries, are commonly used in portable electronic devices (e.g., remote controls, stereo headsets), and their market is expected to grow at a compound annual growth rate of around 3% to nearly \$19 billion by 2022 [2]. Americans purchase nearly 3 billion primary batteries yearly [1].
Although recycling batteries is beneficial to the environment [3] and is encouraged by governments (e.g., [1]), only 36% of used batteries were estimated to be collected, and 29% recycled, in the European Union in 2015 [1]. Each person in the US discards 8 primary batteries per year [1]. By 2025, approximately 1 million metric tons of spent battery waste will have accumulated [12]. However, little is known about how people recycle used batteries and what the potential challenges are. To fill this gap, we sought to answer the following two research questions (RQs):
- RQ1: What are the practices and challenges of reusing and recycling used batteries?
- RQ2: What are design opportunities to improve using and recycling used batteries?
We first conducted a survey study with 106 participants living in North America to understand their practices and challenges of reusing and recycling used batteries. Informed by the results, we then conducted in-depth interviews to explore people's willingness and barriers to adopting environmentally friendly approaches to dealing with used batteries, and the major factors that affect participants' decision-making when reusing and recycling batteries.
Our results show that although it is common to store used batteries temporarily, many people eventually do not recycle them for reasons such as the inconvenience of recycling, a lack of information about recycling batteries, and the high perceived effort of recycling. Moreover, practices of reusing batteries depend on financial status, motivation to behave in environmentally friendly ways, and the availability of tools (e.g., battery testers) and instructions. To our knowledge, this is the first study that provides both a quantitative and a qualitative understanding of battery reuse and recycling practices and challenges from users' perspectives.
## 2 BACKGROUND AND RELATED WORK
### 2.1 Regulations of Collecting Batteries
Previous research argued that people should avoid simply discarding household batteries along with municipal solid waste, because collection, separation, and recycling processes are accessible worldwide [4]. For example, Japan, the United States, and European countries have established nationwide official recycling programs with collection centers. End users form the first link of the collection chain and must return spent batteries, while distributors and manufacturers form the second link and are responsible for collecting batteries free of charge. Therefore, we decided to explore individual behaviors of dealing with batteries and how the community or government helps individuals engage in environmentally friendly activities around battery recycling.
A study [15] found that the environmental impact of collection activities, closely bound to transportation, outweighed the environmental benefit. To minimize the negative impact of transportation, several countries in Europe applied "integrated waste management" to combine the collection of batteries with that of other recyclable materials. In Sweden, for example, trucks transported both paper and batteries [15]. A project in the Netherlands extracted old batteries from household waste with magnets [3]. We also aimed to study how, given people's living environments and behaviors, "integrated waste management" can be practiced to reduce the side effects of battery recycling activities.
### 2.2 Individual Battery Recycle Behavior
Researchers investigated people's environmental attitudes and opinions and found a positive, though fairly tenuous, relationship between general environmental attitudes and recycling actions [8, 11, 14, 18]. Several studies suggested that to increase citizens' participation in recycling, it is useful to educate the public about the significance of recycling and to inform them of how and where to recycle [16, 17]. Previous research suggested that the major reasons people refuse to recycle used batteries are the cost of time and effort and a lack of material reward [13, 19]. More specifically, consumers are more willing to recycle objects when the recycling equipment is convenient to access and use [20]. Moreover, when the disposal method is integrated into everyday life, individuals feel encouraged to take actions that they assume are sustainable [19]. According to interviews with families with children in the Netherlands [7], most families expressed dissatisfaction that recycle bins are not always available when they recycle household items, like glass or plastic bottles. Xiao et al. [7] conducted a survey exploring a particular generation's actions and thoughts on recycling e-waste, as well as the barriers to recycling, and found that individuals' practices vary widely. In their analysis, they classified recycling actions into five categories, including transferring a product to other users, returning it to the manufacturer, and reusing the object. Accordingly, we aimed to further investigate the major reasons and barriers for reusing and recycling batteries in everyday life among North American residents. This would provide design implications for human-computer interaction researchers on how best to design tools and methods that assist people with reusing and recycling batteries.
## 3 SURVEY
### 3.1 Survey Design
The survey included 14 multiple-choice, Likert-scale, and short-answer questions, organized into themes to elicit data about participants' practices and opinions regarding the devices with non-rechargeable batteries that they use, reusing batteries, and recycling batteries, as well as their knowledge of non-rechargeable battery regulations, which differ from place to place.
### 3.2 Procedure and Participants
We distributed the survey via email lists from a university and social media platforms, such as Facebook and Slack, between March and November 2020. We received 107 responses, removed one duplicate response, and performed the analyses on the 106 valid responses.
79% of the participants (N=84) were from the USA and 21% (N=22) were from other countries. 54 participants were between 18 and 25, 42 between 26 and 35, 7 between 36 and 50, and 4 above 50.
### 3.3 Findings
#### 3.3.1 Reusing Batteries
Participants were presented with a scenario, "the TV remote control uses three single-use batteries, and you find that the batteries cannot provide enough power," and provided their inferences about the batteries' conditions as well as their potential solutions.
While about a third (32%) of the participants believed that all the batteries were completely drained, the majority (67%) believed that some of the batteries were only partially drained. Nonetheless, only 52% chose to keep some of the batteries for later reuse, and 41% chose to replace all the batteries with new ones at once. This highlights a gap between participants' understanding of the used batteries and their potential actions to deal with them.
One major challenge of reusing batteries is to find out how much power is left in a used battery. However, 74% of the participants reported having no or little experience with testing the remaining power of a used battery. Only 2 participants (less than 2%) reported having such experience.
#### 3.3.2 Recycling Batteries
Participants were asked to report whether and how they might recycle used batteries. 77% (N=82) of the participants chose to "store the used batteries temporarily", 32% (N=32) chose to "throw the used batteries into a regular trash can", and only 14% (N=15) chose to "take the used batteries to a recycling center". The most frequently mentioned barriers to visiting a recycling center were as follows: I do not know where to recycle the batteries (N=69), I do not collect many batteries (N=48), it is inconvenient to visit a recycling center (N=43), and I do not have incentives to do so (N=19).
Furthermore, there were challenges to recycling batteries: 70% of the participants did not know the regulations and laws about recycling batteries in their local area, and only 22% had sought resources and information about recycling batteries.
## 4 INTERVIEWS
To further understand the challenges of reusing and recycling used batteries and identify opportunities to improve reusing and recycling practices, we conducted a semi-structured interview study.
### 4.1 Interview Design
The interview consisted of 4 parts. In Part 1, we described the difference between non-rechargeable and rechargeable batteries to avoid confusion about the concepts, and asked a kick-off question about recent experiences of using batteries to prepare participants for exploring the problems they encounter in daily life. Part 2 focused on people's practices and knowledge under two typical scenarios, to learn their practices and willingness around replacing and reusing batteries. Part 3 covered previous experience in recycling batteries; we also asked about experiences with other recyclable objects and looked for good and bad examples of making personal recycling convenient. Part 4 introduced our idea that people could donate old batteries to, or receive them from, others to make full use of the batteries; we sought participants' opinions on this idea and identified factors that affect their decision-making.
### 4.2 Participants
We recruited 12 participants from the survey respondents, social media platforms, and word-of-mouth. 4 participants were identified as males and 7 as females; 7 participants were 18-24 years old, 4 were 25-35 years old, and 1 was 36-50 years old. 11 participants lived in the US and 1 in Canada. 5 participants had recycling experience in more than 1 country and 1 participant had related experience in 2 states in the US. Each participant was compensated with \$10.
### 4.3 Procedure
The study obtained approval to conduct the interviews from the Institutional Review Board of the Rochester Institute of Technology. We conducted the study with participants remotely using an online meeting platform, such as Zoom or Google Meet. Each interview session lasted about 30 to 40 minutes. All interview sessions were audio-recorded using the Voice Memos application, and the interview content was transcribed using Otter.ai.
### 4.4 Analysis
Two authors first performed open coding and discussed disagreements to reach consensus. They then used affinity diagramming to derive themes emerging from the codes.
### 4.5 Findings
Our analysis revealed the rationales and challenges associated with the practices of reusing and recycling batteries as well as the potential design opportunities.
#### 4.5.1 Reusing Batteries
Battery usage behaviors vary depending on people's tolerance of the perceived interruption when products run out of power. Participants tend to change all batteries at once for products whose running out of power would cause high perceived disruption to their user experience, for example, the controller of a video game console. In contrast, they would be more willing to change only one of the batteries for products that cause low perceived interruption when running out of power, such as a TV remote control.
Our survey results show that when the batteries cannot serve a product, 67% of the respondents believe that the batteries are only partly used. In the interview, we investigated whether participants were willing to measure the remaining voltage in the battery.
Only two out of the 12 participants indicated that they had battery testers to measure the remaining power or voltage in the batteries and decide whether they would reuse the batteries. All other participants showed little interest in knowing the leftover power in the batteries and indicated that they would replace all batteries together, for several reasons. First, it was perceived to be time-consuming to test against new batteries and replace all the old batteries one by one. Second, they did not feel the need to save batteries, particularly when they did not have many devices using single-use batteries.
#### 4.5.2 Recycling Batteries
70% of the survey respondents were not confident about their knowledge of battery recycling regulations. The interview study further explored people's willingness to learn the related regulations.
Willingness to learn about regulations: Seven out of the twelve participants indicated that they would like to learn the regulations about recycling batteries; two participants did not care much about the regulations; three would not actively seek out the regulations but would read them when encountering them. P10 mentioned: "I don't actively seek it on my own initiative, but if I accidentally see it, I would click in."
There was certain content that participants were interested in: 7 of 12 participants wanted to learn where they could discard or recycle batteries. Beyond that, they mentioned laws, specific rules and regulations, and knowledge about how batteries are processed.
8 out of 12 participants were aware that they should recycle non-rechargeable batteries or that they could not discard these batteries in the regular trash. However, only 1 participant knew where to discard non-rechargeable batteries and the regulations where they lived. Only 2 participants had experience recycling non-rechargeable batteries. 4 participants indicated that they knew where to discard non-rechargeable batteries in other countries or districts, including China, Canada, Taiwan, and Turkey.
|
| 106 |
+
|
| 107 |
+
Challenge. In the survey, only 16% of participants had experience visiting recycling locations to recycle batteries. In the interviews, we explored why participants did not go to recycling locations (willingness) and what would make recycling easy for them (challenge).

All the interview participants had attempted, to some extent, to dispose of used batteries in the right place, either currently or in the past. P5 said, "I know that they should be recycled in some way but I don't know where. So I threw it in the trash." However, only P8 had the habit of recycling batteries, because his workplace had a disposal location.

P4 used to recycle batteries but felt it was hard to recycle batteries after she moved to a new place 6 years ago, and she mentioned her emotions: "Once we (our family) were very distressed about disposing of the battery, but we didn't do deep search on the Internet. I feel this is a very simple thing, but so hard to find one. ... We used to live in a small town. There was a university near where we lived, and there were battery recycling places in it."

Two participants indicated that recycling batteries was inconvenient: because they did not use many batteries, recycling them felt like too much work. The other two participants thought that recycling batteries was a matter for state laws and regulations. Although P11 was unfamiliar with the regulations and laws, he felt it was reasonable for residents to recycle batteries because "this is how we move the societies forward by being strict on these environmentally friendly things that aren't too difficult to do." P3 felt that the current law was not strict enough to regulate residents' behavior, so people are less likely to care about it. He mentioned, "People are not just willing to push this into law so they are less likely to care about this. ...I don't care as much because it's not a law."

#### 4.5.3 Design Opportunities

Our interviews also revealed three design opportunities.

Making recycling convenient to people. Locations where participants and their family members recycled batteries included convenience stores, universities, apartment leasing offices, electronic retailer stores, and their workplaces. One common characteristic of these locations was that they were all convenient for participants to visit. In particular, when they had to run an errand near these recycling locations, they were more willing to visit them.

Participants proposed several places to position battery recycling facilities (e.g., a bin): 1) places near where people live, for example, next to regular trash drop-off locations in a residential community; 2) locations near or in the stores that people visit regularly, for example, grocery stores, wholesale stores, or convenience stores; 3) libraries: P4 and P12 both mentioned that libraries are places that families with children and students often visit; and 4) recreational centers where people do sports and attend recreational classes.

Learning from practices of recycling other items. We asked participants about other items that they recycled in their daily lives and why they were able to recycle them. The frequently recycled items included paper and newspaper, cardboard, and plastic bottles and cans. The main reason why these items got recycled often was that participants could simply put these items next to their regular trash bins and wait for waste management to collect them. This finding shows again that convenience is key to recycling. Moreover, small rewards (e.g., store credits) were given by certain grocery stores to encourage people to recycle plastic bottles and cans. However, P5 felt that rewards might not work for recycling batteries because the tedious process of collecting and bringing batteries to certain locations as well as the hygienic issues associated with used batteries outweigh the small rewards grocery stores could provide.

Making information about how and where to recycle batteries easy to access. 65% of the survey participants did not know where they should send batteries to. Thus, we investigated the reasons in the interviews. Results show that participants sought out battery recycling regulations and locations primarily from their social circles, such as their parents, spouses, friends, and landlords. Surprisingly, few participants searched online (e.g., Google) to find battery recycling regulations and locations in their local areas. "Because we cannot find it via the internet, at least we tried to google...but we couldn't find it." (P5)

Participants proposed six approaches to delivering battery recycling information that would be convenient for them to spot: 1) on the packages of the devices that use batteries, which could show information about how to recycle batteries or a QR code that can be easily scanned with a smartphone; 2) on the battery brands' websites, where an important consideration is to make sure such websites appear at the top of the search results; 3) local governments, which could inform their residents of relevant information via text messages, emails, news reports, or bulletins; 4) landlords, housing agents, or dorm managers, who could be helpful for people who recently moved to the area and are not familiar with local battery recycling regulations and guidelines; 5) non-profit organizations, which could use public promotion activities and educational videos to show people the alarming consequences of discarding batteries without properly recycling them; and 6) waste management companies, which could also help residents recycle batteries, for example by setting up a hotline.

The preferred formats to deliver battery recycling regulations and guidelines were info-graphics, short videos, social media posts, or advertisements. Info-graphics could be displayed on a battery's packaging or on the packaging of a product that uses batteries.

## 5 DISCUSSION

Informed by the findings of both the survey and interview studies, we present design considerations (DCs) for designers and researchers to consider when helping people better reuse and recycle batteries.

DC1: Help Users Understand the State of Used Batteries and How They Could Reuse Them. Our studies found two prominent challenges of reusing batteries: 1) it was unknown whether a battery was fully drained or had some power left; and 2) it was unclear what other products the batteries could be reused for. Reasons for these challenges included lacking the tools and knowledge to test the power of a used battery and having no easy access to information about possible products that could take used batteries.

To help implement this design consideration, we propose the conceptual designs of a portable battery tester and its companion mobile app to illustrate how to lower the barrier for the general public to test used batteries and find information about how to reuse and recycle them. Figure 1 (a) and (b) show two views of the battery tester, which contains three slots for three common types of non-rechargeable batteries. The battery tester could connect with a smartphone via Bluetooth and send the test results to be displayed in the companion mobile app. Figure 1 (c) and (d) show the UIs of the app, which display the battery test results and the recommended actions for two batteries in good and bad conditions respectively. Making all of the information needed for reusing and recycling batteries available in one app would save users the effort and time of performing scattered searches online, which were reported to be less effective by our participants and also prior studies [16, 17].

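As a minimal sketch of the recommendation logic such a companion app might implement: the voltage threshold, the 1.5 V nominal value, and the product suggestions below are illustrative assumptions on our part, not part of the proposed design.

```python
# Illustrative sketch: map a measured open-circuit voltage of a 1.5 V
# primary cell to a recommended action, as a companion app might do.
# The 0.8 ratio cutoff and the product suggestions are assumptions
# for illustration only, not values from the study.

def recommend_action(voltage: float, nominal: float = 1.5) -> dict:
    """Return a recommendation for a tested non-rechargeable cell."""
    if voltage <= 0:
        return {"state": "dead", "action": "recycle"}
    ratio = voltage / nominal
    if ratio >= 0.8:  # assumed threshold for "still usable"
        return {
            "state": "good",
            "action": "reuse",
            # Low-drain devices that tolerate partially used cells.
            "suggestions": ["TV remote", "wall clock", "kitchen timer"],
        }
    return {"state": "drained", "action": "recycle"}


print(recommend_action(1.45)["action"])  # prints "reuse"
print(recommend_action(1.05)["action"])  # prints "recycle"
```

In a real app the threshold would depend on battery chemistry and the target device's cutoff voltage, which is exactly the kind of knowledge the app could encode so users do not have to.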
Figure 1: Battery Tester Prototype and Companion Mobile App: (a) a side view of the prototype; (b) a top view of it; (c) app UI showing that the tested AAA battery is in good condition and offering a list of products in which it could be reused; (d) app UI showing that the tested AA battery is ready to be disposed of or recycled and offering information about how it could be properly handled.

DC2: Make recycling batteries integrated into people's routine lives. Our studies show that inconvenience was one key barrier to recycling batteries. This finding corroborates prior findings that the cost of time and effort and the lack of material rewards hinder people from recycling batteries [13, 19]. Our participants who did recycle batteries often had relatively easy access to recycling locations, such as their workplaces, nearby stores, universities, and community and homeowner's associations. This finding echoes the suggestion of previous research [19].

To help implement this design consideration, we recommend HCI designers and researchers consider the successful practices of recycling other materials, such as cardboard, cans, and plastic bottles, and embed battery recycling in people's everyday lives [9]. For example, waste management companies could take recyclable materials weekly along with the trash, and communities could set aside a separate space next to the regular trash for residents to drop off recyclable materials. Furthermore, some grocery stores have recycling facilities for people to recycle bottles and cans and even offer small rewards. These potential example solutions align with the concept of "integrated waste management" [3] and should be considered when integrating the collection of batteries with other waste streams to minimize not only individuals' recycling efforts but also the negative impact associated with the transportation of batteries.

DC3: Build community support to help people recycle batteries. Our studies also found that those who did manage to recycle batteries received some community support. For example, one interviewee mentioned that her previous community manager would notify the residents to carry used batteries to the community office once or twice a year.

To help implement this design consideration, we propose to build online community platforms for people to share information about and exchange used batteries. A successful example platform for people to exchange used items is Craigslist [6]. There are several challenges to overcome. First, it remains unclear how to motivate people to participate in such platforms. One approach might be to gamify the process to make participation fun and rewarding. Second, unlike other used items, used batteries may pose hazards if not handled properly during the sharing process. Similar to our proposed design in Figure 1, it is worth designing simple approaches to checking batteries' conditions.

## 6 LIMITATIONS AND FUTURE WORK

First, this short paper focused on understanding the practices and challenges of reusing and recycling batteries and deriving design considerations. Although we also offered potential solutions (e.g., Figure 1), they are yet to be fully implemented and evaluated with users. Second, the individual's practice of recycling batteries may vary [7]. In our interviews, we also noticed differences between participants who lived alone and those who lived with their families or roommates. More research is needed to investigate how battery reuse and recycle practices might be affected by social factors, such as the number of people that they live with. Lastly, our analysis highlights the importance of informing individuals of recycling locations and times in their local areas. However, our participants often had difficulty in finding such information. Although many online resources (e.g., earth911 [10] and call2recycle [5]) provide such information, future research should further investigate the barriers in users' information searching process and design tools to help users easily find such information.

## 7 CONCLUSION

We have conducted a survey study and an interview study to understand the practices and challenges of reusing and recycling batteries. Our results revealed various barriers to reusing and recycling batteries. First, due to the lack of information, people do not know efficient ways to recycle batteries. Even when they have good intentions of adopting environmentally friendly practices, people are unwilling to invest much time and effort unless they can easily access the necessary information. Regarding reusing or making full use of batteries, people have difficulty figuring out the remaining power of batteries efficiently. As a result, many would discard batteries that still have power left, which leads to wasted energy. Our analysis also uncovered opportunities to lower the barriers to reusing and recycling batteries. Finally, we presented three design considerations and discussed potential solutions.

## REFERENCES

[1] Universal waste, Oct 2020.

[2] Primary batteries market global opportunities and strategies to 2022, Jan 2021.

[3] A. Bernardes, D. C. R. Espinosa, and J. S. Tenório. Recycling of batteries: a review of current processes and technologies. Journal of Power Sources, 130(1-2):291-298, 2004.

[4] A. M. Bernardes, D. C. R. Espinosa, and J. A. S. Tenório. Collection and recycling of portable batteries: a worldwide overview compared to the Brazilian situation. Journal of Power Sources, 124(2):586-592, 2003.

[5] Call2Recycle. Call2Recycle - United States. https://www.call2recycle.org/, 2021.

[6] craigslist. craigslist: jobs, apartments, for sale, services, community, and events. https://craigslist.org/, 2021.

[7] P. de Kruyff, A. Steentjes, and S. Shahid. The alkaline arcade: a child-friendly fun machine for battery recycling. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, pp. 1-2, 2011.

[8] T. Domina and K. Koch. Convenience and frequency of recycling: implications for including textiles in curbside recycling programs. Environment and Behavior, 34(2):216-238, 2002.

[9] P. Dourish. HCI and environmental sustainability: the politics of design and the design of politics. In Proceedings of the 8th ACM Conference on Designing Interactive Systems, pp. 1-10, 2010.

[10] Earth911. Earth911 - more ideas, less waste. https://earth911.com/, 2021.

[11] R. J. Gamba and S. Oskamp. Factors influencing community residents' participation in commingled curbside recycling programs. Environment and Behavior, 26(5):587-612, 1994.

[12] A. Garg, L. Wei, A. Goyal, X. Cui, and L. Gao. Evaluation of batteries residual energy for battery pack recycling: Proposition of stack stress-coupled-AI approach. Journal of Energy Storage, 26:101001, 2019.

[13] R. Hansmann, P. Bernasconi, T. Smieszek, P. Loukopoulos, and R. W. Scholz. Justifications and self-organization as determinants of recycling behavior: The case of used batteries. Resources, Conservation and Recycling, 47(2):133-159, 2006.

[14] J. Hornik, J. Cherian, M. Madansky, and C. Narayana. Determinants of recycling behavior: A synthesis of research results. The Journal of Socio-Economics, 24(1):105-127, 1995.

[15] D. Lisbona and T. Snee. A review of hazards associated with primary lithium and lithium-ion batteries. Process Safety and Environmental Protection, 89(6):434-442, 2011.

[16] N. Mee. A communications strategy for kerbside recycling. Journal of Marketing Communications, 11(4):297-308, 2005.

[17] P. O. D. Valle, E. Reis, J. Menezes, and E. Rebelo. Behavioral determinants of household recycling participation: the Portuguese case. Environment and Behavior, 36(4):505-540, 2004.

[18] J. Vining and A. Ebreo. What makes a recycler? A comparison of recyclers and nonrecyclers. Environment and Behavior, 22(1):55-73, 1990.

[19] L. Wagner. Overview of energy storage technologies. In Future Energy, pp. 613-631. Elsevier, 2014.

[20] X. Zhang and R. Wakkary. Design analysis: understanding e-waste recycling by Generation Y. In Proceedings of the 2011 Conference on Designing Pleasurable Products and Interfaces, pp. 1-8, 2011.

papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/o18PAn04GD/Initial_manuscript_tex/Initial_manuscript.tex
§ "I WANT TO RECYCLE BATTERIES, BUT IT'S INCONVENIENT": A STUDY OF NON-RECHARGEABLE BATTERY RECYCLE PRACTICES AND CHALLENGES

Category: Research

§ ABSTRACT

Non-rechargeable batteries are widely used in electronic devices and can cause environmental issues if not recycled properly. However, little is known about the challenges that people might encounter when they recycle non-rechargeable batteries. We first conducted an online survey with 106 participants to understand their practices and challenges of reusing and recycling non-rechargeable batteries. We then interviewed 12 participants to understand the potential reasons behind their behaviors. Our results show that although it is common to store used batteries temporarily, many eventually do not recycle them for reasons such as the inconvenience of recycling, not knowing how to recycle batteries, and the high perceived effort of recycling. Moreover, we highlight the challenges associated with their common battery reuse and recycle strategies. We present design considerations and potential solutions for both individuals and communities to promote sustainable battery recycle behaviors.

Index Terms: Human-centered computing-Human-computer interaction-Empirical studies in HCI;

§ 1 INTRODUCTION

Non-rechargeable batteries, also known as primary batteries, are commonly used in portable electronic devices (e.g., remote controls, stereo headsets), and their market is expected to grow at a compound annual growth rate of around 3% to nearly $19 billion by 2022 [2]. Americans purchase nearly 3 billion primary batteries every year [1].

Although recycling batteries is beneficial to the environment [3] and is encouraged by governments (e.g., [1]), only 36% of used batteries were estimated to be collected and 29% recycled in the European Union in 2015 [1]. Collectively, each person in the US discards 8 primary batteries per year [1]. By 2025, approximately 1 million metric tons of spent battery waste will have accumulated [12]. However, little is known about how people recycle used batteries and what the potential challenges are. To fill this gap, we sought to answer the following two research questions (RQs):

* RQ1: What are the practices and challenges of reusing and recycling used batteries?

* RQ2: What are the design opportunities to improve the reuse and recycling of used batteries?

We first conducted a survey study with 106 participants living in North America to understand their practices and challenges of reusing and recycling used batteries. Informed by the results, we further conducted in-depth interviews to explore people's willingness and barriers to adopting environmentally friendly approaches to dealing with used batteries, as well as the major factors that affect participants' decision-making about reusing and recycling batteries.

Our results show that although it is common to store used batteries temporarily, many eventually do not recycle them for reasons such as the inconvenience of recycling, lack of information about recycling batteries, and the high perceived effort of recycling batteries. Moreover, the practices of reusing batteries depend on financial status, the motivation to conduct environmentally friendly behavior, and the availability of tools (e.g., battery testers) and instructions. To our knowledge, this is the first study that provides both a quantitative and qualitative understanding of battery reuse and recycle practices and challenges from users' perspectives.

§ 2 BACKGROUND AND RELATED WORK

§ 2.1 REGULATIONS OF COLLECTING BATTERIES

Previous research argued that people should avoid simply discarding household batteries along with municipal solid waste because collection, separation, and recycling processes are accessible worldwide [4]. For example, Japan, the United States, and European countries have nationwide official recycling programs with collection centers. End users are the first part of the collection chain and must return spent batteries, while distributors and manufacturers are responsible for collecting batteries free of charge. Therefore, we decided to explore individual behaviors of dealing with batteries and how the community or government helped individuals engage in environmentally friendly activities regarding battery recycling.

A study [15] found that the environmental impacts of collection activities, closely bound with transportation, could outbalance the benefit to the environment. To minimize the negative impact of transportation, several countries in Europe applied the method of "integrated waste management" to integrate the collection of batteries with other recyclable materials; in Sweden, for example, trucks transported both paper and batteries [15]. A project in the Netherlands would extract old batteries from household waste with magnets [3]. We also aimed to study how, given people's living environments and behaviors, "integrated waste management" could be practiced to reduce the side effects generated by battery recycling activities.

§ 2.2 INDIVIDUAL BATTERY RECYCLE BEHAVIOR

Researchers investigated people's environmental attitudes and opinions and found a positive, though fairly tenuous, relationship between general environmental attitudes and recycling actions [8, 11, 14, 18]. Several research studies suggested that to increase citizens' participation in recycling, it is useful to educate the public about the significance of recycling and to inform them of how and where to recycle [16, 17]. Previous research suggested that the major reasons why people refuse to recycle used batteries are the cost of time and effort and a lack of material reward [13, 19]. To be more specific, consumers are more willing to recycle objects when it is convenient to access and use the recycling equipment [20]. Moreover, when the disposal method is integrated into everyday life, individuals feel encouraged to take actions that they assume are sustainable [19]. According to interviews with families with children in the Netherlands [7], most families expressed dissatisfaction that recycle bins are not always available when they recycle household items, like glass or plastic bottles. Zhang and Wakkary [20] conducted a survey that explored Generation Y's actions and thoughts on recycling e-waste, as well as the barriers to recycling. The results show that individuals' practices vary largely. In the analysis process, they classified the recycling actions into five categories, including transferring a product to other users, returning it to the manufacturer, and reusing the object. Accordingly, we aimed to further investigate the major reasons for and barriers to reusing and recycling batteries in everyday life for North American residents. This would provide design implications for human-computer interaction researchers to best design tools and methods to assist people with reusing and recycling batteries.

§ 3 SURVEY

§ 3.1 SURVEY DESIGN

The survey included 14 multiple-choice, Likert-scale, and short-answer questions, which were organized into themes to elicit data about participants' practices and opinions regarding the devices with non-rechargeable batteries that they use, reusing batteries, and recycling batteries, as well as their knowledge of non-rechargeable battery regulations, which differ from place to place.

§ 3.2 PROCEDURE AND PARTICIPANTS

We distributed the survey via email lists from a university and social media platforms, such as Facebook and Slack, between March and November 2020. We received 107 responses, removed one duplicate response, and performed the analyses on the 106 valid responses.

79% of the participants (N=84) were from the USA and 21% (N=22) were from other countries. 54 participants were between 18 and 25, 42 were between 26 and 35, 7 were between 36 and 50, and 4 were above 50.

§ 3.3 FINDINGS

§ 3.3.1 REUSING BATTERIES

Participants were presented with a scenario: "the TV remote control uses three single-use batteries, and you find that the batteries cannot provide enough power." They then provided their inferences about the batteries' conditions and their potential solutions.

While about a third (32%) of the participants believed that all the batteries were completely drained, the majority (67%) of them believed that some of the batteries were only partially drained. Nonetheless, only 52% chose to keep some of the batteries for later reuse, and 41% chose to change the batteries with new ones all at once. This highlights a gap between participants' understanding of the used batteries and their potential actions to deal with them.

One major challenge of reusing batteries is to find out how much power is left in a used battery. However, 74% of the participants reported having no or little experience with testing the remaining power of a used battery. Only 2 participants (less than 2%) reported having such experience.

§ 3.3.2 RECYCLING BATTERIES

Participants were asked to report whether and how they might recycle used batteries. 77% (N=82) of the participants chose to "store the used batteries temporarily", 32% (N=32) chose to "throw the used batteries into a regular trash can", and only 14% (N=15) chose to "take the used batteries to a recycling center". The most frequently mentioned barriers to visiting a recycling center were as follows: I do not know where to recycle the batteries (N=69), I do not collect many batteries (N=48), it is inconvenient to visit a recycling center (N=43), and I do not have incentives to do so (N=19).

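As a quick sanity check, the reported shares can be reproduced from the raw counts and the 106 valid responses (a minimal sketch; the helper name `share` is ours, and the question was multi-select, so counts need not sum to 106):

```python
# Reproduce the reported survey shares from raw counts out of the
# 106 valid responses (Section 3.2), rounded to whole percentages.

TOTAL = 106

def share(count: int, total: int = TOTAL) -> int:
    """Percentage of respondents, rounded to the nearest integer."""
    return round(100 * count / total)

# Disposal choices reported above.
print(share(82))  # "store the used batteries temporarily" -> 77
print(share(15))  # "take the used batteries to a recycling center" -> 14
```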
Furthermore, there were challenges for recycling batteries. 70% of the participants did not know the regulations and laws of recycling batteries in their local area. Only 22% sought out resources and information about recycling batteries.

§ 4 INTERVIEWS

To further understand the challenges of reusing and recycling used batteries and identify opportunities to improve reusing and recycling practices, we conducted a semi-structured interview study.

§ 4.1 INTERVIEW DESIGN

The interview consisted of four parts. In part 1, we described the difference between non-rechargeable batteries and rechargeable batteries to avoid confusion about the concepts. In addition, we asked a kick-off question about recent experiences of using batteries to prepare participants for exploring the problems that they encountered in daily life. Part 2 focused on people's practice and knowledge under two typical scenarios to learn their practice and willingness of replacing and reusing batteries. Part 3 covered previous experience in recycling batteries. Besides, we asked about experiences with other recyclable objects and looked for positive and negative examples of making personal recycling activities convenient. Part 4 unveiled our idea that people can donate or receive old batteries from others to make full use of the batteries. We sought participants' opinions on this idea and discovered factors that affect their decision-making.

§ 4.2 PARTICIPANTS

We recruited 12 participants from the survey respondents, social media platforms, and word-of-mouth. 4 participants were identified as males and 7 as females; 7 participants were 18-24 years old, 4 were 25-35 years old, and 1 was 36-50 years old. 11 participants lived in the US and 1 in Canada. 5 participants had recycling experience in more than 1 country and 1 participant had related experience in 2 states in the US. Each participant was compensated with $10.

§ 4.3 PROCEDURE

The study obtained approval to conduct the interviews from the Institutional Review Board of Rochester Institute of Technology. We conducted the study with participants remotely via an online meeting platform, such as Zoom or Google Meet. Each interview session lasted about 30-40 minutes. The interview sessions were audio-recorded using the Voice Memos application, and the interview content was transcribed using Otter.ai.

§ 4.4 ANALYSIS

Two authors first performed open coding and discussed disagreements on coding to reach a consensus. They then performed affinity diagramming to derive themes emerging from the codes.

§ 4.5 FINDINGS

Our analysis revealed the rationales and challenges associated with the practices of reusing and recycling batteries as well as the potential design opportunities.

§ 4.5.1 REUSING BATTERIES

Battery usage behaviors vary depending on people's tolerance of the perceived interruptions when products run out of power. Participants tend to change all batteries at once for products that would cause high perceived disruptions to their user experience when running out of power, for example, the controller of a video game console. In contrast, they would be more willing to change only one of the batteries for products that would cause low perceived interruptions when running out of power, such as a TV remote controller.

Our survey results show that when batteries can no longer power a product, 67% of the respondents believe that the batteries are only partly used. In the interview, we investigated whether participants were willing to measure the remaining voltage in such batteries.

Only two of the 12 participants indicated that they had battery testers to measure the remaining power or voltage in batteries and decide whether to reuse them. All other participants showed little interest in knowing the leftover power in batteries and indicated that they would replace all batteries together. We found two main reasons. First, it was perceived to be time-consuming to test the old batteries and replace them one by one with new ones. Second, they did not feel the need to save batteries, particularly when they did not have many devices using single-use batteries.

§ 4.5.2 RECYCLING BATTERIES
70% of the survey respondents were not confident about their knowledge of battery recycling regulations. The interview study further explored people's willingness to learn about these regulations.

Willingness to learn about regulations: Seven of the twelve participants indicated that they would like to learn about the regulations on recycling batteries; two participants did not care much about the regulations; three would not actively seek out the regulations but would learn about them when encountering them. P10 mentioned: "I don't actively seek it on my own initiative, but if I accidentally see it, I would click in."

Participants were interested in specific content: 7 of 12 participants wanted to learn where they could discard or recycle batteries. Beyond that, they mentioned laws, specific rules and regulations, and knowledge about how batteries are processed.

8 of 12 participants were aware that they should recycle non-rechargeable batteries or that they could not discard these batteries in the regular trash. However, only 1 participant knew where to discard non-rechargeable batteries and the relevant regulations where he/she lived. Only 2 participants had experience recycling non-rechargeable batteries. 4 participants indicated that they knew where to discard non-rechargeable batteries in other countries or districts, including China, Canada, Taiwan, and Turkey.

Challenges. In the survey, only 16% of participants had experience visiting recycling locations to recycle batteries. In the interviews, we explored why participants did not go to recycling locations (willingness) and what they would consider an easy way to recycle (challenges).

All interview participants had, now or in the past, attempted to some extent to dispose of rechargeable batteries in the right place. P5 said: "I know that they should be recycled in some way but I don't know where. So I threw it in the trash." However, only P8 had the habit of recycling batteries, because his workplace had a disposal location.

P4 used to recycle batteries but found it hard to do so after she moved to a new place 6 years ago. She described her frustration: "Once we (our family) were very distressed about disposing of the battery, but we didn't do a deep search on the Internet. I feel this is a very simple thing, but so hard to find one. ... We used to live in a small town. There was a university near where we lived, and there were battery recycling places in it."

Two participants indicated that recycling batteries was inconvenient: because they did not use many batteries, it would be too much work. Two other participants discussed recycling batteries in relation to state laws and regulations. Although P11 was unfamiliar with the regulations and laws, he felt it was reasonable for residents to recycle batteries because "this is how we move the societies forward by being strict on these environmentally friendly things that aren't too difficult to do." P3 felt that the current law was not strict enough to regulate residents' behavior, so people were less likely to care about it. He mentioned: "People are not just willing to push this into law so they are less likely to care about this. ...I don't care as much because it's not a law."

§ 4.5.3 DESIGN OPPORTUNITIES
Our interviews also revealed three design opportunities.
Making recycling convenient to people. Locations where participants and their family members recycled batteries included convenience stores, universities, apartment leasing offices, electronics retail stores, and their workplaces. One common characteristic of these locations was that they were all convenient for participants to visit. In particular, when participants had to run an errand near these recycling locations, they were more willing to visit them.

Participants proposed several places to position battery recycling facilities (e.g., a bin): 1) places near where people live, for example, next to regular trash drop-off locations in a residential community; 2) locations near or in the stores that people visit regularly, for example, grocery stores, wholesale stores, or convenience stores; 3) libraries: P4 and P12 both mentioned that libraries are places that families with children and students often visit; and 4) recreational centers where people do sports and attend recreational classes.

Learning from practices of recycling other items. We asked participants about other items that they recycled in their daily lives and why they were able to recycle them. The frequently recycled items included paper and newspaper, cardboard, and plastic bottles and cans. The main reason these items were recycled often was that participants could simply put them next to their regular trash bins and wait for waste management to collect them. This finding shows again that convenience is key to recycling. Moreover, certain grocery stores gave small rewards (e.g., store credits) to encourage people to recycle plastic bottles and cans. However, P5 felt that rewards might not work for recycling batteries because the tedious process of collecting and bringing batteries to certain locations, as well as the hygienic issues associated with used batteries, outweighs the small rewards grocery stores could provide.

Making information about how and where to recycle batteries easy to access. 65% of the survey participants did not know where they should send batteries. Thus, we investigated the reasons in the interviews. Results show that participants sought out battery recycling regulations and locations primarily from their social circles, such as their parents, spouses, friends, and landlords. Surprisingly, few participants searched online (e.g., on Google) to find battery recycling regulations and locations in their local areas. "Because we cannot find it via the internet, at least we tried to google...but we couldn't find it." (P5)

Participants proposed six approaches to delivering battery recycling information that would be convenient for them to spot: 1) on the packages of the devices that use batteries, which could show information about how to recycle batteries or a QR code that can be easily scanned by a smartphone; 2) on the battery brands' websites, where an important consideration is to make sure such websites appear at the top of the search results list; 3) local governments, which could inform their residents of relevant information via text messages, emails, news reports, or bulletins; 4) landlords, housing agents, or dorm managers, who could be helpful for people who recently moved to the area and are not familiar with local battery recycling regulations and guidelines; 5) non-profit organizations, which could use public promotion activities and educational videos to show people the alarming consequences of discarding batteries without properly recycling them; and 6) waste management companies, which could also help residents recycle batteries, such as by setting up a hotline.

The preferred formats for delivering battery recycling regulations and guidelines were infographics, short videos, social media posts, and advertisements. Infographics could be displayed on a battery's packaging or on the packaging of a product that uses batteries.

§ 5 DISCUSSION
Informed by the findings of both the survey and interview studies, we present design considerations (DCs) for designers and researchers to consider when helping people better reuse and recycle batteries.
DC1: Help Users Understand the State of Used Batteries and How They Could Reuse Them. Our studies found two prominent challenges of reusing batteries: 1) users did not know whether a battery was fully drained or still had some power left; and 2) they did not know what other products they could reuse the batteries for. Reasons for these challenges included a lack of tools and knowledge for testing the power of a used battery, and no easy access to information about products that could take used batteries.

To help implement this design consideration, we propose the conceptual designs of a portable battery tester and its companion mobile app to illustrate how to lower the barrier for the general public to test used batteries and find information about how to reuse and recycle them. Figure 1 (a) and (b) show two views of the battery tester, which contains three slots for three common types of non-rechargeable batteries. The battery tester could connect with a smartphone via Bluetooth and send the test results to the companion mobile app for display. Figure 1 (c) and (d) show the UIs of the app, which present the battery test results and the recommended actions for two batteries in good and bad condition, respectively. Making all of the information needed for reusing and recycling batteries available in one app would save users the effort and time of performing ad-hoc online searches, which were reported to be ineffective by our participants and prior studies [16, 17].

Figure 1: Battery tester prototype and companion mobile app: (a) a side view of the prototype; (b) a top view of it; (c) app UI showing that the tested AAA battery is in good condition and offering a list of products it could be reused in; (d) app UI showing that the tested AA battery is ready to be disposed of or recycled and offering information about how it could be properly handled.

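To make the "recommended action" step of the concept concrete, the companion app's decision logic could be as simple as a voltage-to-action mapping. The following is a hypothetical sketch; the battery types and voltage thresholds are illustrative assumptions, not values from our studies.

```python
# Hypothetical decision logic for the companion app in Figure 1: map a measured
# cell voltage to a recommended action. The battery types and the voltage
# thresholds below are illustrative assumptions, not values from the studies.
def recommend_action(battery_type: str, voltage: float) -> str:
    if battery_type in ("AA", "AAA", "C"):   # nominal 1.5 V single-use cells
        if voltage >= 1.3:
            return "good: reuse in high-drain devices"
        if voltage >= 1.1:
            return "partially used: reuse in low-drain devices such as remotes"
        return "depleted: recycle at a drop-off location"
    return "unknown type: consult local recycling guidelines"
```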
DC2: Make recycling batteries integrated into people's routine lives. Our studies show that inconvenience was one key barrier to recycling batteries. This finding corroborates prior findings that the cost of time and effort and the lack of material rewards hinder people from recycling batteries [13, 19]. Our participants who did recycle batteries often had relatively easy access to recycling locations, such as their workplaces, nearby stores, universities, and community and homeowners' associations. This finding echoes the suggestion of previous research [19].

To help implement this design consideration, we recommend that HCI designers and researchers consider the successful practices of recycling other materials, such as cardboard, cans, and plastic bottles, and embed battery recycling in people's everyday lives [9]. For example, waste management companies could collect recyclable materials weekly along with the trash, and communities could set aside a separate space next to the regular trash for residents to drop off recyclable materials. Furthermore, some grocery stores have recycling facilities for people to recycle bottles and cans and even offer small rewards. These potential solutions align with the concept of "integrated waste management" [3] and should be considered when integrating the collection of batteries with other waste streams, to minimize not only individuals' recycling efforts but also the negative impact associated with the transportation of batteries.

DC3: Build community support to help people recycle batteries. Our studies also found that those who did manage to recycle batteries received some community support. For example, one interviewee mentioned that her previous community manager would notify residents to bring used batteries to the community office once or twice a year.

To help implement this design consideration, we propose building online community platforms for people to share information about and exchange used batteries. A successful example of a platform for exchanging used items is Craigslist [6]. There are several challenges to overcome. First, it remains unclear how to motivate people to participate in such platforms. One approach might be to gamify the process to make participation fun and rewarding. Second, unlike other used items, used batteries may pose hazards if not handled properly during the sharing process. Similar to our proposed design in Figure 1, it is worth designing simple approaches to checking batteries' conditions.

§ 6 LIMITATIONS AND FUTURE WORK
First, this short paper focused on understanding the practices and challenges of reusing and recycling batteries and deriving design considerations. Although we also offered potential solutions (e.g., Figure 1), they have yet to be fully implemented and evaluated with users. Second, individuals' practices of recycling batteries may vary [7]. In our interviews, we also noticed differences between participants who lived alone and those who lived with their families or roommates. More research is needed to investigate how battery reuse and recycling practices might be affected by social factors, such as the number of people one lives with. Lastly, our analysis highlights the importance of informing individuals of recycling locations and times in their local areas. However, our participants often had difficulty finding such information. Although many online resources (e.g., earth911 [10] and call2recycle [5]) provide such information, future research should further investigate the barriers in users' information-searching process and design tools to help users easily find such information.

§ 7 CONCLUSION
We have conducted a survey study and an interview study to understand the practices and challenges of reusing and recycling batteries. Our results revealed various barriers to reusing and recycling batteries. First, due to a lack of information, people do not know efficient ways to recycle batteries. Even when they have good intentions of adopting environmentally friendly practices, people are unwilling to invest much time and effort unless they can easily access the necessary information. Regarding reusing or making full use of batteries, people have difficulty figuring out the remaining power of batteries efficiently. As a result, many would discard batteries that still have power left, which leads to wasted energy. Our analysis also uncovered opportunities to lower the barriers to reusing and recycling batteries. Finally, we presented three design considerations and discussed potential solutions.

papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/r6Z8apiZQt/Initial_manuscript_md/Initial_manuscript.md

# BayesGaze: A Bayesian Approach to Eye-Gaze Based Target Selection
Category: Research
## Abstract
Selecting targets accurately and quickly with eye-gaze input remains an open research question. In this paper, we introduce BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in an eye-gaze trajectory as a signal for selecting a target. It then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by the sampling interval to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution. Our investigation shows that BayesGaze improves target selection accuracy and speed over a dwell-based selection method and the Center of Gravity Mapping (CM) method [4]. Our research shows that both accumulating the posterior and incorporating the prior are effective in improving the performance of eye-gaze based target selection.

Index Terms: Human-centered computing-Human computer interaction (HCI); Human-centered computing-Human computer interaction (HCI)—Interaction techniques; Human-centered computing-Human computer interaction (HCI)-HCI design and evaluation methods-User studies;
## 1 INTRODUCTION
Selecting a target with the gaze remains a central problem of eye-based interaction. Two factors make this problem challenging [18]. First, gaze input is noisy because of both inadvertent eye movements and inevitable noise in the tracking device [49]. Therefore, it is difficult for a user to move their gaze to a particular position and stabilize it for an extended period of time. Second, unlike using a mouse, where a user can confirm the selection by clicking a button, gaze-based interaction lacks an easy-to-use approach to confirm the selection, adding a layer of difficulty to the design of a selection technique [42]. Although previous research has explored target selection using dwell [15, 17], motion correlation [44] and dynamic user interfaces [26, 29, 39], quickly and accurately selecting a target with gaze input remains an open research question.

Inspired by the literature showing that Bayes' theorem is a promising principle for handling uncertainty and noise in input signals (e.g., [4, 51]), we investigate how to apply a Bayesian perspective to determining the selected target given a gaze trajectory. Applying Bayes' theorem to gaze-based target selection raises two main challenges. First, it is not clear how to obtain the likelihood function for a gaze trajectory that contains a sequence of input signals (gaze points), i.e., the probability of observing a gaze trajectory given the target. Second, unlike touch or mouse input, which has a clear definition of the terminal moment of the input, e.g., lifting the finger from the touch screen or mouse button, gaze input lacks a clear delimiter of the completion of a selection action. It is therefore necessary to design a method to determine when the selection action is completed.

To address these challenges, we introduce BayesGaze (Figure 1), a Bayesian approach for determining the selected target given a gaze trajectory. This approach first views each sampling point in a gaze trajectory as a signal for selecting a target, and then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point. The likelihood of a target being selected is based on the distance between the sampling point and the target center, and the prior probability of a target being selected is modeled by a categorical distribution and updated after a selection action. BayesGaze then accumulates the posterior probabilities over all sampling points, weighted by the sampling interval, to determine the selected target. BayesGaze advances the Center of Gravity Mapping (CM) [4] by modeling the prior and incorporating it into the process of determining the selected target. This contribution is key to improving the performance of gaze-based target selection.

Figure 1: An overview of how BayesGaze works. Given a gaze position $s_i$ sampled at time $i$ in a gaze trajectory, BayesGaze updates the accumulated interest of selecting target $t$, denoted by $I_i(t)$, by adding $P(t \mid s_i)$ weighted by the sampling interval $\Delta\tau$ to $I_{i-1}(t)$. $P(t \mid s_i)$ is the posterior probability of selecting $t$ given $s_i$, which is calculated based on Bayes' theorem. If the accumulated interest $I_i(t)$ exceeds a threshold $\theta$, the target $t$ is selected. BayesGaze then updates the prior probability $P(t)$ accordingly.

We report on a controlled experiment showing that BayesGaze improves target selection accuracy (from 82.1% to 88.3%) and speed (from 2.49 seconds per selection to 2.23 seconds) over a dwell-based selection method. BayesGaze also outperforms the CM method [4]. Overall, our investigation shows that accumulating the posterior probability and incorporating the prior are effective in improving the performance of gaze-based target selection.
## 2 RELATED WORK
BayesGaze builds on previous work on gaze input and on Bayesian approaches. Here we review related work in gaze-based target selection techniques, Bayesian approaches to gaze input, and gaze-tracking technology.
### 2.1 Gaze Based Target Selection
Gaze-based target selection is a key technique for supporting a number of gaze interaction technologies such as gaze-based text input [35], gaming [16] or smart device control [36]. Dwell-based target selection (Dwell) [15, 17, 50] is the most well-known and most widely used target selection method. It requires a user to dwell their gaze on a target for a specific uninterrupted period of time (usually several hundred milliseconds to 1 or 2 seconds) to select it. Such a highly concentrated action often results in eye fatigue [33]. Many works have been devoted to improving the Dwell technique by enabling a shorter dwell time and to finding other gaze-based target selection methods. For example, letting a user adjust the dwell time manually can lead to a shorter dwell time, from 876 ms to 282 ms [27]. Previous research [15] used Fitts' law to model gaze input and suggested selecting the target once the user's gaze fixates on it. Other works have explored adjusting the dwell time based on how likely the target is to be selected [31, 34].

In addition to dwell-based methods, researchers have proposed alternatives to improve gaze-based target selection from two perspectives: handling the noisy gaze input and designing new selection actions [18, 49]. To accommodate the inaccuracy of eye-gaze input, some works used dynamic expansion/zooming of the display [29, 39] or new UIs; e.g., Actigaze [26] used a set of confirmation buttons to make gaze target selection easier. Other works investigated error-aware gaze target selection so that the inaccuracy of target selection can be tracked and the system can provide design guidelines for UIs [3, 11]. Gaze target selection actions are also well explored. For example, motion correlation between the target movement and the gaze trajectory has been proposed to determine the selected target [44]. Actions such as blinking [7] and gaze gestures [9] have also been explored for target selection. Previous research has also used multimodal input to avoid the dwell action. For example, once the user gazes at the target, a separate device, such as a keyboard [22] or hand-held touchscreen [42], can be employed to perform the selection action.

### 2.2 Bayesian Approaches to Target Selection
There is a growing interest in applying a Bayesian perspective to handling uncertainty in target selection. Some of this research is related to gaze input. For example, previous research has proposed probabilistic frameworks to deal with uncertainty in the input process, such as handling the uncertainty of touch actions on mobile devices [6, 45] and touchscreens [51], and also handling uncertainty in gaze-based interactions [4, 32].

Our work is related to the recent work BayesianCommand, which uses Bayes' theorem to handle uncertainty in touch target selection and word-gesture input [51]. The fundamental difference between our work and BayesianCommand is that in our work, gaze input does not have well-defined starting or ending moments, whereas touch input does (i.e., landing a finger on the screen starts the input, and lifting the finger ends it). Therefore, BayesianCommand cannot be applied to gaze input directly.

Our research is also related to previous work on using a Bayesian perspective to address the gaze-to-object mapping problem, i.e., the Center of Gravity Mapping method (CM) [4]. CM is an improved version of the FM algorithm [47], which performed the best among 9 extant gaze-to-object mapping algorithms [40]. The main difference between our work and CM is that CM neither models nor updates the prior, while our approach incorporates the prior into the process of deciding the selected target, which turns out to be the primary reason why BayesGaze improves target selection accuracy and reduces selection time. Furthermore, BayesGaze is designed for the gaze target selection problem while CM is designed for the gaze-to-object mapping problem. Gaze-based target selection is a different problem from gaze-to-object mapping [4, 40] because the former requires a mechanism to commit the selection while the latter does not.

### 2.3 Gaze Tracking Technology
Gaze tracking technology is becoming increasingly mature and available. For example, a number of professional gaze trackers are available, including the Tobii 4C [23], SMI REDn [20] and Eyelink 1000 Plus [37], which cost several hundred to a few thousand dollars. Previous research has also enabled gaze tracking with off-the-shelf cameras by using a fisheye camera [2], the front-facing RGB camera of a tablet [46], or by leveraging the glint of the screen on the user's cornea [14]. Deep learning techniques have also been used to predict gaze position using convolutional neural networks [19, 48].

Unlike the above approaches, we enabled gaze tracking with an off-the-shelf and widely used iPad Pro equipped with a true depth camera and powered by Apple's ARKit.
## 3 BAYESGAZE: A BAYESIAN PERSPECTIVE ON GAZE TARGET SELECTION

### 3.1 A Formal Description of the Gaze Based Target Selection Problem

The gaze-based target selection problem can be formally described as the following research question: given a gaze trajectory, which one is the intended target among a set of candidates denoted by $T = \{t_1, t_2, \ldots, t_N\}$?

As shown in previous research [4, 40, 47], the existing algorithms for solving the gaze-based target selection problem can be described through an interest accumulation framework: each target candidate (denoted by $t$) accumulates a certain amount of "time" or "interest" from gaze input, until one of them reaches a threshold (denoted by $\theta$) for being selected. Under this framework, the widely adopted dwell-based target selection method can be expressed as follows.

Dwell-based Target Selection Method. Assuming that the gaze trajectory is denoted by $S = \{s_1, s_2, \ldots, s_K\}$, where $s_i$ is a sampling point along the gaze trajectory at time $i$, the accumulated "interest" for a target candidate $t$ at time $i$, denoted by $I_i(t)$, is calculated as:

$$
I_i(t) = \begin{cases} I_{i-1}(t) + \Delta\tau, & \text{if } s_i \text{ is within the target } t \\ 0, & \text{otherwise} \end{cases} \tag{1}
$$

where $s_i$ is the gaze position at time $i$, and $\Delta\tau$ is the sampling interval. $I_i(t)$ represents the duration during which the gaze position has stayed continuously within the target candidate $t$. If the gaze position moves outside the target, $I_i(t)$ is reset to 0. To select a target, the eye-gaze position needs to stay continuously within the target for a period of $\theta$. In other words, the selected target is the one (denoted by $t^*$) whose accumulated selection interest $I_i(t^*)$ first reaches $\theta$ (i.e., $I_i(t^*) \geq \theta$).

### 3.2 The BayesGaze Algorithm
Under the framework of "accumulating selection interest", we propose BayesGaze, a Bayesian approach to gaze-based target selection. It views each sampling point in a gaze trajectory as a signal for selecting a target, and uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point. BayesGaze then accumulates the posterior probabilities over all sampling points, weighted by the sampling interval, as the accumulated interest of selecting a target. A target candidate is selected once its accumulated interest reaches a threshold $\theta$ . Formally, given the sampling point ${s}_{i}$ , the accumulated interest of selecting a target $t$ is calculated as follows:

$$
{I}_{i}\left( t\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot P\left( {t \mid {s}_{i}}\right) . \tag{2}
$$

The posterior $P\left( {t \mid {s}_{i}}\right)$ can be estimated according to Bayes’ theorem, assuming there are $N$ target candidates:

$$
P\left( {t \mid {s}_{i}}\right) = \frac{P\left( {{s}_{i} \mid t}\right) P\left( t\right) }{P\left( {s}_{i}\right) } = \frac{P\left( {{s}_{i} \mid t}\right) P\left( t\right) }{\mathop{\sum }\limits_{{j = 1}}^{N}P\left( {{s}_{i} \mid {t}_{j}}\right) P\left( {t}_{j}\right) }, \tag{3}
$$

where $P\left( t\right)$ is the prior probability of target $t$ being the intended target before observing the current gaze input trajectory, and $P\left( {{s}_{i} \mid t}\right)$ is the probability of observing ${s}_{i}$ given that the intended target is $t$ (the likelihood).

BayesGaze has the following characteristics. First, BayesGaze resumes the accumulation of selection interest from where it left off if the gaze trajectory accidentally leaves a target but returns to it later. This addresses a problem of the dwell-based method (Equation 1): if the eye-gaze position moves outside a target, the accumulated interest for that target is reset to 0. Second, it weights the accumulated interest by the distance between the gaze point and the target center, through the likelihood function $P\left( {{s}_{i} \mid t}\right)$ . The closer a gaze point is to the target center, the more "interest" that point contributes to the target selection. Third, it updates the prior distribution of targets $\left( {P\left( t\right) }\right)$ and incorporates it into the procedure of deciding the selected target.

In the following, we introduce how to estimate the prior distribution $P\left( t\right)$ and the likelihood $P\left( {s \mid t}\right)$ , which are key to applying BayesGaze.

#### 3.2.1 Prior Probability Model
This part introduces a frequency model for estimating the prior distribution $P\left( t\right)$ based on the observable target selection history. We assume that the user does not select targets randomly and that the target selection follows some distribution, e.g., Zipf's Law. This assumption is based on the selection patterns observed in menu selection $\left\lbrack {8,{25},{51}}\right\rbrack$ , smartphone app launching [30], and command triggering [1, 10, 51], all of which are tasks that gaze target selection can support.

We model the prior distribution (i.e., the probability of a target candidate being selected prior to observing the current gaze trajectory) as a categorical distribution. More specifically, the outcome of a gaze-based selection trial is viewed as a random variable $x$ whose value is one of $N$ categories (the $N$ target candidates). The core parameter of this random variable $x$ is the parameter vector $\mathbf{p} = \left( {P\left( {t}_{1}\right) , P\left( {t}_{2}\right) ,\ldots , P\left( {t}_{N}\right) }\right)$ , which describes the probability of each category. As is common practice in Bayesian inference, we also view this parameter vector $\mathbf{p}$ as a random variable and give it a prior distribution, using the Dirichlet distribution.

According to the properties of Dirichlet distributions, after each target selection trial we can update the expected value of the posterior $\mathbf{p}$ as follows:

$$
P\left( {t}_{i}\right) = \frac{k + {c}_{i}}{k \cdot N + \mathop{\sum }\limits_{{j = 1}}^{N}{c}_{j}}, \tag{4}
$$

where $N$ is the number of candidate targets (e.g., the number of menu items), ${c}_{i}$ is the number of times target ${t}_{i}$ has been observed to be selected, and $k$ is the pseudocount of the Dirichlet prior, a hyper-parameter of the distribution. The parameter $k$ can also be viewed as the update rate: a positive constant that controls how quickly the $P\left( {t}_{i}\right)$ are updated. Note that the prior updating model (Equation 4) is the same as the model proposed by Zhu et al. [51], although these authors do not describe it under the paradigm of categorical-Dirichlet distributions. We use the expected value of $\mathbf{p}$ (Equation 4) as the prior model in BayesGaze (Equation 3).

This prior model matches our expectations well. When there is no target selection observed, the probability $P\left( {t}_{i}\right)$ is $\frac{k}{k \cdot N} = \frac{1}{N}$ , which means that all candidate targets have equal probability. Whereas when there are enough target selections observed, i.e. ${c}_{i} \gg k$ , we have $P\left( {t}_{i}\right) \approx \frac{{c}_{i}}{\mathop{\sum }\limits_{j}{c}_{j}}$ , which means that $P\left( {t}_{i}\right)$ can be estimated based on the frequency of ${t}_{i}$ having been selected before.
By setting different $k$ , we can balance $P\left( {t}_{i}\right)$ between two extreme cases: 1) when $k \rightarrow + \infty$ , we have $P\left( {t}_{i}\right) \approx \frac{1}{N}$ , that is, the prior probabilities of all candidate targets are equal; 2) when $k = 0$ , we have $P\left( {t}_{i}\right) = \frac{{c}_{i}}{\mathop{\sum }\limits_{j}{c}_{j}}$ , which means that the prior probability is based only on the historical selection frequency. We later use empirical data to determine an optimal value for $k$ .

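As a concrete illustration, Equation 4 can be computed as follows. This is a minimal sketch; the function name `dirichlet_prior` and the list-based count representation are ours, not from the paper:

```python
def dirichlet_prior(counts, k=1.0):
    """Expected posterior P(t_i) under a symmetric Dirichlet(k) prior (Equation 4).

    counts: number of observed selections c_i for each of the N targets
    k:      pseudocount / update rate of the Dirichlet prior
    """
    n = len(counts)
    total = sum(counts)
    return [(k + c) / (k * n + total) for c in counts]

# No observations yet: every target gets probability 1/N.
print(dirichlet_prior([0, 0, 0, 0, 0]))

# With k = 0, the prior is exactly the empirical selection frequency.
print(dirichlet_prior([80, 10, 5, 3, 2], k=0.0))
```

The two calls mirror the two extreme cases above: a uniform prior when nothing has been observed, and a pure frequency estimate when $k = 0$.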
#### 3.2.2 Likelihood Model
The goal of this step is to estimate $P\left( {{s}_{i} \mid t}\right)$ , the likelihood of observing ${s}_{i}$ if $t$ is the intended target. Since ${s}_{i}$ is a single gaze position, a reasonable assumption is that $P\left( {{s}_{i} \mid t}\right)$ is higher if ${s}_{i}$ is closer to the center of $t$ . We follow Bernard et al. [4] and use a Gaussian density function to describe the likelihood of observing ${s}_{i}$ , a common method for modeling likelihood for a single-point target selection:

$$
P\left( {{s}_{i} \mid t}\right) = \frac{1}{\sqrt{{2\pi }{\sigma }^{2}}}\exp \left( {-\frac{{\begin{Vmatrix}{s}_{i} - {c}_{t}\end{Vmatrix}}^{2}}{2{\sigma }^{2}}}\right) , \tag{5}
$$

where ${c}_{t}$ is the center of target $t$ , the term $\begin{Vmatrix}{{s}_{i} - {c}_{t}}\end{Vmatrix}$ is the ${L}^{2}$ (Euclidean) norm of the vector ${s}_{i} - {c}_{t}$ , and $\sigma$ is an empirical parameter defining how concentrated the gaze points are expected to be. The parameter $\sigma$ controls how much interest can be accumulated at a given distance. If $\sigma$ is too small, a target accumulates high interest only when the gaze point is close to the target center, which could make the target hard to select. On the other hand, if $\sigma$ is too large, the accumulated interest for neighboring targets could become large and cause mis-selections. We estimate an optimal $\sigma$ from real data in the next section.

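Equation 5 can be sketched as below (a minimal sketch with a hypothetical function name; positions and $\sigma$ are in cm). Note that the constant factor $1/\sqrt{2\pi\sigma^{2}}$ is shared by all targets and therefore cancels in the posterior of Equation 3:

```python
import math

def gaussian_likelihood(s, c_t, sigma=0.28):
    """Likelihood P(s | t) of gaze point s for a target centered at c_t (Equation 5).

    s, c_t: (x, y) positions in cm; sigma: spread parameter in cm.
    """
    d2 = (s[0] - c_t[0]) ** 2 + (s[1] - c_t[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# The likelihood decays with distance from the target center.
at_center = gaussian_likelihood((0.0, 0.0), (0.0, 0.0))
off_center = gaussian_likelihood((0.0, 0.5), (0.0, 0.0))
```

A gaze point on the center contributes the most interest; half a centimeter away (roughly two $\sigma$) the contribution is already much smaller.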
After obtaining both the prior probability and the likelihood, we can use BayesGaze to perform target selection. The BayesGaze algorithm is summarized in Algorithm 1. Note that the algorithm can be run online, i.e. when a gaze point ${s}_{i}$ is sampled by the gaze tracker, the top-level for-loop can be executed to check if a target is selected.
Algorithm 1 BayesGaze Algorithm

---

Input: Target set: $T = \left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{N}}\right\}$ , Gaze trajectory: $S = \left\{ {{s}_{1},{s}_{2},\ldots ,{s}_{K}}\right\}$ , Threshold: $\theta$
Output: Selected target $t$ , Selection time: ${\tau }_{\text{sel}}$
for ${s}_{i}$ in $S$ do
  for ${t}_{j}$ in $T$ do
    Obtain prior probability $P\left( {t}_{j}\right)$ and compute likelihood $P\left( {{s}_{i} \mid {t}_{j}}\right)$ using Equation 5;
    Compute accumulated interest ${I}_{i}\left( {t}_{j}\right)$ from Equation 2;
    if ${I}_{i}\left( {t}_{j}\right) \geq \theta$ then
      Update prior probability $P\left( {t}_{m}\right)$ for each ${t}_{m} \in T$ given that ${t}_{j}$ is selected, using Equation 4;
      return ${t}_{j}, i \cdot {\Delta \tau }$
    end if
  end for
end for

---
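Putting the pieces together, the online loop of Algorithm 1 can be sketched in Python. This is a minimal sketch under our own naming and a 2D point representation; the shared Gaussian constant of Equation 5 is dropped because it cancels in the posterior:

```python
import math

def bayes_gaze(trajectory, centers, counts, dt=1 / 60, sigma=0.28, k=1.0, theta=0.9):
    """Online BayesGaze selection loop (a sketch of Algorithm 1).

    trajectory: gaze samples, one (x, y) point per sampling interval dt
    centers:    target centers [(x, y), ...]
    counts:     per-target selection history (mutated when a selection commits)
    Returns (selected index, selection time in seconds), or (None, None).
    """
    n = len(centers)
    total = sum(counts)
    prior = [(k + c) / (k * n + total) for c in counts]          # Equation 4
    interest = [0.0] * n
    for i, s in enumerate(trajectory, start=1):
        # Unnormalized Gaussian likelihoods (Equation 5); the shared
        # 1/sqrt(2*pi*sigma^2) factor cancels in the posterior below.
        lik = [math.exp(-((s[0] - cx) ** 2 + (s[1] - cy) ** 2) / (2 * sigma ** 2))
               for cx, cy in centers]
        z = sum(l * p for l, p in zip(lik, prior))
        for j in range(n):
            interest[j] += dt * lik[j] * prior[j] / z            # Equations 2 and 3
            if interest[j] >= theta:
                counts[j] += 1      # update the selection history for Equation 4
                return j, i * dt
    return None, None
```

For example, with five targets stacked 2 cm apart and a gaze trajectory that stays on the middle target, the middle target accumulates essentially all the interest and is selected after roughly $\theta$ seconds of input.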
#### 3.2.3 BayesGaze without Prior
If we consider the prior to be a uniform distribution before every trial (i.e., $\forall {t}_{i} \in T, P\left( {t}_{i}\right) = 1/N$ ), BayesGaze becomes identical to the Center of Gravity Mapping (CM) algorithm [4] (referred to as the CM method hereafter), a previously proposed method for deciding a target in a gaze-to-object mapping task. Under this special condition, the accumulated interest of the CM method can be calculated by Equation 2 with the prior $P\left( t\right) = 1/N$ , that is:

$$
{I}_{i}\left( t\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot P\left( {t \mid {s}_{i}}\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot \frac{P\left( {{s}_{i} \mid t}\right) }{\mathop{\sum }\limits_{{j = 1}}^{N}P\left( {{s}_{i} \mid {t}_{j}}\right) }, \tag{6}
$$

where $P\left( {{s}_{i} \mid t}\right)$ is calculated by Equation 5. Therefore, we view BayesGaze as an improvement over the CM method that updates and incorporates the prior in the target selection process. The CM method is also very similar to the previously proposed Fractional Mapping method $\left\lbrack {{40},{47}}\right\rbrack$ . We later compare BayesGaze with the CM method to examine to what degree incorporating the prior can improve gaze target selection performance.
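Under a uniform prior the prior terms cancel in Bayes' rule, so the posterior of Equation 3 reduces to the normalized likelihoods of Equation 6. This can be checked numerically with hypothetical likelihood values:

```python
# Hypothetical likelihoods P(s_i | t_j) for three targets (not from the paper).
lik = [0.5, 0.3, 0.2]
prior = [1 / 3] * 3                    # uniform prior P(t) = 1/N

z = sum(l * p for l, p in zip(lik, prior))
posterior = [l * p / z for l, p in zip(lik, prior)]        # Equation 3
normalized = [l / sum(lik) for l in lik]                   # Equation 6

# The two are identical: the uniform prior cancels.
assert all(abs(a - b) < 1e-12 for a, b in zip(posterior, normalized))
```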
In order to successfully apply the BayesGaze algorithm, we need to obtain the values of three parameters, denoted as a 3-tuple $\left\lbrack {k,\sigma ,\theta }\right\rbrack$ , where $k$ is part of the prior probability model (Equation 4), $\sigma$ is part of the likelihood model (Equation 5), and $\theta$ is the threshold of the accumulated interest for committing a selection. We carried out a study to collect gaze data for target selection and determined the optimal parameter values from that data.

## 4 Parameter Determination
We adopted a data-driven simulation approach to search for the optimal parameter values for the BayesGaze algorithm. The procedure consists of two phases. In Phase 1, we carried out a Wizard-of-Oz study to collect gaze input data for selecting a target. In Phase 2, we fed the collected data to the BayesGaze algorithm to search for the optimal parameter values. We also searched for the optimal parameter for the Dwell method (Equation 1) and for the CM method (Equation 6).
### 4.1 Phase 1: Collecting Gaze Input Data via a Wizard-of-Oz Study

We first carried out a Wizard-of-Oz study to collect gaze input data for selecting a target. We focused on a 1-dimensional target selection task, where the target is a horizontal bar and gaze motion is vertical. We picked this task because 1-dimensional pointing is a typical target selection task, and horizontal bars are widely used UI elements on mobile computing devices such as smartphones and tablets.
#### 4.1.1 Participants
Twelve users (4 female) between 23 and 31 years old (average ${27.25} \pm {2.22}$ ) participated in the experiment. All of them had normal or corrected-to-normal sight and none of them was color blind. None had prior experience with gaze tracking devices or applications.

#### 4.1.2 Apparatus
We used the 11-inch iPad Pro for gaze tracking and to run the experiment because it is widely and conveniently accessible. The gaze tracking was implemented based on the iPad's true-depth camera and Apple's ARKit library, and the sampling rate was 60 Hz. Specifically, we used the leftEyeTransform and rightEyeTransform provided by the ARKit library and performed a hitTestWithSegment call to obtain the raw gaze position. Following the recommendation of [11], we used the Outlier Correction filter with a triangle kernel [21] to obtain smooth gaze tracking. The filter contains a saccade/fixation detection module, so it can apply sliding windows of different lengths separately for saccades and fixations. The thresholds on the $x$ and $y$ axes for detecting a saccade were both set to ${0.5}^{ \circ }$ (calculated based on the estimated face-screen distance). For fixations, the sliding window size of the filter was set to 40, as suggested by [11]. For saccades, the sliding window size of the filter was set to 10, rather than using the raw position directly, to increase gaze tracking stability. We also followed the findings of previous works $\left\lbrack {{24},{38},{41}}\right\rbrack$ that allowing head movements improves target selection performance. To test gaze accuracy, we used a gazing task in which the user gazed at 40 different points on the screen with a cursor showing where the user was looking. The result showed a mean error of ${0.67}^{ \circ }$ with a standard deviation of 0.85, indicating that users could control their gaze accurately enough to select targets.

#### 4.1.3 Procedure
During the experiment the participant sat in front of a desk on which an iPad Pro running the experiment was placed on a phone holder. The participant could freely adjust the iPad position, and was instructed to keep the distance between their eyes and the iPad at around 40 cm.

The study included multiple target selection trials. In each trial, a blue horizontal bar was displayed on the screen as the target and the participant was instructed to select it via gaze input. Fig. 2 shows the setup. Before each trial, the participant first moved the gaze-controlled cursor into the starting gray bar. After 3 seconds, the starting bar turned green, signaling the start of the trial. The participant was then instructed to move the cursor with their gaze to select a target of width $W$ at a distance $D$ from the starting bar. We collected gaze input data for 5 seconds after a trial started, assuming that 5 seconds was long enough for the participants to select a target. Each participant took a break after every 15 trials. In total the experiment lasted around 15 minutes per participant.

We adopted a within-participant $3 \times 4 \times 2$ design with three levels of target width $W$ : 2 cm ( ${2.86}^{ \circ }$ , calculated based on a participant-screen distance of 40 cm), 3 cm ( ${4.29}^{ \circ }$ ), and 4 cm ( ${5.76}^{ \circ }$ ); four levels of distance $D$ : 6 cm ( ${8.53}^{ \circ }$ ), 8 cm ( ${11.31}^{ \circ }$ ), 10 cm ( ${14.04}^{ \circ }$ ), and 12 cm ( ${16.70}^{ \circ }$ ); and two levels of gaze motion direction: up or down from the starting bar. We counterbalanced the factors by randomizing the trials in the experiment.


Figure 2: A screenshot of the Wizard-of-Oz study. The green button is the starting bar, and the target is shown as a blue bar. There is a red cursor indicating where the participant is looking.
In total, the study resulted in 12 participants $\times$ 3 target sizes $\times$ 4 distances $\times$ 2 directions $\times$ 2 repetitions = 576 trials.

### 4.2 Phase 2: Determining Parameters from the Collected Data

We created a set of gaze-based target selection tasks, simulated gaze input based on the data collected in Phase 1, and searched for the parameter values of BayesGaze, CM, and Dwell that led to high input accuracy and fast input speed.

#### 4.2.1 Simulating Eye-Gaze Target Selection Tasks
We first created a set of target selection tasks in which a user is supposed to control their gaze to select a target among $N$ candidates. These $N$ candidates are stacked together with no gap between them to simulate the common vertical list or vertical menu design of mobile devices (e.g., settings menus in iOS). We included the same 3 target sizes in the simulation as in the data collection study (2, 3, and 4 cm) and set $N = 5$ . The gaze trajectories for selecting a target were obtained from the collected data, according to the target sizes. Fig. 3 shows examples of simulated gaze trajectories for selecting different targets on the screen.

Since previous research has shown that the distribution of menu items being selected follows Zipf’s distribution $\left\lbrack {1,8,{10},{25},{30},{51}}\right\rbrack$ , we assumed that the frequency of each candidate being the target follows Zipf's Law:

$$
f\left( {l;\alpha , N}\right) = \frac{1/{l}^{\alpha }}{\mathop{\sum }\limits_{{n = 1}}^{N}\left( {1/{n}^{\alpha }}\right) }, \tag{7}
$$

where $N$ is the number of candidate targets (in the simulation, $N = 5$ ), $l \in \{ 1,2,\ldots , N\}$ is the rank of each target, and $\alpha$ is the value of the exponent characterizing the distribution. We include 4 values of $\alpha$ (0.5, 1, 2, and 3) in the simulation.

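Equation 7 can be computed as follows (a minimal sketch; the function name is ours):

```python
def zipf_pmf(N, alpha):
    """Frequency of the rank-l target under Zipf's Law (Equation 7)."""
    weights = [1.0 / l ** alpha for l in range(1, N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# With N = 5 and alpha = 1, the rank-1 target receives ~44% of the selections.
print(zipf_pmf(5, 1))
```

Larger $\alpha$ values concentrate more selections on the highest-ranked targets, which is how the skewness of the target distribution is varied in the simulation.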

Figure 3: An example of using the same gaze trajectory to simulate selecting a target (the blue one) at different indices among the five horizontal bars. The red line in each panel shows the same gaze trajectory collected in the Wizard-of-Oz experiment, and the red dot indicates the start of the trajectory. A simulated user is selecting the 2nd (a), the 3rd (b), and the 4th (c) target among 5 target candidates, with the same gaze trajectory.

For each target size, we had 192 collected trajectories. Among the $N$ candidates, we randomly assigned the frequencies. For example, when $N = 5$ and $\alpha = 1$ , the generated frequencies could be [28, 84, 21, 42, 17], which means that the first target among the 5 candidates would be selected 28 times, the second 84 times, etc. We randomly selected trajectories (without repetition) to simulate selecting targets at different indices given the generated frequencies.

#### 4.2.2 Searching for the Parameter Values
Given a particular parameter tuple $\left\lbrack {k,\sigma ,\theta }\right\rbrack$ , we ran the BayesGaze algorithm to determine the selected target in the simulated target selection tasks. We viewed the search for the optimal parameter values as an optimization problem: determining the parameter values that optimize target selection performance, measured in terms of success rate and selection time.

We performed a grid search for the optimal values of $k$ , $\sigma$ , and $\theta$ . In the grid search, $k$ ranged from 0.5 to 5 in steps of 0.5, $\sigma$ ranged from 0.14 cm ( ${0.2}^{ \circ }$ ) to 1.4 cm ( ${2}^{ \circ }$ ) in steps of 0.14 cm, and $\theta$ ranged from 0.2 seconds to 2 seconds in steps of 0.1 seconds. The simulation results showed that different values of $k$ do not influence performance. We chose ${k}^{ * } = 1$ , as in [51]. When $k = 1$ , the Dirichlet prior of the categorical distribution, before observing any selection results, is a uniform distribution, i.e., an equally distributed prior. The best values of $\sigma$ for BayesGaze ranged from 0.28 cm to 0.56 cm; we chose ${\sigma }^{ * } = {0.28}$ cm to reduce the chance of mis-selections.

Because we want to improve two objectives, success rate and selection time, we adopted a Pareto optimization process to find the optimal $\theta$ . The process generates a set of parameter values, called the Pareto-optimal set or Pareto front. Each parameter setting in the set is Pareto-optimal, meaning that neither of the two metrics (success rate or selection time) can be improved without hurting the other. We plot the Pareto front of BayesGaze in Fig. 4a. We followed the exact same optimization process to search for the optimal parameter values for the CM and Dwell methods, and generated the corresponding Pareto fronts in Fig. 4b and 4c. For the CM method, the parameters form a 2-tuple $\left\lbrack {\sigma ,\theta }\right\rbrack$ , as it does not incorporate the prior into the accumulated interest. For the Dwell method, the parameter is $\theta$ , the threshold for deciding whether a target is selected based on the accumulated selection interest.

To balance the two objectives, we assigned equal weights to success rate and selection time. We first normalized the success rate and selection time to the range $\left\lbrack {0,1}\right\rbrack$ . We then picked the parameter value ${\theta }^{ * }$ that leads to the best overall score $S$ , defined as:

$$
S = {0.5} \times \text{SuccessRate} - {0.5} \times \text{SelectionTime,} \tag{8}
$$

where SuccessRate and SelectionTime are the normalized values between 0 and 1, according to the highest and lowest values displayed in Fig. 4. The coefficient of SelectionTime is $-{0.5}$ because the lower the selection time, the higher the selection performance. The optimal parameters are the same across the different $\alpha$ values and are summarized in Table 1.

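The selection of ${\theta }^{ * }$ can be sketched as follows. The Pareto-front points below are hypothetical placeholders, not the values from Fig. 4; the sketch only illustrates the min-max normalization and the scoring of Equation 8:

```python
def balanced_score(success_rates, times):
    """Score each Pareto-front point per Equation 8, after min-max
    normalizing both metrics to [0, 1]."""
    lo_s, hi_s = min(success_rates), max(success_rates)
    lo_t, hi_t = min(times), max(times)
    return [0.5 * (s - lo_s) / (hi_s - lo_s) - 0.5 * (t - lo_t) / (hi_t - lo_t)
            for s, t in zip(success_rates, times)]

# Hypothetical Pareto-front points: (success rate, selection time in seconds).
scores = balanced_score([0.80, 0.85, 0.88], [1.0, 1.2, 1.6])
best = scores.index(max(scores))   # the middle point best balances the metrics
```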
<table><tr><td>Target Selection Method</td><td>${k}^{ * }$</td><td>${\sigma }^{ * }$</td><td>${\theta }^{ * }$</td></tr><tr><td>BayesGaze</td><td>1</td><td>0.28 cm</td><td>0.9</td></tr><tr><td>CM</td><td>-</td><td>0.28 cm</td><td>0.9</td></tr><tr><td>Dwell</td><td>-</td><td>-</td><td>0.8</td></tr></table>
Table 1: Optimal parameters (same for different $\alpha$ in Zipf’s Law) selected on the Pareto front for three target selection methods
## 5 A Target Selection Experiment

To empirically evaluate BayesGaze, we conducted a 1D gaze-based target selection study using the parameters obtained from the simulations. We included CM and Dwell as baselines because (1) Dwell is a widely adopted target selection method and CM is one of the best-performing algorithms in the literature, and (2) CM can be viewed as BayesGaze without the prior. Including these two methods allowed us to evaluate whether BayesGaze improves performance over existing algorithms, and to understand how the two components of BayesGaze (the likelihood function and the prior) contribute to the improvement in target selection performance.

### 5.1 Participants and Apparatus
Eighteen adults (5 female) between 24 and 31 years old (average ${27.2} \pm {2.1}$ ) participated in the study. All of them had normal or corrected-to-normal sight and none reported being color blind.

The apparatus was the same as that used in the Wizard-of-Oz study (Section 4.1.2), so was the eye-gaze tracking technology: we used an iPad Pro with true-depth camera; the eye-gaze tracking technology was implemented with the ARKit library, as previously described.

Figure 4: The Pareto front of different parameter combinations for 3 target selection methods under $\alpha = 1$ in Zipf’s Law. The enlarged dots represent the selected parameter settings for three methods, respectively. These settings have the most balanced performance according to Equation 8.
### 5.2 Design
We adopted a $3 \times 2 \times 2$ within-participant design. The three independent variables were: (1) the target selection method with 3 levels (BayesGaze, CM, Dwell), (2) the target size with 2 levels (1 cm or ${1.43}^{ \circ }$ , and 2 cm or ${2.86}^{ \circ }$ ), and (3) the $\alpha$ value of the Zipf's distribution with 2 levels ( $\alpha = 1$ and $\alpha = 2$ ). The Zipf's distribution controls the distribution of the intended targets among the candidates.

For each selection method $\times$ target size $\times$ Zipf's law $\alpha$ combination, each participant performed 24 trials. When $\alpha = 1$ , the frequencies of the 5 target candidates being the intended targets were 11, 5, 4, 3, and 1; when $\alpha = 2$ , these frequencies were 16, 4, 2, 1, and 1. We included two $\alpha$ values to evaluate whether the skewness of the target distribution affects selection performance. Within a set of 24 trials, the distance between the target and the starting bar was either 4 cm or 5 cm with 50% probability for each distance, and the target was either above or below the starting bar, also with 50% probability for each option.


Figure 5: The controlled 1D gaze target selection experiment
### 5.3 Procedure
In each trial, the participant was instructed to select one of five adjacent horizontal bars displayed on the iPad screen via eye-gaze. The tracked gaze position was rendered as a cross-hair cursor on the display, as shown in Fig. 5. The target to be selected was shown in blue and the other bars in cyan. A starting bar was also displayed, which served as the starting position for the gaze input. Prior to starting a trial, the participant was asked to move the cursor into the starting bar, which was initially displayed in gray. The bar turned green after three seconds, signaling the start of a trial. The participant then moved the cursor to select the target bar on the screen; the selected target then turned dark. If the participant selected the wrong target, or did not select any target within 5 seconds of the beginning of the trial, the trial was considered a miss. The participant moved on to the next trial regardless of the outcome. To alleviate eye fatigue, the participant was allowed to take a break of no longer than 2 minutes every 15 trials. Fig. 5 shows a screenshot of the experiment and a participant performing a trial.

After each trial, BayesGaze updated the prior probability for each target candidate. We assumed that each condition corresponds to a particular interface, and when the experimental condition changes (e.g., target size, or $\alpha$ value in Zipf’s distribution), we reset all the prior information.
The participants were instructed to select the target as accurately and quickly as possible. At the end of the study, participants rated their preference for each of the three methods on a scale of 1 to 5 (1: dislike, 5: like very much). They also answered a subset of NASA-TLX [12] questions, including mental and physical demand, to measure the workload of the gaze target selection task. The workload was rated from 1 to 10, from least to most demanding. The experiment lasted about 50 minutes.

To counterbalance the independent variables, the methods were fully balanced across all 6 possible orders. For half of the participants, $\alpha$ was set to 1 for the first half of the trials and to 2 for the other half; for the other half of the participants, the order was reversed. Other factors were randomized. In total, we collected 18 users $\times$ 3 methods $\times$ 2 target sizes $\times$ 2 $\alpha$ values $\times$ 24 trials = 5184 trials.

### 5.4 Results
We evaluate the performance of BayesGaze, CM, and Dwell in terms of success rate and selection time.

#### 5.4.1 Success Rate
The success rate measures the ratio of correct selections over the total number of trials. The results (Fig. 6a) show that: 1) BayesGaze always has the highest success rate and Dwell the lowest, which confirms the effectiveness of the Bayesian approach and the benefit of using the prior; 2) large targets (2 cm) have a higher success rate than small targets (1 cm), because it is much easier to move one's gaze into a large target.

<table><tr><td rowspan="2">Target Selection Method</td><td colspan="5">Frequencies when $\alpha = 1$</td><td colspan="5">Frequencies when $\alpha = 2$</td></tr><tr><td>11</td><td>5</td><td>4</td><td>3</td><td>1</td><td>16</td><td>4</td><td>2</td><td>1</td><td>1</td></tr><tr><td>BayesGaze</td><td>88.1</td><td>86.1</td><td>88.9</td><td>84.3</td><td>77.8</td><td>90.6</td><td>87.5</td><td>90.3</td><td>88.9</td><td>83.3</td></tr><tr><td>CM</td><td>85.6</td><td>82.8</td><td>84.0</td><td>86.1</td><td>86.1</td><td>85.9</td><td>87.5</td><td>93.1</td><td>88.9</td><td>88.9</td></tr><tr><td>Dwell</td><td>83.1</td><td>85.6</td><td>75.7</td><td>84.3</td><td>88.9</td><td>79.9</td><td>85.4</td><td>83.3</td><td>86.1</td><td>83.3</td></tr></table>
Table 2: The success rate (%) for different target selection frequencies (the lowest success rate is marked in bold)
A repeated measures ANOVA on success rate shows two significant main effects: target selection method ( ${F}_{2,34} = {11.45}, p < {0.001}$ ) and target size ( ${F}_{1,17} = {30.76}, p < {0.001}$ ). The test does not show a significant main effect of Zipf's Law's $\alpha$ ( ${F}_{1,17} = {1.722}, p = {0.207}$ ). There is no significant interaction effect. Pairwise comparisons with Holm adjustment [13] on the success rate show significant differences between BayesGaze vs. Dwell ( $p < {0.01}$ ), CM vs. Dwell ( $p < {0.05}$ ), and BayesGaze vs. CM ( $p < {0.05}$ ).

The overall mean $\pm$ 95% confidence interval (CI) of success rate across all target sizes and $\alpha$ values is ${88.3}\% \pm {3.6}\%$ for BayesGaze, ${85.9}\% \pm {4.3}\%$ for CM, and ${82.1}\% \pm {5.2}\%$ for Dwell. Overall, BayesGaze improves the success rate by 6.2% over Dwell and by 2.4% over CM.


(b) Decomposition of the error rate for target size $\times$ Zipf’s Law’s $\alpha$
Figure 6: The average success rate with 95% CI and the decomposition of the error rate (Mis-Selection (MS) and Non-Selection (NS))
In addition to the success rate, we also look into the error rate, which measures the ratio of the cases where the right target is not selected. There are two types of errors: (1) Mis-Selection (MS), where a wrong target is selected, and (2) Non-Selection (NS), where no target is selected. We examine the rates of these two error types separately. Fig. 6b shows the decomposition of the error rate. The major part of the error rate of BayesGaze and CM comes from mis-selection, and the same holds for Dwell when the target size is 2 cm. However, when the target size is 1 cm, Dwell suffers from not selecting any target. This result implies that using a Bayesian framework can alleviate the problem of not being able to select a target.
With BayesGaze, a potential side effect of incorporating the prior might be that less frequent targets are more difficult to select. Table 2 shows the success rates by target frequency. Although the success rates for items with a frequency of 1 are lower than for the high-frequency items, they are still near 80%. A repeated measures ANOVA does not show significant main effects of frequency on success rate for BayesGaze ($F_{9,153} = 0.776$, $p = 0.639$), CM ($F_{9,153} = 1.248$, $p = 0.27$), or Dwell ($F_{9,153} = 0.669$, $p = 0.736$), indicating that this potential side effect is minor.
#### 5.4.2 Selection Time
Fig. 7 shows the results for selection time, which measures the time to select the target from the start of the trial. As with the success rate, we observe that: 1) BayesGaze has the shortest selection time, and Dwell has the longest; 2) small targets (1 cm) take longer to select than large ones (2 cm), especially for Dwell.
Figure 7: The average selection time (with 95% CI) by target size $\times$ Zipf's Law's $\alpha$
A repeated measures ANOVA on selection time shows two significant main effects: target selection method ($F_{2,34} = 21.19$, $p < 0.001$) and target size ($F_{1,17} = 116.9$, $p < 0.001$). The test does not show a significant main effect of Zipf's Law's $\alpha$ ($F_{1,17} = 1.685$, $p = 0.212$). The only significant interaction effect is target size $\times$ target selection method ($F_{2,34} = 31.81$, $p < 0.001$). Pairwise comparisons with Holm adjustment on selection time show significant differences for BayesGaze vs. Dwell ($p < 0.001$) and CM vs. Dwell ($p < 0.01$). The pairwise comparisons do not show a significant difference for BayesGaze vs. CM ($p = 0.09$).
The overall mean $\pm 95\%$ CI selection time among all target sizes and $\alpha$ is $2.23 \pm 0.15$ seconds for BayesGaze, $2.30 \pm 0.15$ seconds for CM, and $2.49 \pm 0.18$ seconds for Dwell. In total, BayesGaze saves 10.4% of selection time over Dwell, and 3% over CM.
#### 5.4.3 Subjective Feedback
The results of the subjective feedback are shown in Fig. 8. For overall preference, the median ratings for BayesGaze, CM and Dwell are 4, 3.5 and 3, respectively. BayesGaze has the highest median rating. For mental and physical demand, the medians are 6.5 and 5.5 for BayesGaze, 6 and 6 for CM, and 7.5 and 7.5 for Dwell. Nonparametric Friedman tests do not show significant main effects of selection method on the three metrics: overall preference ($\chi_r^2(2) = 1.11$, $p = 0.57$), physical demand ($\chi_r^2(2) = 2.93$, $p = 0.085$), and mental demand ($\chi_r^2(2) = 5.24$, $p = 0.073$). The $p$ values for physical and mental demand are approaching statistical significance.
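The Friedman statistic $\chi_r^2$ reported for these ratings can be computed without a stats package. A minimal sketch (averaging ranks over ties, which matters for Likert-style data):

```python
def friedman_statistic(ratings):
    """Friedman's chi-square for k matched conditions over n subjects.

    `ratings` is a list of per-subject lists, one rating per condition.
    Tied values receive the average of the ranks they span.
    """
    n, k = len(ratings), len(ratings[0])
    rank_sums = [0.0] * k
    for row in ratings:
        ordered = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[ordered[j + 1]] == row[ordered[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for a tie group
            for m in range(i, j + 1):
                ranks[ordered[m]] = avg
            i = j + 1
        for c in range(k):
            rank_sums[c] += ranks[c]
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
```

The statistic is then compared against a chi-square distribution with $k - 1$ degrees of freedom; the values above would come from the 18 participants' ratings of the three methods.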
Figure 8: The median of subjective ratings of overall preference, mental demand and physical demand. For overall preference, higher ratings are better. For mental and physical demand, lower ratings are better.
### 5.5 Discussion
Performance. The experiment results show that BayesGaze outperformed both the Dwell and CM methods in both selection accuracy and speed. BayesGaze improved the success rate of Dwell from 82.1% to 88.3%, i.e., a 6.2% increase, and reduced selection time from 2.49 seconds to 2.23 seconds, i.e., a 10.4% reduction. BayesGaze also improved the success rate of CM by 2.4%, and reduced the selection time by 3%. Pairwise comparisons with Holm adjustment showed all these differences to be significant ($p < 0.05$), except for selection time between BayesGaze vs. CM ($p = 0.09$).
The promising performance of BayesGaze first shows that incorporating the prior significantly improves target selection performance. Compared with CM, which can be viewed as BayesGaze without the prior, BayesGaze performed better in both accuracy and speed across all conditions. This suggests that incorporating the prior distribution of targets is effective in improving the performance of gaze-based target selection tasks. Second, both BayesGaze and CM outperformed Dwell, indicating that accumulating the interest, which is represented by the posterior in BayesGaze and by the likelihood in CM, is also effective for gaze-based target selection.
Prior. Incorporating the prior might make less frequent targets more difficult to select, even though we did not observe it in our experiment, as shown in Table 2. There are several ways to prevent this potential problem: (1) Set a lower bound for the target frequency so that no target will become hard to select. (2) In real-world applications, leverage user actions to address the problem. For example, if the previous selection is incorrect (back/cancel action is performed immediately), reduce the probability of the incorrect target. (3) Similar to what we do in this paper, use a small $\sigma$ for the likelihood model in order to decrease interference between neighboring targets.
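The first mitigation can be sketched as a floor on the prior probabilities. The class below is a minimal illustration, not the paper's implementation; `alpha0` (a symmetric Dirichlet pseudo-count) and `floor` are assumed hyperparameters.

```python
class TargetPrior:
    """Categorical prior over targets, updated after each selection.

    Each target starts with a Dirichlet pseudo-count `alpha0`; `floor`
    is a lower bound on any target's probability, so rarely selected
    targets never become too hard to select.
    """

    def __init__(self, n_targets, alpha0=1.0, floor=0.02):
        self.counts = [alpha0] * n_targets
        self.floor = floor

    def update(self, selected):
        # Feed the selection result back into the prior
        self.counts[selected] += 1

    def probs(self):
        total = sum(self.counts)
        p = [max(c / total, self.floor) for c in self.counts]
        s = sum(p)  # renormalize after applying the floor
        return [x / s for x in p]
```

The floor slightly deflates frequent targets after renormalization, trading a little of the prior's benefit for a guaranteed minimum selectability of rare targets.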
Midas-Touch Problem. Our method can also work with existing approaches to solving the Midas-Touch problem in gaze target selection. For example: (1) We can use methods like [5, 43] to infer whether a user is reading content on the UI or controlling their gaze to select a target. These methods classify gaze positions into a content-reading phase and a target-selection phase. BayesGaze can discard the gaze positions in the content-reading phase, and use only the gaze positions in the target-selection phase to decide the target. (2) We can increase the threshold of accumulated posterior required for selection. Reading content on a UI tends to take a shorter period of time than controlling gaze to select a target, so increasing the threshold could prevent falsely activating the selection; the actual threshold should be set based on the specific scenario. This approach is also adopted by dwell-based methods (e.g., [28]) to mitigate the Midas-Touch problem.
Target Dimension. This paper considers 1D targets to show that Bayes' theorem can be adopted to improve the performance of gaze-based target selection. In real applications, there are many linear menus on computers and smartphones where our method can be directly applied. However, the underlying principle (Eq. 2 - 5) is not tied to a specific type of target and works for both 1D and 2D targets. The main difference between 1D and 2D target selection lies in the likelihood function (Eq. 5). For 1D targets, we adopted a 1D Gaussian; for 2D targets it should be replaced by a 2D Gaussian distribution. The rest of the method, including updating priors, accumulating weighted posterior, and using Pareto optimization to balance accuracy and selection time will remain the same.
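As a concrete illustration of the one change needed for 2D targets, the sketch below swaps the 1D Gaussian likelihood for an isotropic 2D one. The shared `sigma` is an assumption; the paper's Eq. 5 may parameterize the distribution differently.

```python
import math

def likelihood_1d(x, center, sigma):
    """1D Gaussian likelihood of a gaze sample given a target center."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2)) / (
        math.sqrt(2 * math.pi) * sigma)

def likelihood_2d(gaze, center, sigma):
    """Isotropic 2D Gaussian: independent x and y terms multiplied."""
    return (likelihood_1d(gaze[0], center[0], sigma)
            * likelihood_1d(gaze[1], center[1], sigma))
```

Everything downstream (prior update, weighted posterior accumulation, Pareto optimization) is unchanged, since the posterior only needs the likelihood values themselves.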
Scalability. BayesGaze uses a gaze position buffer to store the gaze trajectory and empties it after each selection action. Our study (Fig. 7) shows that most selections happen within 3 seconds, so only a small amount of memory is needed to store gaze data. In real-world applications, we may use a rolling window with a size of 3 seconds to store gaze positions. BayesGaze can then scale up and handle long gaze-based input.
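A minimal sketch of such a rolling window (the 3-second size follows the text; the sample format is an assumption):

```python
from collections import deque

class GazeBuffer:
    """Rolling window of timestamped gaze samples.

    Samples older than `window` seconds are evicted on every insertion,
    so memory stays bounded no matter how long the gaze input runs.
    """

    def __init__(self, window=3.0):
        self.window = window
        self.samples = deque()  # (timestamp, x, y) tuples

    def add(self, t, x, y):
        self.samples.append((t, x, y))
        while self.samples and t - self.samples[0][0] > self.window:
            self.samples.popleft()

    def clear(self):
        # Emptied after each selection action
        self.samples.clear()
```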
## 6 CONCLUSION
In this paper, we introduced BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in a gaze trajectory as a signal for selecting a target, uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by the sampling interval over all sampling points to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution with a Dirichlet prior. Our controlled experiment showed that, compared with the widely adopted dwell-based selection method, BayesGaze improves target selection accuracy from 82.1% to 88.3% and reduces selection time from 2.49 seconds per selection to 2.23 seconds. It also improves selection accuracy and selection time over the CM method [4] (85.9%, 2.3 seconds per selection), a high-performance gaze target selection algorithm. Overall, our research shows that both incorporating the prior and accumulating the posterior are effective in improving the performance of gaze-based target selection.
## REFERENCES
[1] C. Appert and S. Zhai. Using strokes as command shortcuts: cognitive benefits and toolkit support. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2289-2298, 2009.

[2] M. Ashmore, A. T. Duchowski, and G. Shoemaker. Efficient eye pointing with a fisheye lens. In Proceedings of Graphics Interface 2005, pp. 203-210. Citeseer, 2005.

[3] M. Barz, F. Daiber, D. Sonntag, and A. Bulling. Error-aware gaze-based interfaces for robust mobile gaze interaction. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pp. 1-10, 2018.

[4] M. Bernhard, E. Stavrakis, M. Hecher, and M. Wimmer. Gaze-to-object mapping during visual search in 3D virtual environments. ACM Transactions on Applied Perception (TAP), 11(3):1-17, 2014.

[5] P. Biswas and P. Langdon. A new interaction technique involving eye gaze tracker and scanning system. In Proceedings of the 2013 Conference on Eye Tracking South Africa, pp. 67-70, 2013.

[6] D. Buschek and F. Alt. TouchML: A machine learning toolkit for modelling spatial touch targeting behaviour. In Proceedings of the 20th International Conference on Intelligent User Interfaces, pp. 110-114, 2015.

[7] I. Chatterjee, R. Xiao, and C. Harrison. Gaze+Gesture: Expressive, precise and targeted free-space interactions. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 131-138, 2015.

[8] A. Cockburn, C. Gutwin, and S. Greenberg. A predictive model of menu performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 627-636, 2007.

[9] A. De Luca, R. Weiss, and H. Drewes. Evaluation of eye-gaze interaction methods for security enhanced PIN-entry. In Proceedings of the 19th Australasian Conference on Computer-Human Interaction: Entertaining User Interfaces, pp. 199-202, 2007.

[10] S. R. Ellis and R. J. Hitchcock. The emergence of Zipf's law: Spontaneous encoding optimization by users of a command language. IEEE Transactions on Systems, Man, and Cybernetics, 16(3):423-427, 1986.

[11] A. M. Feit, S. Williams, A. Toledo, A. Paradiso, H. Kulkarni, S. Kane, and M. R. Morris. Toward everyday gaze input: Accuracy and precision of eye tracking and implications for design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 1118-1130, 2017.
[12] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, vol. 52, pp. 139-183. Elsevier, 1988.

[13] S. Holm. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, pp. 65-70, 1979.

[14] M. X. Huang, J. Li, G. Ngai, and H. V. Leong. ScreenGlint: Practical, in-situ gaze estimation on smartphones. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2546-2557, 2017.

[15] T. Isomoto, T. Ando, B. Shizuki, and S. Takahashi. Dwell time reduction technique using Fitts' law for gaze-based target acquisition. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pp. 1-7, 2018.

[16] H. Istance, A. Hyrskykari, L. Immonen, S. Mansikkamaa, and S. Vickers. Designing gaze gestures for gaming: an investigation of performance. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, pp. 323-330, 2010.

[17] R. J. Jacob. The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Transactions on Information Systems (TOIS), 9(2):152-169, 1991.

[18] R. J. Jacob and K. S. Karn. Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In The Mind's Eye, pp. 573-605. Elsevier, 2003.

[19] L. Jigang, B. S. L. Francis, and D. Rajan. Free-head appearance-based eye gaze estimation on mobile devices. In 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), pp. 232-237. IEEE, 2019.

[20] C. Kumar, R. Hedeshy, I. S. MacKenzie, and S. Staab. TAGSwipe: Touch assisted gaze swipe for text entry. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2020.

[21] M. Kumar, J. Klingner, R. Puranik, T. Winograd, and A. Paepcke. Improving the accuracy of gaze input for interaction. In Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, pp. 65-68, 2008.

[22] M. Kumar, A. Paepcke, and T. Winograd. EyePoint: practical pointing and selection using gaze and keyboard. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 421-430, 2007.

[23] G. H. Kütt, K. Lee, E. Hardacre, and A. Papoutsaki. Eye-Write: Gaze sharing for collaborative writing. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019.

[24] M. Kytö, B. Ens, T. Piumsomboon, G. A. Lee, and M. Billinghurst. Pinpointing: Precise head- and eye-based target selection for augmented reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2018.
[25] W. Liu, G. Bailly, and A. Howes. Effects of frequency distribution on linear menu performance. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 1307-1312, 2017.

[26] C. Lutteroth, M. Penkar, and G. Weber. Gaze vs. mouse: A fast and accurate gaze-only click alternative. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pp. 385-394, 2015.

[27] P. Majaranta, U.-K. Ahola, and O. Špakov. Fast gaze typing with an adjustable dwell time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 357-360, 2009.

[28] P. Majaranta and K.-J. Räihä. Text entry by gaze: Utilizing eye-tracking. Text Entry Systems: Mobility, Accessibility, Universality, pp. 175-187, 2007.

[29] D. Miniotas, O. Špakov, and I. S. MacKenzie. Eye gaze interaction with expanding targets. In CHI'04 Extended Abstracts on Human Factors in Computing Systems, pp. 1255-1258, 2004.

[30] A. Morrison, X. Xiong, M. Higgs, M. Bell, and M. Chalmers. A large-scale study of iPhone app launch behaviour. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2018.

[31] M. E. Mott, S. Williams, J. O. Wobbrock, and M. R. Morris. Improving dwell-based gaze typing with dynamic, cascading dwell times. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2558-2570, 2017.

[32] A. Nayyar, U. Dwivedi, K. Ahuja, N. Rajput, S. Nagar, and K. Dey. OptiDwell: intelligent adjustment of dwell click time. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, pp. 193-204, 2017.

[33] M. Parisay, C. Poullis, and M. Kersten-Oertel. Felix: Fixation-based eye fatigue load index: a multi-factor measure for gaze-based interactions. In 2020 13th International Conference on Human System Interaction (HSI), pp. 74-81. IEEE, 2020.

[34] J. Pi and B. E. Shi. Probabilistic adjustment of dwell time for eye typing. In 2017 10th International Conference on Human System Interactions (HSI), pp. 251-257. IEEE, 2017.

[35] K.-J. Räihä and S. Ovaska. An exploratory study of eye typing fundamentals: dwell time, text entry rate, errors, and workload. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3001-3010, 2012.

[36] D. Rozado, T. Moreno, J. San Agustin, F. Rodriguez, and P. Varona. Controlling a smartphone using gaze gestures as the input mechanism. Human-Computer Interaction, 30(1):34-63, 2015.

[37] I. Schuetz, T. S. Murdison, K. J. MacKenzie, and M. Zannoli. An explanation of Fitts' law-like performance in gaze-based selection tasks using a psychophysics approach. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2019.
[38] L. Sidenmark and H. Gellersen. Eye&Head: Synergetic eye and head movement for gaze pointing and selection. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, pp. 1161-1174, 2019.

[39] H. Skovsgaard, J. C. Mateo, J. M. Flach, and J. P. Hansen. Small-target selection with gaze alone. In Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, pp. 145-148, 2010.

[40] O. Špakov. Comparison of gaze-to-objects mapping algorithms. In Proceedings of the 1st Conference on Novel Gaze-Controlled Applications, pp. 1-8, 2011.

[41] O. Špakov and P. Majaranta. Enhanced gaze interaction using simple head gestures. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pp. 705-710, 2012.

[42] S. Stellmach and R. Dachselt. Look & touch: gaze-supported target acquisition. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2981-2990, 2012.

[43] B. B. Velichkovsky, M. A. Rumyantsev, and M. A. Morozov. New solution to the Midas touch problem: Identification of visual commands via extraction of focal fixations. Procedia Computer Science, 39:75-82, 2014.

[44] E. Velloso, M. Carter, J. Newn, A. Esteves, C. Clarke, and H. Gellersen. Motion correlation: Selecting objects by matching their movement. ACM Transactions on Computer-Human Interaction (TOCHI), 24(3):1-35, 2017.

[45] D. Weir, S. Rogers, R. Murray-Smith, and M. Löchtefeld. A user-specific machine learning approach for improving touch accuracy on mobile devices. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, pp. 465-476, 2012.

[46] E. Wood and A. Bulling. EyeTab: Model-based gaze estimation on unmodified tablet computers. In Proceedings of the Symposium on Eye Tracking Research and Applications, pp. 207-210, 2014.

[47] S. Xu, H. Jiang, and F. C. Lau. Personalized online document, image and video recommendation via commodity eye-tracking. In Proceedings of the 2008 ACM Conference on Recommender Systems, pp. 83-90, 2008.

[48] X. Zhang, M. X. Huang, Y. Sugano, and A. Bulling. Training person-specific gaze estimators from user interactions with multiple devices. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2018.

[49] X. Zhang, X. Ren, and H. Zha. Improving eye cursor's stability for eye pointing tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 525-534, 2008.

[50] X. Zhang, X. Ren, and H. Zha. Modeling dwell-based eye pointing target acquisition. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2083-2092, 2010.

[51] S. Zhu, Y. Kim, J. Zheng, J. Y. Luo, R. Qin, L. Wang, X. Fan, F. Tian, and X. Bi. Using Bayes' theorem for command input: Principle, models, and applications. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-15, 2020.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/r6Z8apiZQt/Initial_manuscript_tex/Initial_manuscript.tex
§ BAYESGAZE: A BAYESIAN APPROACH TO EYE-GAZE BASED TARGET SELECTION
Category: Research
§ ABSTRACT
Selecting targets accurately and quickly with eye-gaze input remains an open research question. In this paper, we introduce BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in an eye-gaze trajectory as a signal for selecting a target. It then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by the sampling interval to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution. Our investigation shows that BayesGaze improves target selection accuracy and speed over a dwell-based selection method and the Center of Gravity Mapping (CM) method [4]. Our research shows that both accumulating the posterior and incorporating the prior are effective in improving the performance of eye-gaze based target selection.
Index Terms: Human-centered computing-Human computer interaction (HCI); Human-centered computing-Human computer interaction (HCI)—Interaction techniques; Human-centered computing-Human computer interaction (HCI)-HCI design and evaluation methods-User studies;
§ 1 INTRODUCTION
Selecting a target with the gaze remains a central problem of eye-based interaction. Two factors make this problem challenging [18]. First, gaze input is noisy because of both inadvertent eye movements and inevitable noise in the tracking device [49]. Therefore, it is difficult for a user to move their gaze to a particular position and stabilize it for an extended period of time. Second, unlike using a mouse, where a user can confirm the selection by clicking a button, gaze-based interaction lacks an easy-to-use approach to confirming the selection, adding a layer of difficulty to the design of a selection technique [42]. Although previous research has explored target selection using dwell [15, 17], motion correlation [44], and dynamic user interfaces [26, 29, 39], quickly and accurately selecting a target with gaze input remains an open research question.
Inspired by the literature showing that Bayes' theorem is a promising principle for handling uncertainty and noise in input signals (e.g., [4, 51]), we investigate how to apply a Bayesian perspective to determining the selected target given a gaze trajectory. Applying Bayes' theorem to gaze-based target selection raises two main challenges. First, it is not clear how to obtain the likelihood function for a gaze trajectory that contains a sequence of input signals (gaze points), i.e., the probability of observing a gaze trajectory given the target. Second, unlike touch or mouse input, which has a clear definition of the terminal moment of the input (e.g., lifting the finger from the touch screen or mouse button), gaze input lacks a clear delimiter of the completion of a selection action. It is therefore necessary to design a method to determine when the selection action is completed.
To address these challenges, we introduce BayesGaze (Figure 1), a Bayesian approach for determining the selected target given a gaze trajectory. This approach first views each sampling point in a gaze trajectory as a signal for selecting a target, and then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point. The likelihood of a target being selected is based on the distance between the sampling point and the target center, and the prior probability of a target being selected is modeled by a categorical distribution and updated after a selection action. BayesGaze then accumulates the posterior probabilities over all sampling points, weighted by the sampling interval, to determine the selected target. BayesGaze advances the Center of Gravity Mapping (CM) [4] by modeling the prior and incorporating it into the process of determining the selected target. This contribution is key to improving the performance of gaze-based target selection.
Figure 1: An overview of how BayesGaze works. Given a gaze position $s_i$ sampled at time $i$ in a gaze trajectory, BayesGaze updates the accumulated interest of selecting target $t$, denoted by $I_i(t)$, by adding $P(t \mid s_i)$ weighted by the sampling interval $\Delta\tau$ to $I_{i-1}(t)$. $P(t \mid s_i)$ is the posterior probability of selecting $t$ given $s_i$, which is calculated based on Bayes' theorem. If the accumulated interest $I_i(t)$ exceeds a threshold $\theta$, the target $t$ is selected. BayesGaze then updates the prior probability $P(t)$ accordingly.
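The update rule in the caption can be sketched end-to-end. This is a minimal 1D illustration under assumed values of $\sigma$ and $\theta$, not the paper's implementation:

```python
import math

def bayes_gaze_select(samples, centers, prior, sigma=0.5, theta=1.0):
    """Accumulate the posterior P(t | s_i), weighted by the sampling
    interval, until one target's accumulated interest reaches theta.

    `samples` is a list of (timestamp, position) pairs; `centers` and
    `prior` hold each target's center and prior probability.  Returns
    the index of the selected target, or None if theta is never reached.
    """
    interest = [0.0] * len(centers)
    prev_t = samples[0][0]
    for t, s in samples:
        dt = t - prev_t
        prev_t = t
        # Likelihood of the sample under each target (1D Gaussian kernel)
        like = [math.exp(-((s - c) ** 2) / (2 * sigma ** 2)) for c in centers]
        evidence = sum(l * p for l, p in zip(like, prior))
        if evidence == 0.0:
            continue
        for k in range(len(centers)):
            # Bayes' theorem: posterior = likelihood * prior / evidence
            interest[k] += (like[k] * prior[k] / evidence) * dt
        best = max(range(len(centers)), key=lambda k: interest[k])
        if interest[best] >= theta:
            return best
    return None
```

Because the posterior is weighted by the sampling interval, a gaze that hovers near one target accumulates interest for it at roughly one unit per second, so $\theta$ directly trades selection speed against robustness to stray fixations.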
We report on a controlled experiment showing that BayesGaze improves target selection accuracy (from 82.1% to 88.3%) and speed (from 2.49 seconds per selection to 2.23 seconds) over a dwell-based selection method. BayesGaze also outperforms the CM method [4]. Overall, our investigation shows that accumulating the posterior probability and incorporating the prior are effective in improving the performance of gaze-based target selection.
§ 2 RELATED WORK
BayesGaze builds on previous work on gaze input and on Bayesian approaches. Here we review related work in gaze-based target selection techniques, Bayesian approaches to gaze input, and gaze-tracking technology.
§ 2.1 GAZE BASED TARGET SELECTION
Gaze-based target selection is a key technique for supporting a number of gaze interaction technologies such as gaze-based text input [35], gaming [16], and smart device control [36]. Dwell-based target selection (Dwell) [15, 17, 50] is the most well-known and most widely used target selection method. It requires a user to dwell their gaze on a target for a specific uninterrupted period of time (usually several hundred milliseconds to 1 or 2 seconds) to select it. Such a highly concentrated action often results in eye fatigue [33]. Many works have been devoted to improving the Dwell technique by enabling a shorter dwell time and to finding other gaze-based target selection methods. For example, letting a user adjust the dwell time manually can lead to a shorter dwell time, from 876 ms to 282 ms [27]. Previous research [15] used Fitts' law to model gaze input and suggested selecting the target once the user's gaze fixates on the target. Other works have explored adjusting the dwell time based on how likely the target is to be selected [31, 34].
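For comparison, the dwell mechanism described above can be sketched in a few lines. The `hit_test` callback and the 600 ms threshold are illustrative assumptions, not a specific system's values.

```python
def dwell_select(samples, hit_test, dwell_time=0.6):
    """Return a target id once the gaze stays on that target for an
    uninterrupted `dwell_time` seconds; None if no selection happens.

    `samples` is a list of (timestamp, position) pairs and `hit_test`
    maps a gaze position to a target id (or None for a miss).
    """
    current, start = None, None
    for t, pos in samples:
        target = hit_test(pos)
        if target is not None and target == current:
            if t - start >= dwell_time:
                return target
        else:
            # Gaze moved to a different target (or off-target): restart
            current, start = target, t
    return None
```

The restart on every off-target sample is exactly what makes Dwell fragile for small targets: a single noisy sample outside the target resets the timer.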
In addition to dwell-based methods, researchers have proposed alternatives to improve gaze-based target selection from two perspectives: handling the noisy gaze input and designing new selection actions [18, 49]. To accommodate the inaccuracy of eye-gaze input, some works used dynamic expansion/zooming of the display [29, 39] or new UIs; e.g., Actigaze [26] used a set of confirmation buttons to make gaze target selection easier. Other works investigated error-aware gaze target selection so that the inaccuracy of target selection can be tracked and the system can provide design guidelines for UIs [3, 11]. Gaze target selection actions are also well explored. For example, motion correlation between the target movement and the gaze trajectory has been proposed to determine the selected target [44]. Actions such as blinking [7] and gaze gestures [9] have also been explored for target selection. Previous research has also used multimodal input to get rid of the dwelling action. For example, once the user gazes at the target, a separate device, such as a keyboard [22] or hand-held touchscreen [42], can be employed to perform the selection action.
|
| 34 |
+
|
| 35 |
+
§ 2.2 BAYESIAN APPROACHES TO TARGET SELECTION
There is a growing interest in applying a Bayesian perspective to handle uncertainty in target selection. Some of this research is related to gaze input. For example, previous research has proposed probabilistic frameworks to deal with uncertainty in the input process, such as handling the uncertainty of touch actions on mobile devices [6, 45] and touchscreens [51], and also handling uncertainty in gaze-based interactions [4, 32].
Our work is related to the recent work BayesianCommand, which uses Bayes' theorem to handle uncertainty in touch target selection and word-gesture input [51]. The fundamental difference between our work and BayesianCommand is that in our work, gaze input does not have well-defined starting or ending moments, whereas touch input does (i.e., landing a finger on the screen starts the input, and lifting the finger ends it). Therefore, BayesianCommand cannot be applied directly to gaze input.
Our research is also related to previous work on using a Bayesian perspective to address the gaze-to-object mapping problem, i.e., the Center of Gravity Mapping method (CM) [4]. CM is an improved version of the FM algorithm [47], which performed the best among 9 extant gaze-to-object mapping algorithms [40]. The main difference between our work and CM is that CM neither models nor updates the prior, while our approach incorporates the prior into the process of deciding the selected target, which turns out to be the primary reason why BayesGaze improves target selection accuracy and reduces selection time. Furthermore, BayesGaze is designed for the gaze target selection problem while CM is designed for the gaze-to-object mapping problem. Gaze-based target selection is a different problem from gaze-to-object mapping [4, 40] because the former requires a mechanism to commit the selection while the latter does not.
§ 2.3 GAZE TRACKING TECHNOLOGY
Gaze tracking technology is becoming increasingly mature and available. For example, a number of professional gaze trackers are available, including the Tobii 4C [23], SMI REDn [20], and EyeLink 1000 Plus [37], costing from several hundred to a few thousand dollars. Previous research has also enabled gaze tracking with off-the-shelf cameras by using a fisheye camera [2], the front-facing RGB camera of a tablet [46], or by leveraging the glint of the screen on the user's cornea [14]. Deep learning techniques have also been used to predict gaze position using Convolutional Neural Networks [19, 48].
Unlike the above approaches, we enabled gaze tracking with an off-the-shelf and widely used iPad Pro, equipped with a TrueDepth camera and powered by Apple's ARKit.
§ 3 BAYESGAZE: A BAYESIAN PERSPECTIVE ON GAZE TARGET SELECTION
§ 3.1 A FORMAL DESCRIPTION OF THE GAZE-BASED TARGET SELECTION PROBLEM
The gaze-based target selection problem can be formally described as the following research question: given a gaze trajectory, which one is the intended target among a set of candidates denoted by $T = \left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{N}}\right\}$?
As shown in previous research [4, 40, 47], the existing algorithms for solving the gaze-based target selection problem can be described through an interest accumulation framework: each target candidate (denoted by $t$) accumulates a certain amount of "time" or "interest" from gaze input, until one of them reaches a threshold (denoted by $\theta$) for being selected. Under this framework, the widely adopted dwell-based target selection method can be expressed as follows.
Dwell-based Target Selection Method. Assuming that the gaze trajectory is denoted by $S = \left\{ {{s}_{1},{s}_{2},\ldots ,{s}_{K}}\right\}$ where ${s}_{i}$ is a sampling point along the gaze trajectory at time $i$ , the accumulated "interest" for a target candidate $t$ at time $i$ , denoted by ${I}_{i}\left( t\right)$ , is calculated as:
$$
{I}_{i}\left( t\right) = \left\{ \begin{array}{ll} {I}_{i - 1}\left( t\right) + {\Delta \tau }, & \text{ if }{s}_{i}\text{ is within the target }t \\ 0, & \text{ otherwise } \end{array}\right. \tag{1}
$$
where ${s}_{i}$ is the gaze position at time $i$, and ${\Delta \tau }$ is the sampling interval. ${I}_{i}\left( t\right)$ represents the duration during which the gaze position has stayed continuously within the target candidate $t$. If the gaze position moves outside the target, ${I}_{i}\left( t\right)$ is reset to 0. To select a target, the eye-gaze position needs to stay continuously within a target for a period of $\theta$. In other words, the selected target is the one (denoted by ${t}^{ * }$) whose accumulated selection interest ${I}_{i}\left( {t}^{ * }\right)$ first reaches $\theta$ (i.e., ${I}_{i}\left( {t}^{ * }\right) \geq \theta$).
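Equation 1 can be sketched in a few lines. The 1-D `Bar` target class, the 60 Hz sampling interval, and the hit test below are illustrative assumptions, not details from the paper:

```python
from dataclasses import dataclass

DT = 1 / 60.0  # assumed sampling interval (60 Hz tracker)

@dataclass(frozen=True)
class Bar:
    """A 1-D horizontal-bar target spanning [top, bottom) in cm."""
    top: float
    bottom: float

    def contains(self, y: float) -> bool:
        return self.top <= y < self.bottom

def dwell_select(samples, targets, theta):
    """Return (target, time) once an uninterrupted dwell reaches theta;
    None if no target is selected within the trajectory."""
    interest = {t: 0.0 for t in targets}
    for i, y in enumerate(samples):
        for t in targets:
            # Accumulate while the gaze stays inside; reset on exit.
            interest[t] = interest[t] + DT if t.contains(y) else 0.0
            if interest[t] >= theta:
                return t, (i + 1) * DT
    return None
```

Feeding a steady gaze inside a bar selects it after roughly `theta` seconds, while a gaze that keeps leaving the bar never accumulates enough interest.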
§ 3.2 THE BAYESGAZE ALGORITHM
Under the framework of "accumulating selection interest", we propose BayesGaze, a Bayesian perspective on gaze-based target selection. It views each sampling point in a gaze trajectory as a signal for selecting a target, and then uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point. BayesGaze then accumulates the posterior probabilities over all sampling points, weighted by the sampling interval, as the accumulated interest of selecting a target. A target candidate is selected once its accumulated interest reaches a threshold $\theta$. Formally, given the sampling point ${s}_{i}$, the accumulated interest of selecting a target $t$ is calculated as follows:
$$
{I}_{i}\left( t\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot P\left( {t \mid {s}_{i}}\right) . \tag{2}
$$
The posterior $P\left( {t \mid {s}_{i}}\right)$ can be estimated according to Bayes’ theorem, assuming there are $N$ target candidates:
$$
P\left( {t \mid {s}_{i}}\right) = \frac{P\left( {{s}_{i} \mid t}\right) P\left( t\right) }{P\left( {s}_{i}\right) } = \frac{P\left( {{s}_{i} \mid t}\right) P\left( t\right) }{\mathop{\sum }\limits_{{j = 1}}^{N}P\left( {{s}_{i} \mid {t}_{j}}\right) P\left( {t}_{j}\right) }, \tag{3}
$$
where $P\left( t\right)$ is the prior probability of target $t$ being the intended target without observing the current gaze input trajectory, and $P\left( {{s}_{i} \mid t}\right)$ is the probability of ${s}_{i}$ if the intended target is $t$ (the likelihood).
BayesGaze has the following characteristics. First, BayesGaze resumes the accumulation of selection interest from where it left off if the gaze trajectory accidentally leaves a target but returns to it later. This addresses a problem of the dwell-based method (Equation 1), in which the accumulated interest for a target is reset to 0 whenever the eye-gaze position moves outside it. Second, it weights the accumulated interest by the distance between the gaze point and the target center, through the likelihood function $P\left( {{s}_{i} \mid t}\right)$: the closer a gaze point is to the target center, the more "interest" it contributes to the target selection. Third, it updates the prior distribution of targets $P\left( t\right)$ and incorporates it into the procedure of deciding the selected target.
In the following, we introduce how to estimate the prior distribution $P\left( t\right)$ and the likelihood $P\left( {s \mid t}\right)$, which are key to applying BayesGaze.
§ 3.2.1 PRIOR PROBABILITY MODEL
This part introduces a frequency model to estimate the prior distribution $P\left( t\right)$ based on the observable target selection history. We assume that the user does not select targets randomly and that target selection follows some distribution, e.g., Zipf's law. This assumption is based on the selection patterns observed in menu selection [8, 25, 51], smartphone app launching [30], and command triggering [1, 10, 51], all of which are tasks that gaze target selection can support.
We model the prior distribution (i.e., a target candidate being selected prior to observing the current gaze trajectory) as a categorical distribution. More specifically, the outcome of a gaze-based selection trial that results in a selected target is viewed as a random variable $x$ whose value is one of $N$ categories (the $N$ target candidates). The core parameter of this random variable $x$ is the parameter vector $\mathbf{p} = \left( {P\left( {t}_{1}\right) ,P\left( {t}_{2}\right) ,\ldots ,P\left( {t}_{N}\right) }\right)$ , which describes the probability of each category. As a common practice in Bayesian inference, we also view this parameter vector $\mathbf{p}$ as a random variable and give it a prior distribution, using the Dirichlet distribution.
According to the properties of Dirichlet distributions, after each target selection trial we can update the expected value of the posterior $\mathbf{p}$ as follows:
$$
P\left( {t}_{i}\right) = \frac{k + {c}_{i}}{k \cdot N + \mathop{\sum }\limits_{{j = 1}}^{N}{c}_{j}}, \tag{4}
$$
where $N$ is the number of candidate targets (e.g., the number of menu items), ${c}_{i}$ is the number of times we have observed target ${t}_{i}$ being selected, and $k$ is the pseudocount of the Dirichlet prior, a hyper-parameter of the distribution. The parameter $k$ can also be viewed as the update rate, a positive constant that controls how quickly the $P\left( {t}_{i}\right)$ are updated. Note that the prior updating model (Equation 4) is the same as the model proposed by Zhu et al. [51], although those authors do not describe it in terms of categorical-Dirichlet distributions. We use the expected value of $\mathbf{p}$ (Equation 4) as the prior model in BayesGaze (Equation 3).
This prior model matches our expectations well. When no target selection has been observed, the probability $P\left( {t}_{i}\right)$ is $\frac{k}{k \cdot N} = \frac{1}{N}$, which means that all candidate targets have equal probability. In contrast, when enough target selections have been observed, i.e., ${c}_{i} \gg k$, we have $P\left( {t}_{i}\right) \approx \frac{{c}_{i}}{\mathop{\sum }\limits_{j}{c}_{j}}$, which means that $P\left( {t}_{i}\right)$ can be estimated from the frequency with which ${t}_{i}$ has been selected before.
By setting different values of $k$, we can balance $P\left( {t}_{i}\right)$ between two extreme cases: 1) when $k \rightarrow + \infty$, we have $P\left( {t}_{i}\right) \approx \frac{1}{N}$, i.e., the prior probabilities of all candidate targets are equal; 2) when $k = 0$, we have $P\left( {t}_{i}\right) = \frac{{c}_{i}}{\mathop{\sum }\limits_{j}{c}_{j}}$, i.e., the prior probability is based solely on the historical selection frequency. We later use empirical data to determine an optimal value for $k$.
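Equation 4 and the two extremes above can be checked with a few lines; representing the per-target counts ${c}_{i}$ as a plain list is an illustrative choice:

```python
def prior(counts, k):
    """Dirichlet-mean prior update (Equation 4):
    P(t_i) = (k + c_i) / (k * N + sum_j c_j)."""
    n = len(counts)
    total = k * n + sum(counts)
    return [(k + c) / total for c in counts]

# No selections observed yet: every target gets 1/N.
print(prior([0, 0, 0, 0], k=1))   # [0.25, 0.25, 0.25, 0.25]

# With k = 0 the prior is purely the empirical selection frequency.
print(prior([3, 1], k=0))         # [0.75, 0.25]

# With c_i >> k the prior approaches the empirical frequency.
print(prior([84, 42, 28, 21], k=1))
```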
§ 3.2.2 LIKELIHOOD MODEL
The goal of this step is to estimate $P\left( {{s}_{i} \mid t}\right)$ , the likelihood of observing ${s}_{i}$ if $t$ is the intended target. Since ${s}_{i}$ is a single gaze position, a reasonable assumption is that $P\left( {{s}_{i} \mid t}\right)$ is higher if ${s}_{i}$ is closer to the center of $t$ . We follow Bernard et al. [4] and use a Gaussian density function to describe the likelihood of observing ${s}_{i}$ , a common method for modeling likelihood for a single-point target selection:
$$
P\left( {{s}_{i} \mid t}\right) = \frac{1}{\sqrt{{2\pi }{\sigma }^{2}}}\exp \left( {-\frac{{\begin{Vmatrix}{s}_{i} - {c}_{t}\end{Vmatrix}}^{2}}{2{\sigma }^{2}}}\right) , \tag{5}
$$
where ${c}_{t}$ is the center of target $t$, the term $\begin{Vmatrix}{{s}_{i} - {c}_{t}}\end{Vmatrix}$ is the ${L}^{2}$ Euclidean norm of the vector ${s}_{i} - {c}_{t}$, and $\sigma$ is an empirical parameter defining how concentrated the gaze points should be. The parameter $\sigma$ controls how much interest can be accumulated at a given distance. If $\sigma$ is too small, a target accumulates high interest only when the gaze point is close to the target center, which could make the target hard to select. On the other hand, if $\sigma$ is too large, the accumulated interest for neighboring targets could become large and cause mis-selections. We estimate an optimal $\sigma$ from real data in the next section.
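The effect of $\sigma$ can be illustrated numerically. The sketch below evaluates Equation 5 in a 1-D layout where two hypothetical target centers sit 2 cm apart; the specific distances are made-up values for illustration:

```python
import math

def likelihood(s, center, sigma):
    """Gaussian density (Equation 5) of gaze sample s, 1-D case."""
    d2 = (s - center) ** 2
    return math.exp(-d2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# A gaze point 0.2 cm from target A's center; neighbor B is 2 cm away.
s = 0.2
for sigma in (0.28, 1.4):
    la = likelihood(s, 0.0, sigma)   # target A, centered at 0 cm
    lb = likelihood(s, 2.0, sigma)   # neighbor B, centered at 2 cm
    # Small sigma: A dominates by many orders of magnitude.
    # Large sigma: B receives a substantial share of the interest.
    print(f"sigma={sigma}: P(s|A)/P(s|B) = {la / lb:.1f}")
```

With $\sigma = 0.28$ cm the ratio is astronomically large, so nearly all interest flows to the fixated target; with $\sigma = 1.4$ cm the neighbor receives almost a third as much, which is exactly the cross-talk the text warns about.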
After obtaining both the prior probability and the likelihood, we can use BayesGaze to perform target selection. The BayesGaze algorithm is summarized in Algorithm 1. Note that the algorithm can be run online, i.e. when a gaze point ${s}_{i}$ is sampled by the gaze tracker, the top-level for-loop can be executed to check if a target is selected.
Algorithm 1 BayesGaze Algorithm
Input: Target set $T = \left\{ {{t}_{1},{t}_{2},\ldots ,{t}_{N}}\right\}$, Gaze trajectory $S = \left\{ {{s}_{1},{s}_{2},\ldots ,{s}_{K}}\right\}$, Threshold $\theta$

Output: Selected target $t$, Selection time ${\tau }_{\text{sel}}$

for ${s}_{i}$ in $S$ do
&nbsp;&nbsp;for ${t}_{j}$ in $T$ do
&nbsp;&nbsp;&nbsp;&nbsp;Obtain prior probability $P\left( {t}_{j}\right)$ and compute likelihood $P\left( {{s}_{i} \mid {t}_{j}}\right)$ using Equation 5;
&nbsp;&nbsp;&nbsp;&nbsp;Compute accumulated interest ${I}_{i}\left( {t}_{j}\right)$ from Equation 2;
&nbsp;&nbsp;&nbsp;&nbsp;if ${I}_{i}\left( {t}_{j}\right) > \theta$ then
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Update prior probability $P\left( {t}_{m}\right)$ for each ${t}_{m} \in T$ given that ${t}_{j}$ is selected, using Equation 4;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return ${t}_{j}$, $i \cdot {\Delta \tau }$
&nbsp;&nbsp;&nbsp;&nbsp;end if
&nbsp;&nbsp;end for
end for
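For concreteness, Algorithm 1 can be sketched in Python. The 1-D bar targets (represented by their centers), the 60 Hz sampling interval, and the class structure are our illustrative assumptions; the default parameter values are a plausible choice under the constraint $k=1$, $\sigma=0.28$ cm, $\theta=0.9$ discussed later in the paper:

```python
import math

DT = 1 / 60.0  # assumed sampling interval (60 Hz)

class BayesGaze:
    def __init__(self, centers, k=1.0, sigma=0.28, theta=0.9):
        self.centers = centers            # 1-D target centers, in cm
        self.k, self.sigma, self.theta = k, sigma, theta
        self.counts = [0] * len(centers)  # selection history c_i

    def _prior(self, j):
        """Dirichlet-mean prior (Equation 4)."""
        n = len(self.centers)
        return (self.k + self.counts[j]) / (self.k * n + sum(self.counts))

    def _lik(self, s, c):
        """Gaussian likelihood (Equation 5), 1-D case."""
        return (math.exp(-(s - c) ** 2 / (2 * self.sigma ** 2))
                / math.sqrt(2 * math.pi * self.sigma ** 2))

    def select(self, samples):
        """Run one trial: feed gaze samples and return
        (target index, selection time) or None."""
        interest = [0.0] * len(self.centers)
        for i, s in enumerate(samples):
            # Posterior over targets for this sample (Equation 3).
            evid = [self._lik(s, c) * self._prior(j)
                    for j, c in enumerate(self.centers)]
            z = sum(evid)
            for j in range(len(self.centers)):
                interest[j] += DT * evid[j] / z   # Equation 2
                if interest[j] > self.theta:
                    self.counts[j] += 1           # update the prior
                    return j, (i + 1) * DT
        return None
```

With all counts at zero the prior is uniform ($1/N$), so the first trial behaves like the CM method; each committed selection then skews subsequent priors toward frequently selected targets.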
§ 3.2.3 BAYESGAZE WITHOUT PRIOR
If we consider the prior to be a uniform distribution before every trial (i.e., $\forall {t}_{i} \in T, P\left( {t}_{i}\right) = 1/N$), BayesGaze becomes identical to the Center of Gravity Mapping algorithm [4] (referred to as the CM method hereafter), a previously proposed method for deciding a target in a gaze-to-object mapping task. Under this special condition, the accumulated interest of the CM method can be calculated by Equation 2 with the prior $P\left( t\right) = 1/N$, that is:
$$
{I}_{i}\left( t\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot P\left( {t \mid {s}_{i}}\right) = {I}_{i - 1}\left( t\right) + {\Delta \tau } \cdot \frac{P\left( {{s}_{i} \mid t}\right) }{\mathop{\sum }\limits_{{j = 1}}^{N}P\left( {{s}_{i} \mid {t}_{j}}\right) }, \tag{6}
$$
where $P\left( {{s}_{i} \mid t}\right)$ is calculated by Equation 5. Therefore, we view BayesGaze as an improvement over the CM method that updates and incorporates the prior in the target selection process. The CM method is also very similar to the previously proposed Fractional Mapping method [40, 47]. We later compare BayesGaze with the CM method to examine to what degree incorporating the prior improves gaze target selection performance.
In order to successfully apply the BayesGaze algorithm, we need to obtain the values of three parameters, denoted as a 3-tuple $\left\lbrack {k,\sigma ,\theta }\right\rbrack$, where $k$ is part of the prior probability model (Equation 4), $\sigma$ is part of the likelihood model (Equation 5), and $\theta$ is the threshold of the accumulated interest for committing a selection. We carried out a study to collect gaze data for target selection and determined the optimal parameter values from that data.
§ 4 PARAMETER DETERMINATION
We adopted a data-driven simulation approach to search for the optimal parameter values for the BayesGaze algorithm. The procedure consists of two phases. In Phase 1, we carried out a Wizard-of-Oz study to collect gaze input data for selecting a target. In Phase 2, we fed the collected data to the BayesGaze algorithm to search for the optimal parameter values. We also searched for the optimal parameter for the Dwell method (Equation 1) and for the CM method (Equation 6).
§ 4.1 PHASE 1: COLLECTING GAZE INPUT DATA VIA A WIZARD-OF-OZ STUDY
We first carried out a Wizard-of-Oz study to collect gaze input data for selecting a target. We focused on a 1-dimensional target selection task, where the target is a horizontal bar and gaze motion is vertical. We picked this task because 1-dimensional pointing is a typical target selection task, and horizontal bars are widely used UI elements on mobile computing devices such as smartphones and tablets.
§ 4.1.1 PARTICIPANTS
Twelve users (4 female) between 23 and 31 years old (average 27.25 ± 2.22) participated in the experiment. All of them had normal or corrected-to-normal sight and none of them was color blind. None had prior experience with gaze tracking devices or applications.
§ 4.1.2 APPARATUS
We used the 11-inch iPad Pro for gaze tracking and to run the experiment because it is widely and conveniently accessible. Gaze tracking was implemented based on the iPad's TrueDepth camera and Apple's ARKit library, and the sampling rate was 60 Hz. Specifically, we used the leftEyeTransform and rightEyeTransform provided by the ARKit library and performed a hitTestWithSegment call to obtain the raw gaze position. Based on the recommendation of [11], we used the Outlier Correction filter with a triangle kernel [21] to obtain smooth gaze tracking. The filter contains a saccade/fixation detection module so that it can apply sliding windows of different lengths separately for saccades and fixations. The thresholds on the $x$ and $y$ axes for detecting a saccade were both set to 0.5° (calculated based on the estimated face-screen distance). For fixations, the sliding window size of the filter was set to 40, as suggested by [11]. For saccades, the sliding window size was set to 10, rather than using the raw position directly, to increase gaze tracking stability. We also followed the findings of previous works [24, 38, 41] and allowed head movements to improve target selection performance. To test gazing accuracy, we used a gazing task in which the user gazed at 40 different points on the screen with a cursor showing where they were looking. The result showed a mean error of 0.67° with a standard deviation of 0.85°, suggesting that users can accurately control their gaze to select targets.
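The filtering scheme just described can be sketched as follows. This is a minimal illustration of a triangle-kernel sliding window with saccade/fixation-dependent window sizes (10 vs. 40 samples); the 1-D signal, the simple velocity-based saccade test, and its threshold are simplifying assumptions, and the actual filter in [21] additionally performs outlier correction:

```python
def triangle_smooth(history, window):
    """Weighted average over the last `window` samples; weights rise
    linearly toward the newest sample (triangle kernel)."""
    recent = history[-window:]
    weights = range(1, len(recent) + 1)
    return sum(w * x for w, x in zip(weights, recent)) / sum(weights)

def smooth_stream(samples, saccade_threshold=0.5):
    """Smooth a 1-D gaze signal, using a short window (10) during
    saccades and a long window (40) during fixations."""
    out, history = [], []
    for x in samples:
        # Crude saccade test: a large jump since the previous sample.
        is_saccade = bool(history) and abs(x - history[-1]) > saccade_threshold
        history.append(x)
        out.append(triangle_smooth(history, 10 if is_saccade else 40))
    return out
```

The shorter saccade window keeps latency low during rapid eye movement, while the longer fixation window suppresses jitter when the gaze is (nearly) still.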
§ 4.1.3 PROCEDURE
During the experiment, the participant sat in front of a desk where an iPad Pro running the experiment was placed on a phone holder. The participant could freely adjust the iPad position, and was instructed to keep the distance between their eyes and the iPad at around 40 cm.
The study included multiple target selection trials. In each trial, a blue horizontal bar was displayed on the screen as the target and the participant was instructed to select it via gaze input. Fig. 2 shows the setup. Before each trial, the participant first moved the gaze-controlled cursor into the gray starting bar. After 3 seconds, the starting bar turned green, signaling the start of the trial. The participant was then instructed to move the cursor with their gaze to select a target of width $W$ at a distance $D$ from the starting bar. We collected gaze input data for 5 seconds after a trial started, assuming that 5 seconds was long enough for the participants to select a target. Each participant took a break after every 15 trials. In total the experiment lasted around 15 minutes per participant.
We adopted a within-participant $3 \times 4 \times 2$ design with three levels of target width $W$: 2 cm (2.86°, calculated based on a participant-screen distance of 40 cm), 3 cm (4.29°), and 4 cm (5.76°); four levels of distance $D$: 6 cm (8.53°), 8 cm (11.31°), 10 cm (14.04°), and 12 cm (16.70°); and two levels of gaze motion direction: up or down from the starting bar. We counterbalanced the factors by randomizing the order of trials in the experiment.
Figure 2: A screenshot of the Wizard-of-Oz study. The green button is the starting bar, and the target is shown as a blue bar. There is a red cursor indicating where the participant is looking.
In total, the study resulted in 12 participants × 3 target sizes × 4 distances × 2 directions × 2 repetitions = 576 trials.
§ 4.2 PHASE 2: DETERMINING PARAMETERS FROM THE COLLECTED DATA
We created a set of gaze-based target selection tasks, simulated gaze input based on the data collected in Phase 1, and searched for the parameter values for BayesGaze, CM, and Dwell that led to high input accuracy and fast input speed.
§ 4.2.1 SIMULATING EYE-GAZE TARGET SELECTION TASKS
We first created a set of target selection tasks in which a user is supposed to control their gaze to select a target among $N$ candidates. These $N$ candidates are stacked together with no gap between them, to simulate the common vertical list or vertical menu design of mobile devices (e.g., settings menus in iOS). We included the same 3 target sizes in the simulation as in the data collection study (2, 3, and 4 cm) and set $N = 5$. The gaze trajectories for selecting a target were obtained from the collected data, according to the target sizes. Fig. 3 shows examples of simulated gaze trajectories for selecting different targets on the screen.
Since previous research has shown that the distribution of menu items being selected follows Zipf's distribution [1, 8, 10, 25, 30, 51], we assumed that the frequency of each candidate being the target follows Zipf's law:
$$
f\left( {l;\alpha ,N}\right) = \frac{1/{l}^{\alpha }}{\mathop{\sum }\limits_{{n = 1}}^{N}\left( {1/{n}^{\alpha }}\right) }, \tag{7}
$$
where $N$ is the number of candidate targets (in the simulation, $N = 5$), $l \in \{ 1,2,\ldots ,N\}$ is the rank of each target, $n$ is the summation index over the ranks, and $\alpha$ is the value of the exponent characterizing the distribution. We included 4 $\alpha$ values (0.5, 1, 2, 3) in the simulation.
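Equation 7 can be turned into per-target trial counts as follows. Rounding the expected counts to integers is our assumption; with $N = 5$, $\alpha = 1$, and 192 trials per target size, it reproduces (up to the random assignment to indices) the example frequencies discussed below:

```python
def zipf_freqs(n, alpha, total):
    """Allocate `total` trials across ranks 1..n by Zipf's law (Eq. 7)."""
    weights = [1 / (l ** alpha) for l in range(1, n + 1)]
    z = sum(weights)
    return [round(total * w / z) for w in weights]

print(zipf_freqs(5, 1, 192))   # [84, 42, 28, 21, 17]
```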
Figure 3: An example of using the same gaze trajectory to simulate selecting a target (the blue one) at different indices among the five horizontal bars. The red line in each panel shows the same gaze trajectory collected in the Wizard-of-Oz experiment; the red dot indicates the start of the trajectory. A simulated user is selecting the 2nd (a), the 3rd (b), and the 4th (c) target among 5 target candidates, with the same gaze trajectory.
For each target size, we had 192 collected trajectories. Among the $N$ candidates, we randomly assigned the frequencies. For example, when $N = 5$ and $\alpha = 1$, the generated frequencies could be [28, 84, 21, 42, 17], which means that the first target among the 5 candidates will be selected 28 times, the second 84 times, etc. We randomly selected trajectories (without repetition) to simulate selecting targets at different indices given the generated frequencies.
§ 4.2.2 SEARCHING FOR THE PARAMETER VALUES
Given a particular parameter tuple $\left\lbrack {k,\sigma ,\theta }\right\rbrack$, we ran the BayesGaze algorithm to determine the selected target in the simulated target selection tasks. We viewed the process of searching for the optimal parameter values as an optimization problem: determining the parameter values that optimize target selection performance, measured in terms of success rate and selection time.
We performed a grid search for the optimal values of $k$, $\sigma$, and $\theta$. In the grid search, $k$ ranged from 0.5 to 5 in steps of 0.5, $\sigma$ ranged from 0.14 cm (0.2°) to 1.4 cm (2°) in steps of 0.14 cm, and $\theta$ ranged from 0.2 seconds to 2 seconds in steps of 0.1 seconds. The simulation results showed that different values of $k$ did not influence performance. We chose ${k}^{ * } = 1$, as in [51]. When $k = 1$, the Dirichlet prior of the categorical distribution, before observing any selection results, becomes a uniform distribution, i.e., an equally distributed prior. The best values of $\sigma$ for BayesGaze ranged from 0.28 cm to 0.56 cm. We chose ${\sigma }^{ * } = 0.28$ cm to reduce the chance of mis-selections.
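The grid described above can be enumerated directly; `evaluate` in the comment is a hypothetical placeholder for running the simulated selection tasks with a given parameter tuple:

```python
import itertools

ks = [0.5 * i for i in range(1, 11)]         # k: 0.5 .. 5.0, step 0.5
sigmas = [0.14 * i for i in range(1, 11)]    # sigma: 0.14 .. 1.4 cm, step 0.14
thetas = [0.2 + 0.1 * i for i in range(19)]  # theta: 0.2 .. 2.0 s, step 0.1

grid = list(itertools.product(ks, sigmas, thetas))
print(len(grid))   # 10 * 10 * 19 = 1900 parameter tuples

# results = [(params, evaluate(*params)) for params in grid]
```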
Because we want to improve two objectives, success rate and selection time, we adopted a Pareto optimization process to find the optimal $\theta$. The process generates a set of parameter values, called the Pareto-optimal set or Pareto front. Each parameter setting in the set is Pareto-optimal, meaning that neither metric (success rate or selection time) can be improved without hurting the other. We plot the Pareto front of BayesGaze in Fig. 4a. We followed the exact same optimization process to search for the optimal parameter values for the CM and Dwell methods, and generated the corresponding Pareto fronts in Fig. 4b and 4c. For the CM method, the parameters form a 2-tuple $\left\lbrack {\sigma ,\theta }\right\rbrack$, as it does not incorporate the prior into the accumulated interest. For the Dwell method, the only parameter is $\theta$, the threshold for deciding whether a target is selected based on the accumulated selection interest.
To balance the two objectives, we assigned equal weights to success rate and selection time. We first normalized the success rate and selection time to the range $\left\lbrack {0,1}\right\rbrack$, and then picked the parameter value ${\theta }^{ * }$ that leads to the best overall score $S$, defined as:
$$
S = {0.5} \times \text{ SuccessRate } - {0.5} \times \text{ SelectionTime, } \tag{8}
$$
where SuccessRate and SelectionTime are the normalized values between 0 and 1, computed from the highest and lowest values displayed in Fig. 4. The coefficient of SelectionTime is -0.5 because the lower the selection time, the better the selection performance. The optimal parameters are the same across the different $\alpha$ values and are summarized in Table 1.
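The Pareto filtering and the scoring of Equation 8 can be sketched as follows; the candidate (success rate, selection time) pairs are made-up illustrative values, not the paper's measurements:

```python
def pareto_front(points):
    """Keep (rate, time) points not dominated by any other point
    (another point with rate >= and time <=)."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

def best_by_score(front):
    """Normalize both metrics to [0, 1] over the front, then apply
    Equation 8: S = 0.5 * rate - 0.5 * time."""
    rates = [p[0] for p in front]
    times = [p[1] for p in front]
    def norm(v, lo, hi):
        return (v - lo) / (hi - lo) if hi > lo else 0.0
    return max(front, key=lambda p:
               0.5 * norm(p[0], min(rates), max(rates))
               - 0.5 * norm(p[1], min(times), max(times)))

# One (success_rate, selection_time) pair per candidate theta:
candidates = [(0.80, 0.9), (0.90, 1.2), (0.95, 1.8), (0.85, 1.4)]
front = pareto_front(candidates)
print(best_by_score(front))   # (0.9, 1.2)
```

Here (0.85, 1.4) is dominated by (0.90, 1.2) and drops off the front, and the balanced score then prefers the middle of the front over the extremes, mirroring how the enlarged dots in Fig. 4 are chosen.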
| Target Selection Method | ${k}^{ * }$ | ${\sigma }^{ * }$ | ${\theta }^{ * }$ |
| --- | --- | --- | --- |
| BayesGaze | 1 | 0.28 cm | 0.9 |
| CM | - | 0.28 cm | 0.9 |
| Dwell | - | - | 0.8 |
Table 1: Optimal parameters (the same for different $\alpha$ in Zipf's law) selected on the Pareto front for the three target selection methods.
§ 5 A TARGET SELECTION EXPERIMENT
To empirically evaluate BayesGaze, we conducted a 1D gaze-based target selection study using the parameters from the simulations. We included CM and Dwell as baselines because (1) Dwell is a widely adopted target selection method and CM is one of the best-performing algorithms in the literature, and (2) CM can be viewed as BayesGaze without the prior. Including these two methods allowed us to evaluate whether BayesGaze improves performance over extant algorithms, and to understand how the two components of BayesGaze (the likelihood function and the prior) contribute to the improvement in target selection performance.
§ 5.1 PARTICIPANTS AND APPARATUS
Eighteen adults (5 female) between 24 and 31 years old (average 27.2 ± 2.1) participated in the study. All of them had normal or corrected-to-normal sight and none of them reported being color blind.
The apparatus was the same as that used in the Wizard-of-Oz study (Section 4.1.2), as was the eye-gaze tracking technology: we used an iPad Pro with a TrueDepth camera, and the eye-gaze tracking was implemented with the ARKit library, as previously described.
Figure 4: The Pareto front of different parameter combinations for 3 target selection methods under $\alpha = 1$ in Zipf’s Law. The enlarged dots represent the selected parameter settings for three methods, respectively. These settings have the most balanced performance according to Equation 8.
§ 5.2 DESIGN
We adopted a $3 \times 2 \times 2$ within-participant design. The three independent variables were: (1) the target selection method with 3 levels (BayesGaze, CM, Dwell); (2) the target size with 2 levels (1 cm or 1.43°, and 2 cm or 2.86°); and (3) the $\alpha$ value of the Zipf's distribution with 2 levels ($\alpha = 1$ and $\alpha = 2$). The Zipf's distribution controls the distribution of the intended targets among the candidates.
For each selection method × target size × Zipf's law $\alpha$ combination, each participant performed 24 trials. When $\alpha = 1$, the frequencies of the 5 target candidates being the intended targets were 11, 5, 4, 3, and 1; when $\alpha = 2$, these frequencies were 16, 4, 2, 1, and 1. We included two $\alpha$ values to evaluate whether the skewness of the target distribution affects selection performance. Among a set of 24 trials, the distance between the target and the starting bar was either 4 cm or 5 cm with 50% probability for each distance, and the target was either above or below the starting bar, also with 50% probability for each option.
Figure 5: The controlled 1D gaze target selection experiment
§ 5.3 PROCEDURE
For each trial, the participant was instructed to select one of five adjacent horizontal bars displayed on the iPad screen via eye-gaze. The tracked gaze position was rendered as a cross-hair cursor on the display, as shown in Figure 5. The target to be selected was shown in blue and the other targets in cyan. A starting bar was also displayed, which served as the starting position for the gaze input. Prior to starting a trial, the participant was asked to move the cursor into the starting bar, which was initially displayed in gray. The bar turned green after three seconds, signaling the start of a trial. The participant then moved the cursor to select the target bar on the screen. The selected target then turned dark. If the participant selected the wrong target, or did not select any target within 5 seconds of the beginning of the trial, the trial was counted as a miss. The participant moved on to the next trial regardless of the outcome. To alleviate eye fatigue, the participant was allowed to take a break of up to 2 minutes every 15 trials. Fig. 5 shows a screenshot of the experiment and a participant performing a trial.
After each trial, BayesGaze updated the prior probability of each target candidate. We assumed that each condition corresponds to a particular interface, so whenever the experimental condition changed (e.g., target size or the $\alpha$ value of the Zipf distribution), we reset all prior information.
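A minimal sketch of this kind of prior update, assuming (as the conclusion states) a categorical distribution with a Dirichlet prior; the pseudo-count value here is our own illustrative choice, not a parameter reported in the paper:

```python
class TargetPrior:
    """Categorical prior over targets with a Dirichlet (pseudo-count) prior."""

    def __init__(self, n_targets, pseudo_count=1.0):
        # One Dirichlet concentration parameter per target candidate.
        self.counts = [pseudo_count] * n_targets

    def update(self, selected_target):
        # After each selection, add one observation for the chosen target.
        self.counts[selected_target] += 1

    def probability(self, target):
        # Posterior predictive probability that this target is intended next.
        return self.counts[target] / sum(self.counts)

prior = TargetPrior(5)
prior.update(0)
prior.update(0)
# Target 0 is now more probable a priori than the other four targets.
```

Resetting the prior when the condition changes corresponds to simply constructing a fresh `TargetPrior`.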
The participants were instructed to select the target as accurately and quickly as possible. At the end of the study, participants rated their preference for the three methods on a scale of 1 to 5 (1: dislike, 5: like very much). They also answered a subset of the NASA-TLX [12] questions, including mental and physical demand, to measure the workload of the gaze target selection task. The workload ratings ranged from 1 to 10, from least to most demanding. The experiment lasted about 50 minutes.
To counterbalance the independent variables, the methods were fully balanced across all 6 possible orders. For half of the participants, $\alpha$ was set to 1 for the first half of the trials and to 2 for the other half; for the other half of the participants, the order was reversed. Other factors were randomized. In total, we collected 18 users $\times$ 3 methods $\times$ 2 target sizes $\times$ 2 $\alpha$ values $\times$ 24 trials $=$ 5184 trials.
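The 6 method orders are simply all permutations of the three methods; with 18 participants, each order can be assigned to exactly 3 participants. A sketch (the assignment function is our own illustration of full balancing, not the authors' procedure):

```python
from itertools import permutations

METHODS = ("BayesGaze", "CM", "Dwell")

# All 6 possible presentation orders of the three methods.
orders = list(permutations(METHODS))

def order_for_participant(pid):
    """Assign participant pid (0-17) one of the 6 orders, fully balanced."""
    return orders[pid % len(orders)]
```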
§ 5.4 RESULTS
We evaluate the performance of BayesGaze, CM, and Dwell by success rate and selection time.
§ 5.4.1 SUCCESS RATE
The success rate measures the ratio of correct selections over the total number of trials. The results (Fig. 6a) show that: 1) BayesGaze always has the highest success rate and Dwell the lowest, which confirms the effectiveness of the Bayesian approach and the benefit of using the prior; 2) large targets (2 cm) have a higher success rate than small targets (1 cm), because it is much easier to move one's gaze into a large target.
| Target Selection Method | $\alpha = 1$: 11 | 5 | 4 | 3 | 1 | $\alpha = 2$: 16 | 4 | 2 | 1 | 1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BayesGaze | 88.1 | 86.1 | 88.9 | 84.3 | 77.8 | 90.6 | 87.5 | 90.3 | 88.9 | 83.3 |
| CM | 85.6 | 82.8 | 84.0 | 86.1 | 86.1 | 85.9 | 87.5 | 93.1 | 88.9 | 88.9 |
| Dwell | 83.1 | 85.6 | **75.7** | 84.3 | 88.9 | 79.9 | 85.4 | 83.3 | 86.1 | 83.3 |
Table 2: The success rate (%) for different target selection frequencies (the lowest success rate is marked in bold)
A repeated measures ANOVA on success rate shows two significant main effects: target selection method ($F_{2,34} = 11.45$, $p < 0.001$) and target size ($F_{1,17} = 30.76$, $p < 0.001$). The test does not show a significant main effect of Zipf's law's $\alpha$ ($F_{1,17} = 1.722$, $p = 0.207$). There is no significant interaction effect. Pairwise comparisons with Holm adjustment [13] on the success rate show significant differences between BayesGaze vs. Dwell ($p < 0.01$), CM vs. Dwell ($p < 0.05$), and BayesGaze vs. CM ($p < 0.05$).
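For reference, the Holm step-down adjustment used for these pairwise comparisons can be sketched as follows (a generic implementation of the standard procedure, not the authors' code):

```python
def holm_adjust(p_values):
    """Holm step-down adjusted p-values, returned in the original order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k); enforce monotonicity.
        running_max = max(running_max, (m - rank) * p_values[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted
```

For three comparisons with raw p-values 0.01, 0.03, and 0.02, the adjusted values are 0.03, 0.04, and 0.04, respectively.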
The overall mean $\pm$ 95% confidence interval (CI) of success rate among all target sizes and $\alpha$ values is $88.3\% \pm 3.6\%$ for BayesGaze, $85.9\% \pm 4.3\%$ for CM, and $82.1\% \pm 5.2\%$ for Dwell. In total, BayesGaze improves the success rate by 6.2% over Dwell, and by 2.4% over CM.
(b) Decomposition of the error rate for target size $\times$ Zipf’s Law’s $\alpha$
Figure 6: The average success rate with 95% CI and the decomposition of the error rate (Mis-Selection (MS) and Non-Selection (NS))
In addition to the success rate, we also examine the error rate, which measures the ratio of cases in which the correct target is not selected. There are two types of errors: (1) Mis-Selection (MS), where a wrong target is selected, and (2) Non-Selection (NS), where no target is selected. We examine the rates of these two error types separately. Fig. 6b shows the decomposition of the error rate. The major part of the error rate of BayesGaze and CM comes from mis-selection, and the same holds for Dwell when the target size is 2 cm. However, when the target size is 1 cm, Dwell suffers from not selecting any target. This result implies that using a Bayesian framework can alleviate the problem of not being able to select a target.
With BayesGaze, a potential side effect of incorporating the prior might be that less frequent targets are more difficult to select. Table 2 shows the success rates by target frequency. Although the success rates for items with a frequency of 1 are lower than for the high-frequency items, they are still near 80%. A repeated measures ANOVA does not show a significant main effect of frequency on success rate for BayesGaze ($F_{9,153} = 0.776$, $p = 0.639$), CM ($F_{9,153} = 1.248$, $p = 0.27$), or Dwell ($F_{9,153} = 0.669$, $p = 0.736$), indicating that this potential side effect is minor.
§ 5.4.2 SELECTION TIME
Fig. 7 shows the results for selection time, which measures the time to select the target from the start of the trial. As with the success rate, we observe that: 1) BayesGaze has the shortest selection time, and Dwell the longest; 2) small targets (1 cm) take longer to select than large ones (2 cm), especially for Dwell.
Figure 7: The average selection time (with 95% CI) by target size $\times$ Zipf's law's $\alpha$
A repeated measures ANOVA on selection time shows two significant main effects: target selection method ($F_{2,34} = 21.19$, $p < 0.001$) and target size ($F_{1,17} = 116.9$, $p < 0.001$). The test does not show a significant main effect of Zipf's law's $\alpha$ ($F_{1,17} = 1.685$, $p = 0.212$). The only significant interaction effect is target size $\times$ target selection method ($F_{2,34} = 31.81$, $p < 0.001$). Pairwise comparisons with Holm adjustment on selection time show significant differences for BayesGaze vs. Dwell ($p < 0.001$) and CM vs. Dwell ($p < 0.01$). The pairwise comparisons do not show a significant difference for BayesGaze vs. CM ($p = 0.09$).
The overall mean $\pm$ 95% CI selection time among all target sizes and $\alpha$ values is $2.23 \pm 0.15$ seconds for BayesGaze, $2.30 \pm 0.15$ seconds for CM, and $2.49 \pm 0.18$ seconds for Dwell. In total, BayesGaze saves 10.4% of the selection time over Dwell, and 3% over CM.
§ 5.4.3 SUBJECTIVE FEEDBACK
The results of the subjective feedback are shown in Fig. 8. For overall preference, the median ratings for BayesGaze, CM, and Dwell are 4, 3.5, and 3, respectively; BayesGaze has the highest median rating. For mental and physical demand, the medians are 6.5 and 5.5 for BayesGaze, 6 and 6 for CM, and 7.5 and 7.5 for Dwell. Nonparametric Friedman tests do not show significant main effects of selection method on the three metrics: overall preference ($\chi_r^2(2) = 1.11$, $p = 0.57$), physical demand ($\chi_r^2(2) = 2.93$, $p = 0.085$), and mental demand ($\chi_r^2(2) = 5.24$, $p = 0.073$). The $p$ values for physical and mental demand approach statistical significance.
Figure 8: The median of subjective ratings of overall preference, mental demand and physical demand. For overall preference, higher ratings are better. For mental and physical demand, lower ratings are better.
§ 5.5 DISCUSSION
Performance. The experiment results show that BayesGaze outperformed both the Dwell and CM methods in both selection accuracy and speed. BayesGaze improved the success rate over Dwell from 82.1% to 88.3%, i.e., a 6.2% increase, and reduced the selection time from 2.49 seconds to 2.23 seconds, i.e., a 10.4% reduction. BayesGaze also improved the success rate of CM by 2.4% and reduced the selection time by 3%. Pairwise comparisons with Holm adjustment showed all these differences to be significant ($p < 0.05$), except for the selection time between BayesGaze vs. CM ($p = 0.09$).
The promising performance of BayesGaze first shows that incorporating the prior significantly improves target selection performance. Compared with CM, which can be viewed as BayesGaze without the prior, BayesGaze performed better in both accuracy and speed across all conditions. This suggests that incorporating the prior distribution of targets is effective in improving gaze-based target selection. Second, both BayesGaze and CM outperformed Dwell, indicating that accumulating interest, represented by the posterior in BayesGaze and by the likelihood in CM, is also effective for gaze-based target selection.
Prior. Incorporating the prior might make less frequent targets more difficult to select, even though we did not observe this in our experiment (Table 2). There are several ways to prevent this potential problem: (1) set a lower bound on the target frequency so that no target becomes hard to select; (2) in real-world applications, leverage user actions to address the problem: for example, if the previous selection is incorrect (a back/cancel action is performed immediately), reduce the probability of the incorrect target; (3) as we do in this paper, use a small $\sigma$ for the likelihood model to decrease interference between neighboring targets.
Midas-Touch Problem. Our method can also work with existing approaches to the Midas-Touch problem in gaze target selection. For example: (1) We can use methods like [5, 43] to infer whether a user is reading content on the UI or controlling their gaze to select a target. These methods classify gaze positions into a content-reading phase and a target-selection phase; BayesGaze can discard the gaze positions in the content-reading phase and use only those in the target-selection phase to decide the target. (2) We can increase the threshold of accumulated posterior required for selection. Reading content on the UI tends to take a shorter period of time than controlling gaze to select a target, so increasing the threshold could prevent falsely activating a selection; the actual threshold should be set based on the specific scenario. This approach is also adopted by dwell-based methods (e.g., [28]) to mitigate the Midas-Touch problem.
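The second idea, accumulating per-target posterior scores weighted by the sampling interval and selecting only once a threshold is exceeded, can be sketched as follows (the threshold value and the sample format are illustrative assumptions, not values from the paper):

```python
def select_with_threshold(samples, n_targets, threshold=2.0):
    """samples: list of (dt_seconds, posterior_per_target) pairs.

    Accumulates dt-weighted posteriors; returns the selected target index
    once some target's accumulated score exceeds the threshold, else None.
    """
    scores = [0.0] * n_targets
    for dt, posterior in samples:
        for i in range(n_targets):
            scores[i] += dt * posterior[i]
        best = max(range(n_targets), key=lambda i: scores[i])
        if scores[best] >= threshold:
            return best  # selection fires only after enough evidence
    return None  # threshold never reached: no selection (avoids Midas touch)
```

A higher threshold requires more accumulated evidence before a selection fires, trading selection time for robustness against unintended glances.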
Target Dimension. This paper considers 1D targets to show that Bayes' theorem can be adopted to improve the performance of gaze-based target selection. In real applications, there are many linear menus on computers and smartphones to which our method can be directly applied. However, the underlying principle (Eq. 2 - 5) is not tied to a specific type of target and works for both 1D and 2D targets. The main difference between 1D and 2D target selection lies in the likelihood function (Eq. 5). For 1D targets, we adopted a 1D Gaussian; for 2D targets, it should be replaced by a 2D Gaussian distribution. The rest of the method, including updating the prior, accumulating the weighted posterior, and using Pareto optimization to balance accuracy and selection time, remains the same.
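The likelihood swap mentioned above can be sketched as follows; the shared $\sigma$ and the isotropy of the 2D Gaussian are our simplifying assumptions:

```python
import math

def likelihood_1d(gaze_y, target_y, sigma):
    """1D Gaussian likelihood of a gaze sample given a target center."""
    z = (gaze_y - target_y) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def likelihood_2d(gaze_xy, target_xy, sigma):
    """Isotropic 2D Gaussian: product of two independent 1D Gaussians."""
    return (likelihood_1d(gaze_xy[0], target_xy[0], sigma)
            * likelihood_1d(gaze_xy[1], target_xy[1], sigma))
```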
Scalability. BayesGaze uses a gaze position buffer to store the gaze trajectory and empties it after each selection action. Our study (Fig. 7) shows that most selections happen within 3 seconds, so only a small amount of memory is needed to store gaze data. In real-world applications, we may use a rolling window of 3 seconds to store gaze positions; the method can then scale up and handle long gaze-based input.
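Such a rolling window can be sketched with a deque that evicts samples older than the window (the 3-second length follows the figure above; the sample format is an assumption):

```python
from collections import deque

class GazeBuffer:
    """Rolling buffer of (timestamp, x, y) gaze samples within a time window."""

    def __init__(self, window_seconds=3.0):
        self.window = window_seconds
        self.samples = deque()

    def add(self, timestamp, x, y):
        self.samples.append((timestamp, x, y))
        # Evict samples that have fallen out of the rolling window.
        while self.samples and timestamp - self.samples[0][0] > self.window:
            self.samples.popleft()

    def clear(self):
        # Called after each selection action, as in BayesGaze.
        self.samples.clear()
```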
§ 6 CONCLUSION
In this paper, we introduced BayesGaze, a Bayesian approach to determining the selected target given an eye-gaze trajectory. This approach views each sampling point in a gaze trajectory as a signal for selecting a target, uses Bayes' theorem to calculate the posterior probability of selecting a target given a sampling point, and accumulates the posterior probabilities weighted by the sampling interval over all sampling points to determine the selected target. The selection results are fed back to update the prior distribution of targets, which is modeled by a categorical distribution with a Dirichlet prior. Our controlled experiment showed that, compared with the widely adopted dwell-based selection method, BayesGaze improves target selection accuracy from 82.1% to 88.3% and reduces selection time from 2.49 to 2.23 seconds per selection. It also improves selection accuracy and selection time over the CM method [4] (85.9%, 2.3 seconds per selection), a high-performance gaze target selection algorithm. Overall, our research shows that both incorporating the prior and accumulating the posterior are effective in improving the performance of gaze-based target selection.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/wLtLeiJNRKb/Initial_manuscript_md/Initial_manuscript.md
# Improved Low-cost 3D Reconstruction Pipeline by Merging Data From Different Color and Depth Cameras
Category: Research
## Abstract
The performance of traditional 3D capture methods directly influences the quality of digitally reconstructed 3D models. To obtain complete and well-detailed low-cost three-dimensional models, this paper proposes a 3D reconstruction pipeline using point clouds from different sensors, combining captures from a low-cost depth sensor post-processed by Super-Resolution techniques with high-resolution RGB images from an external camera, using Structure-from-Motion and Multi-View Stereo output data. The main contribution of this work is the description of a complete pipeline that improves the information acquisition stage and merges data from different sensors. Several phases of the 3D reconstruction pipeline were also specialized to improve the model's visual quality. The experimental evaluation demonstrates that the developed method produces good and reliable results for the low-cost 3D reconstruction of an object.
Keywords: Low-Cost 3D Reconstruction, Depth Sensor, Photogrammetry.
Index Terms: Computer graphics, Shape modeling, 3D Reconstruction.
## 1 INTRODUCTION
3D reconstruction makes it possible to capture the geometry and appearance of an object or scene, allowing us to inspect details without risk of damage, measure properties, and reproduce 3D models in different materials [21]. In recent years, numerous advances in 3D digitization have been observed, mainly through pipelines for three-dimensional reconstruction using costly high-precision 3D scanners. In addition, recent research has sought to reconstruct objects or scenes using depth images from low-cost acquisition devices (e.g., the Microsoft Kinect sensor [17]) or using Structure from Motion (SFM) [24] combined with Multi-View Stereo (MVS) [5] from RGB images.
Good-quality 3D reconstructions demand substantial financial resources, as they require state-of-the-art equipment to capture object data with high precision and detail. On the other hand, low-resolution equipment implies a lower-quality capture, even though it is financially more viable. Even with their ease of operation, light weight, and portability, low-cost approaches must take into account the limitations of the scanning equipment used [20].
The acquisition step of a 3D reconstruction pipeline refers to the use of devices to capture data from objects in a scene, such as their geometry and color [22]. One result of 3D geometry capture is a collection of discrete points that describes the model's shape, called a point cloud. The data obtained in this step is used in all other phases of the 3D reconstruction process [2].
Active capture methods use equipment such as scanners to infer an object's geometry through a beam of light, inside or outside the visible spectrum. Scanner sensors have the advantages of fast measuring speed, robustness to external factors, and ease of acquiring information. Active sensors also perform well in reconstructing texture-less and featureless surfaces [6, 22]. The sensors need to be sensitive to small variations in the acquired information: for small differences in distance, the variation in the time the light takes to reach two different points is very low, requiring low equipment latency and good response time. For this reason, these systems tend to be slightly noisy [21]. In low-cost reconstruction approaches, the difficulty of capturing color with high precision is a further disadvantage [10].
Passive methods are based on optical imaging techniques. They are highly flexible and work well with any modern digital camera. Image-based 3D reconstruction is practical, non-intrusive, low-cost, and easily deployable outdoors. Various properties of the images can be used to retrieve the target's shape, such as material, viewpoint, and illumination. As opposed to active techniques, image-based techniques provide an efficient and easy way to acquire the color of a target object [10]. Although passive reconstructions, mainly using SFM and MVS, produce excellent results, they have limitations such as the difficulty of distinguishing the target object from the background [25], and they require the target object to have detailed geometry [6]. A controlled environment is needed to obtain better reconstruction results [12, 24].
Considering the limitations imposed by the presented approaches, it is important to note that expressing a target completely, with rich and small details, is a real challenge when its geometry has been described by only a single low-cost capture method [6].
This paper proposes a hybrid pipeline combining a low-cost depth camera (low-resolution images) and an external color capture camera (a digital camera with high-resolution RGB images) to estimate and reconstruct the surface of an object and apply a high-quality texture. The limitations of each data acquisition approach are bypassed, generating a complete and well-detailed replica of the target model with high visual quality. To achieve this, the project uses a variation and combination of Structure from Motion, Multi-View Stereo, and depth camera capture techniques.
Although there are mature projects aimed at low-cost 3D reconstruction, few describe step by step how to overcome the limitations of low-cost three-dimensional data capture, using the best features of all phases of the pipeline to obtain a model as realistic as possible. The main contribution of this work is the description of a complete pipeline that makes use of post-processed depth captures and merges data from different sensors, in which the depth sensor data and high-resolution color images do not need to be synchronized.
As it is a post-processing task (performed after capturing/estimating depth data), this work also includes the detection of the region of interest based on the average distance of the scene, removing points not belonging to the target object, and allows the inclusion of new images containing regions of the target object not previously photographed to improve the results of the texturing step.
In addition to this introductory section, this work is organized as follows: Section 2 presents related work, while Section 3 describes the proposed pipeline. The experiments and evaluation of the pipeline are presented in Section 4. Finally, Section 5 discusses the final considerations and the results achieved by this research.
## 2 RELATED WORK
Prokos et al. [19] proposed a hybrid approach combining shape from stereo (with additional geometric constraints) and laser scanning techniques. Using two cameras and a portable laser beam, they achieved accuracy as good as some high-end laser triangulation scanners. However, their results do not include automatic outlier detection.
The KinectFusion system [17] tracks the pose of a portable depth camera (Kinect) as it moves through space and performs good three-dimensional surface reconstructions in real time. The Kinect sensor has considerable limitations, including temporal inconsistency and the low resolution of the captured color and depth images [22]. Real-time reconstruction is not a requirement for well-detailed, accurate, and complete reconstructions.
Silva et al. [26] provide a guided reconstruction process using Super-Resolution (SR) techniques, helping to increase the quality of the low-resolution data captured with a low-cost sensor. This method of data acquisition using low-cost depth cameras and SR was further improved by Raimundo [22]. Even with depth image improvements, a poor registration of the captures can affect the final model's shape.
Falkingham [9] demonstrates the potential applications of low-cost technology in the field of paleontology. The Microsoft Kinect was used to digitize specimens of various sizes, and the resulting digital models were compared with models produced using SFM and MVS. The work pointed out that although the Kinect generally registers morphology at a lower resolution, capturing less detail than photogrammetry techniques, it offers advantages in the speed of data acquisition and in generating the 3D mesh in real time during data capture. However, they did not use Super-Resolution to improve the captures from low-cost devices, and the models produced by the Kinect lack any color information.
Zollhöfer et al. [28] used a Kinect sensor to capture the geometry of an excavation site and took advantage of a topographic map to distort the reconstructed model, significantly increasing the quality of the scene. The global distortion, with Super-Resolution techniques applied to the raw scans, significantly increased the fidelity and realism of the results, but the approach is too specialized for large-scale scenes.
Paola and Inzerillo [8], in order to digitally reproduce the Egyptian stone of Palermo, proposed a method combining a structured-light scanner, smartphones, and SFM to apply texture to the highly accurate mesh generated by the scanner. The main challenges were the dark color of the material and the shallowness of the grooves of the hieroglyphs, which some capture approaches have difficulty recognizing. The level of detail of the applied texture proved quite accurate. This reference work used a high-resolution 3D scanner and did not aim at low-cost reconstruction.
Jo and Hong [13] use a combination of terrestrial laser scanning and Unmanned Aerial Vehicle (UAV) photogrammetry to build a three-dimensional model of the Magoksa Temple in Korea. The scans were used to acquire the perpendicular geometry of buildings and locations, and were aligned and merged with the photogrammetry output, producing a hybrid point cloud. Photogrammetry adds value to the 3D model, complementing the point cloud with the upper parts of buildings, which are difficult to acquire through laser scanning.
Chen [6] proposes a registration method to combine the data of a laser scanner and photogrammetry to reconstruct real outdoor 3D scenes, greatly increasing accuracy and convenience of operation. The two sensors can work independently; the method fuses their data even when at different scales. Mesh reconstruction and texturing were not explored in this work.
Raimundo et al. [21] point out in their bibliographic review several studies that successfully used advanced rendering techniques such as global illumination, ambient occlusion, normal mapping, shadow baking, per-vertex lighting, and level of detail. These rendering techniques also improve the final presentation of 3D reconstructions.
## 3 PIPELINE PROPOSAL
To overcome the limitations of the low-cost three-dimensional data acquisition process, the following pipeline is proposed: capturing depth and color images (using a low-cost depth sensor and a digital camera); generating point clouds from the low-cost RGB-D camera's depth images (using SR techniques [22]); estimating shape from RGB images (using SFM [24] and MVS [5]); merging the data from these different capture techniques; generating the mesh; and texturing with high-quality photos (Fig. 1). Several phases of the pipeline were specialized to achieve better accuracy and visual quality in 3D reconstructions of small- and medium-scale objects. The proposed pipeline works offline.

Figure 1: Schematic diagram for the proposed pipeline and the 3D reconstruction processes of an object.
### 3.1 Data acquisition
For capture using a low-cost depth sensor, the following acquisition protocol is established: take several depth captures, moving the sensor around the object, and define the limits of the capture volume. Alternatively, a turntable can be used, yielding a more controlled capture and alignment process. The number of views captured is smaller than in real-time approaches because of the additional processing required to ensure the quality of each capture. Considering the quality requirements of this work, an interactive tool [20] is used to acquire the raw data from the depth sensor (Fig. 2).
The depth capture method produces results proportional to the quality of the device's captures: the lower the incidence of noise and the better the accuracy of the inferred depth, the better the result. With this in mind, each depth image goes through a filtering step with the application of Super-Resolution [22]. To provide higher-resolution information than is possible with a specific sensor, several low-resolution captures are merged, recreating as much detail as possible.
To add 3D information in greater detail and to apply a simple high-quality texturing process, photographs are taken with a digital camera around the target object. In our pipeline, these captures are independent of the depth sensor: we only need to take pictures of the fixed object while moving the camera freely. The set of images must be sufficient to cover most of the object's surface, and the images must portray, in pairs, common parts of it. These color images are used in the SFM pipeline.
The SFM pipeline detects characteristics in the images (feature detection), maps these characteristics between images, and finds descriptors capable of representing a distinguishable region (feature matching). These descriptors represent vertices of the 3D scene reconstruction (sparse reconstruction). The greater the number of matches found between the images, the greater the accuracy of the 3D transformation matrix calculated between them, enabling the estimation of the relative positions of the camera poses [3, 10].
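The matching stage can be illustrated with a minimal nearest-neighbor descriptor matcher using Lowe's ratio test (a generic sketch over toy descriptor vectors, not the pipeline's actual implementation):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b.

    A match is kept only if the nearest neighbor is clearly better than the
    second nearest (Lowe's ratio test), which filters ambiguous regions.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest = np.argsort(dists)[:2]
        if dists[nearest[0]] < ratio * dists[nearest[1]]:
            matches.append((i, int(nearest[0])))
    return matches
```

The surviving matches are what a transformation between the two views is estimated from; ambiguous descriptors (two nearly equidistant candidates) are discarded rather than risked.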

Figure 2: Software to acquire and process depth images. The slider controls the capture limits (in millimeters) and the cut limits (in pixels), effectively determining the capture volume.
Photographs with good resolution and objects with a higher level of detail tend to bring greater precision to the photogrammetry algorithms. For objects with fewer details and features, the environment can be used to achieve better results [24]. In addition to using the estimated structure to improve the geometry captured by the depth sensor, we use the estimated camera poses to easily and directly apply texture over the final model's surface.
The Multi-View Stereo process is used to improve the point cloud obtained by SFM, resulting in a dense reconstruction. As camera parameters such as position, rotation, and focal length are already known from SFM, MVS computes 3D vertices in regions not detected by the descriptors. Multi-View Stereo algorithms generally have good accuracy, even with few images [10].
To highlight the target object in this image-based point cloud, a region-of-interest detection method can be used. A simple algorithm detects the centroid of the set of 3D points and removes points beyond a given radius from it. If the floor below the object is discernible, a planar segmentation algorithm can also be used to remove the plane, and a statistical removal algorithm can discard outliers. If even more accurate outlier removal is required, a manual pass with a user interface tool can be performed. Most of the discrepancies and the background are removed by the proposed steps, minimizing working time.

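The centroid-radius crop and statistical outlier removal described above can be sketched in a few lines of NumPy. This is a simplified, brute-force illustration (the pipeline itself relies on PCL); the radius factor, neighbourhood size, and standard-deviation ratio are arbitrary toy values.

```python
import numpy as np

def crop_by_radius(points, radius_factor=1.5):
    """Keep points within radius_factor * median distance of the centroid."""
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    return points[dists <= radius_factor * np.median(dists)]

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than std_ratio standard deviations above the global mean (brute force)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # column 0 is the distance to self
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]

# Toy cloud: a dense blob of object points plus one far-away outlier.
rng = np.random.default_rng(1)
cloud = rng.normal(scale=0.1, size=(200, 3))
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])
filtered = remove_statistical_outliers(crop_by_radius(cloud, 10.0), k=8)
print(len(cloud), len(filtered))
```

The far outlier is dropped by the radius crop, and the statistical filter then trims stray points on the blob's fringe.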
Although image-based 3D reconstruction recovers greater detail than low-cost depth sensors [9], this approach may fail to estimate the complete surface of the object (Fig. 3). This commonly happens when the captures do not fully cover the target model, or when the model lacks distinguishable texture or detail.


Figure 3: Some parts of the surface may not be estimated by the photogrammetry process. In (a), the white and smooth paint of the object (b) prevents the MVS algorithm from obtaining enough points to define this part of the model's structure, leaving this featureless surface region with a lower density of points than the others.

The algorithms used in the next steps require oriented input data, so the normals of the point clouds are estimated before the alignment step. A k-nearest-neighbor normal estimation algorithm is used for this task.

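A minimal NumPy sketch of k-neighbour normal estimation: each normal is taken as the least-variance principal axis of the point's neighbourhood. This illustrates the idea only and is not the PCL implementation used in the pipeline; the neighbourhood size and the planar test data are toy choices.

```python
import numpy as np

def estimate_normals(points, k=10):
    """Estimate per-point normals as the smallest-eigenvalue eigenvector of
    the covariance of each point's k nearest neighbours (brute force)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]      # indices of k nearest (incl. self)
    normals = np.empty_like(points)
    for i, idx in enumerate(nn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs)     # rows of vt are the principal axes
        normals[i] = vt[-1]                # least-variance direction
    return normals

# Points sampled on the z = 0 plane: every normal should be along the z axis.
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50), np.zeros(50)])
n = estimate_normals(pts)
print(np.abs(n[:, 2]).min())
```

Note the sign of each normal is ambiguous; real pipelines orient normals consistently, e.g. toward the sensor viewpoint.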
### 3.2 Alignment

To align the point clouds from the acquisition phase, transformations are applied to place all captures in a global coordinate system. This alignment is usually performed in a coarse step followed by a fine step.

To perform the initial alignment between the point clouds obtained by the depth sensor, we use global alignment algorithms in which pairs of three-dimensional captures are roughly aligned [15]. Given this initial alignment between the captured views, the Iterative Closest Point (ICP) algorithm [11] is executed to obtain a fine alignment. After pairwise incremental registration, an algorithm for global minimization of the accumulated error is run.

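The fine-alignment step can be illustrated with a minimal point-to-point ICP in NumPy: nearest-neighbour correspondence followed by the closed-form Kabsch/SVD rigid fit. Real registrations use PCL's ICP with correspondence filtering; this toy version assumes a small initial misalignment and identical point sets.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t,
    for already-corresponding point pairs (the Kabsch step inside ICP)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: nearest-neighbour matching, then Kabsch."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_rigid_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Toy data: dst is a random cloud, src is dst under a small known motion.
rng = np.random.default_rng(3)
dst = rng.uniform(-0.5, 0.5, size=(100, 3))
theta = 0.05                               # small rotation about the z axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.01, -0.02, 0.005])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())         # registration residual
```

Because the initial offset is below the typical point spacing, the correspondences lock in and the residual drops to near machine precision.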
The initial alignment step may not produce good results due to the nature of the depth data, such as the low number of discernible points shared by two point clouds [20], so the registration may present drift. With this in mind, we use the point cloud obtained by photogrammetry as an auxiliary reference for a new alignment of the depth sensor point clouds, correcting the transformations and distributing the error accumulated between consecutive alignments and at loop closure, thereby improving the global registration and the quality of the aligned point cloud.


Figure 4: Porcelain horse. Given the richness of detail in this object, as in the head and saddle, we use the photogrammetry method to capture these parts at the highest level of detail. At the same time, the object has few features in predominantly smooth regions, such as the base of the structure and the body of the animal, so there we use the depth sensor capture approach, where this factor does not affect the 3D acquisition process. The data captured by the low-cost depth sensor added information where few visible features exist, as can be seen at the base and legs of the horse.

The point cloud generated by the image-based 3D reconstruction pipeline and the one obtained from the depth sensor captures are created from different image spectra and very commonly have different scales. The point clouds obtained with the depth sensor must be aligned with the corresponding points of the object in the photogrammetry point cloud.

As the depth sensor captures are already in a global coordinate system, it is sufficient to scale and transform a single capture to fit the cloud obtained by MVS and apply the same transformation to the others, speeding up the registration process. After that, the ICP algorithm can be reapplied, now including the photogrammetry point cloud. This last point cloud is not transformed; only the remaining captures are aligned to it, because the camera positions later used for texturing are expressed in this model's coordinate system.

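The scale-and-pose fit of a single capture onto the MVS cloud can be sketched with Umeyama's closed-form similarity estimate, shown here in NumPy for corresponding point pairs. In practice the correspondences would come from manual picking or a coarse registration; the data below is synthetic with a known scale of 2.5.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Closed-form scale s, rotation R, translation t with dst ~ s * src @ R.T + t
    for corresponding point pairs (Umeyama's method); useful when the Kinect
    cloud (metres) and the SFM/MVS cloud (arbitrary units) differ in scale."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(xs.T @ xd)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (xs ** 2).sum()
    return s, R, mu_d - s * R @ mu_s

rng = np.random.default_rng(4)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 2.5 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = umeyama_similarity(src, dst)
print(round(s, 6))                         # recovered scale factor
```

With noiseless correspondences the scale, rotation, and translation are recovered exactly up to floating-point error.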
Merging the point clouds from both capture approaches increases the information that defines the object's geometry. The resulting point cloud is used in the next steps of the pipeline.

### 3.3 Surface reconstruction

The mesh generation step is characterized by surface reconstruction, a process in which a continuous 3D surface is inferred from a collection of discrete points that sample its shape [1].

For this step, we use the Screened Poisson Surface Reconstruction algorithm [14]. This algorithm seeks a surface whose gradient best matches the normals of the vertices of the input point cloud. The choice of a parametric method for surface reconstruction is justified by its robustness and by the possibility of using numerical methods to improve the results. In addition, the resulting meshes are nearly regular and smooth.

### 3.4 Texture synthesis

Applying textures to reconstructed 3D models is one of the keys to realism [27]. High-quality texture mapping aims to avoid seams, smoothing the transition between an image used for texturing and its adjacent one [16].

The texture synthesis phase of the proposed pipeline combines the high-resolution pictures captured with an external digital camera and the integrated model obtained from the previous step of the pipeline.

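Assigning texture coordinates ultimately amounts to projecting surface points into the SFM-posed photographs. The sketch below shows a plain pinhole projection with an assumed toy camera (no lens distortion); the actual texturing uses the method of Waechter et al. [27], which additionally selects views and blends seams.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project 3D world points into an image with intrinsics K and pose (R, t),
    where a world point X maps to camera coordinates as R @ X + t (no distortion).
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera."""
    cam = points @ R.T + t                 # world -> camera coordinates
    in_front = cam[:, 2] > 0
    uv = cam @ K.T                         # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]            # perspective divide
    return uv, in_front

# Toy camera: focal length 800 px, principal point (320, 240), identity pose.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0],           # on the optical axis
                [0.1, 0.0, 2.0]])          # slightly to the right
uv, mask = project_points(pts, K, R, t)
print(uv)
```

The point on the optical axis lands on the principal point (320, 240); the offset point lands 40 pixels to its right.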

Figure 5: Jaguar pan replica. Even with some visual characteristics introduced by the 3D printing process, the object has very few distinguishable features because of its predominantly white texture. This factor makes reconstruction by SFM and MVS difficult, so we use the environment to assist in detecting the positions and orientations of the cameras. The captures with the depth sensor added information on the legs and belly (bottom) of the jaguar not acquired by photogrammetry.

The high-resolution photos taken with a digital camera, together with the poses calculated by SFM, are used to generate the texture coordinates and atlas of the model, avoiding a time-consuming manual process.

The images and respective poses from SFM may be unable to texture faces not visible in any image used for the reconstruction, leaving non-textured surfaces on the three-dimensional model. To overcome this limitation, we post-apply the texture, merging the camera poses resulting from SFM with new photos whose poses are computed in the coordinate system of the photogrammetry result.

## 4 EXPERIMENTS AND EVALUATION

For evaluation, we ran the proposed pipeline on objects of varying size and complexity: a porcelain horse-shaped object ("Porcelain horse", Fig. 4) and jaguar- and turtle-shaped clay pan replicas ("Jaguar pan", Fig. 5, and "Turtle pan", Fig. 6, respectively). The remaining objects used in this study are replicas of cultural objects from the Waurá tribe and belong to the collection of the Brazilian Museum of Archaeology and Ethnology of the Federal University of Bahia (MAE/UFBA). The replicas were three-dimensionally reconstructed by Raimundo [20] and 3D printed. In addition, the turtle replica was colored by hydrographic printing.

In our experiments we used Microsoft Kinect version 1; however, any other low-cost sensor can be used to capture depth images. This sensor is affordable and captures color and depth information at a resolution of 640 × 480 pixels. To produce point clouds from the low-cost 3D scanner, we used the Super-Resolution approach proposed by Raimundo [22] with 16 Low-Resolution (LR) depth frames.

The photos used as input to the passive 3D reconstruction method were taken with a Redmi Note 8 camera for all evaluated models. The number of photos was chosen to maximize coverage of each object. For the SFM pipeline, the RGB images were processed with COLMAP [24] to compute camera poses and the sparse reconstruction. OpenMVS [5] was used for the dense reconstruction. For the texturing stage, we used the algorithm proposed by Waechter et al. [27].

Several software tools were developed on top of third-party libraries. For instance, OpenCV [4] and PCL [23] were used to handle and process depth images and point clouds, libfreenect [18] was used in the depth acquisition application to access and retrieve data from the Microsoft Kinect, and MeshLab [7] was used for Poisson reconstruction and for adjustments to 3D point clouds and meshes when necessary.

Table 1: Algorithms and main components of each experiment.

| Object | Porcelain horse | Jaguar pan | Turtle pan |
|---|---|---|---|
| Dimensions (cm) | 35 × 12 × 31 | 21.5 × 15 × 7 | 9 × 6.5 × 3.5 |
| Texture | Handmade | Predominantly white | Hydrographic printing |
| Num. of RGB images | 108 | 65 | 29 |
| RGB images resolution | 8000 × 6000 px | 4000 × 1844 px | 8000 × 6000 px |
| SFM algorithm | COLMAP [24] | COLMAP [24] | COLMAP [24] |
| MVS algorithm | OpenMVS [5] | OpenMVS [5] | OpenMVS [5] |
| Depth sensor | Kinect V1 | Kinect V1 | Kinect V1 |
| LR frames per capture | 16 | 16 | 16 |
| SR point clouds | 26 | 22 | 20 |

Figures 4 and 5 show the acquisition, merging, and reconstruction steps proposed by this pipeline for the Porcelain horse and the Jaguar pan. The figure captions also discuss the main challenges of each reconstruction and how they were handled by the pipeline. The algorithms and main components of each experiment are described in Table 1.

The resolution of the clouds obtained by the low-cost sensor with SR is considerably lower than that of the clouds obtained by photogrammetry. This is evident in the turtle's captures and reconstructions (Fig. 6(b)), which show that the low-cost sensor suffers from a scale limitation. However, it has the advantage of allowing new captures of the object even after it has moved in the scene. Photogrammetry, in turn, showed limitations when describing featureless regions of an object (as shown in Fig. 3 and Fig. 5(f)); this does not happen with the depth sensor, since coloring does not influence its capture. The resolution of the images used in the SFM pipeline is also a factor that directly influences the quality and detail of the 3D reconstruction. The point clouds obtained by photogrammetry were capable of representing, with good quality, distinguishable details on a millimeter scale. Merging the point clouds helped express the reconstructed objects in greater detail, taking advantage of both captures.

The merged point clouds were down-sampled to facilitate visualization and mesh generation, since the aligned and combined point clouds may have an excessive and redundant number of vertices and there is no guarantee that the sampling density is adequate for proper reconstruction [2]. Point clouds were meshed with the Screened Poisson Surface Reconstruction feature of MeshLab [7], using a reconstruction depth of 7 and a minimum of 3 samples. It is important to note that mesh production is highly dependent on the variables used to generate the surface; we adopt the Poisson Surface Reconstruction parameters defined in this paragraph as the standard for all reconstructions.

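The down-sampling step can be illustrated with a simple voxel-grid filter in NumPy that replaces all points in each voxel by their centroid. This is a sketch only (the pipeline uses library tooling for this); the voxel size and the synthetic cloud are toy values.

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Replace all points inside each voxel-sized cube by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    out = np.zeros((inv.max() + 1, 3))
    counts = np.bincount(inv).astype(float)
    for dim in range(3):                   # centroid per occupied voxel
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(5)
dense = rng.uniform(0, 1, size=(5000, 3))  # oversampled toy cloud in the unit cube
sparse = voxel_downsample(dense, 0.25)
print(len(dense), len(sparse))             # at most 4^3 = 64 voxels remain
```

With 5000 uniform points and a 0.25 voxel, every one of the 64 voxels is occupied, so the cloud collapses to 64 representative points.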
For quantitative validation, the reconstructed 3D surfaces of the Turtle pan (Fig. 6) were compared with the model used for 3D printing (ground truth in Fig. 6(d)). For this comparison, we used the Hausdorff Distance tool of MeshLab [7]. The results are presented in Table 2 and graphically represented in Fig. 7.

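The one-sided comparison performed by MeshLab's Hausdorff Distance tool can be sketched as follows: for each sampled vertex, take the distance to the closest ground-truth vertex, then report statistics normalised by the bounding-box diagonal. This brute-force NumPy version is for illustration on synthetic data only and ignores MeshLab's surface sampling.

```python
import numpy as np

def hausdorff_stats(src, ref):
    """Per-source-point distance to the closest reference point; returns
    (min, max, mean, RMS) normalised by the reference bounding-box diagonal."""
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2).min(axis=1)
    diag = np.linalg.norm(ref.max(axis=0) - ref.min(axis=0))
    d = d / diag
    return d.min(), d.max(), d.mean(), np.sqrt((d ** 2).mean())

rng = np.random.default_rng(6)
ref = rng.uniform(size=(300, 3))                 # stand-in for the ground truth
src = ref + 0.005 * rng.normal(size=ref.shape)   # slightly perturbed reconstruction
lo, hi, mean, rms = hausdorff_stats(src, ref)
print(round(mean, 4), round(rms, 4))
```

Small perturbations yield small normalised distances, matching the scale of the values reported in Tables 2 and 3.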
The same quantitative validation was carried out with the reconstructions of the Jaguar pan's 3D surfaces and its respective model used for 3D printing. The results are presented in Table 3: as with the Turtle pan's Hausdorff Distances, the reconstruction of the jaguar with this pipeline achieves the lowest maximum and RMS values when compared with the individual approaches.

All studied objects benefited from the merging of point clouds, as Poisson surface reconstruction identifies and differentiates nearby geometric details, some of which are contributed by the merging. We noticed that when the points are evenly spaced, the resulting mesh is smoother and more accurate.

Table 2: Hausdorff Distances for 3D surface reconstructions of the Turtle pan. Each vertex sampled from the source mesh is matched to the closest vertex on the ground truth. Values are in mesh units, relative to the diagonal of the bounding box of the mesh.

| Mesh | MVS (Filtered) | Kinect (SR) | Merged |
|---|---|---|---|
| Samples | 17928 pts | 20639 pts | 20455 pts |
| Minimum | 0.000000 | 0.000003 | 0.000000 |
| Maximum | 0.687741 | 0.172765 | 0.124484 |
| Mean | 0.026021 | 0.028209 | 0.012780 |
| RMS | 0.082436 | 0.038791 | 0.023629 |
| Reference | Fig. 6(a) | Fig. 6(b) | Fig. 6(c) |

Table 3: Hausdorff Distances for 3D surface reconstructions of the Jaguar pan.

| Mesh | MVS (Filtered) | Kinect (SR) | Merged |
|---|---|---|---|
| Samples | 12513 pts | 13034 pts | 13147 pts |
| Minimum | 0.000005 | 0.000002 | 0.000001 |
| Maximum | 0.750001 | 0.173569 | 0.139575 |
| Mean | 0.051147 | 0.017597 | 0.019753 |
| RMS | 0.091608 | 0.028266 | 0.026867 |

Texturing results using surfaces from the merged point clouds are shown in Fig. 8. This stage is satisfactory thanks to the high quality of the images used and to the camera positions, correctly aligned and undistorted with respect to the target object from the SFM results.

The images and respective poses used by the SFM system were unable to texture the bottom of the objects, since the bottom view was not visible. A new camera pose with an image of the bottom view was manually added to the SFM output, and the texturing was reapplied to cover this angle.

All procedures described in this section were performed on an Avell G1550 MUV notebook with an Intel Core i7-9750H CPU @ 2.60GHz × 12, 16 GB of RAM, and a GeForce RTX 2070 graphics card, running Ubuntu 16.04 64-bit.

## 5 CONCLUSION

With the proposed pipeline, it is possible to combine 3D capture information, reconstructing details beyond what a single low-cost capture method initially provides. A low-cost depth sensor allows preliminary verification of the data during acquisition, and the Super-Resolution methodology reduces the incidence of noise and mitigates the low level of detail of depth maps acquired with low-cost RGB-D hardware. Photogrammetry, despite capturing a higher level of detail, has limitations related to the availability of geometric and texture features.


Figure 6: Screened Poisson Surface Reconstruction results for the Turtle pan point clouds. The reconstruction depth is 7 and the minimum number of samples is 3 for all experiments. In (a), the limiting factor was the bottom part of the object, which is not inferred by the photogrammetry process. (b) shows that the low-cost depth sensor was unable to identify details of the model due to the small size of the object; however, this mesh was able to represent the model in all directions, including the bottom. The merged mesh (c) reproduces all the small details found by photogrammetry and includes regions that were represented only by the depth sensor captures. For comparison, (d) presents the ground-truth model used for 3D printing.


Figure 7: Hausdorff Distance of the Turtle pan mesh obtained with the proposed pipeline (Fig. 6(c)). 20455 sampled vertices were matched to the closest vertices on the ground truth: minimum of 0.0 (red), maximum of 0.124484 (blue), mean of 0.012780, and RMS of 0.023629. Values are in mesh units, relative to the diagonal of the bounding box of the mesh. The main limitation of the result is the bottom part, which was inferred only by the depth sensor.

The texturing process, using high-definition images from the SFM output and adding possibly missing parts when needed, also helps achieve greater visual realism in the reconstructed 3D model.

Future research involves a quantitative analysis of the 3D reconstruction after the texturing step. We also plan to automate the scale alignment of point clouds using a scale-aware iterative closest point algorithm (scaled PCA-ICP) and to apply this pipeline to the digital preservation of artifacts from the cultural heritage collection of the MAE/UFBA.

## REFERENCES

[1] M. Berger, A. Tagliasacchi, L. M. Seversky, P. Alliez, G. Guennebaud, J. A. Levine, A. Sharf, and C. T. Silva. A survey of surface reconstruction from point clouds. Computer Graphics Forum, 36(1):301-329, 2017. doi: 10.1111/cgf.12802

[2] F. Bernardini and H. Rushmeier. The 3D model acquisition pipeline. Computer Graphics Forum, 21(2):149-172, 2002. doi: 10.1111/1467-8659.00574

[3] S. Bianco, G. Ciocca, and D. Marelli. Evaluating the performance of structure from motion pipelines. Journal of Imaging, 4(8), 2018. doi: 10.3390/jimaging4080098

[4] G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000.

[5] D. Cernea. OpenMVS: Multi-view stereo reconstruction library, 2020.

[6] H. Chen, Y. Feng, J. Yang, and C. Cui. 3D reconstruction approach for outdoor scene based on multiple point cloud fusion. Journal of the Indian Society of Remote Sensing, 47(10):1761-1772, 2019. doi: 10.1007/s12524-019-01029-y

[7] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia. MeshLab: an Open-Source Mesh Processing Tool. In V. Scarano, R. D. Chiara, and U. Erra, eds., Eurographics Italian Chapter Conference. The Eurographics Association, 2008. doi: 10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/129-136

[8] F. Di Paola and L. Inzerillo. 3D reconstruction - reverse engineering - digital fabrication of the Egyptian Palermo Stone using smartphone and structured light scanner. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-2:311-318, 2018. doi: 10.5194/isprs-archives-XLII-2-311-2018

[9] P. Falkingham. Low cost 3D scanning using off-the-shelf video gaming peripherals. Journal of Paleontological Techniques, 11:1-9, 2013.

[10] C. Hernández and G. Vogiatzis. Shape from Photographs: A Multiview Stereo Pipeline, pp. 281-311. Springer Berlin Heidelberg, Berlin, Heidelberg, 2010. doi: 10.1007/978-3-642-12848-6_11

[11] D. Holz, A. E. Ichim, F. Tombari, R. B. Rusu, and S. Behnke. Registration with the Point Cloud Library: A modular framework for aligning in 3-D. IEEE Robotics & Automation Magazine, 22(4):110-124, Dec 2015. doi: 10.1109/MRA.2015.2432331

Figure 8: Texturing results using SFM camera pose estimates over Screened Poisson Reconstruction models of the merged point clouds. In addition to the objects previously evaluated, other models generated through our pipeline are presented in this figure.

[12] A. Hosseininaveh Ahmadabadian, A. Karami, and R. Yazdan. An automatic 3D reconstruction system for texture-less objects. Robotics and Autonomous Systems, 117:29-39, 2019. doi: 10.1016/j.robot.2019.04.001

[13] Y. H. Jo and S. Hong. Three-dimensional digital documentation of cultural heritage site based on the convergence of terrestrial laser scanning and unmanned aerial vehicle photogrammetry. ISPRS International Journal of Geo-Information, 8(2), 2019. doi: 10.3390/ijgi8020053

[14] M. Kazhdan and H. Hoppe. Screened Poisson surface reconstruction. ACM Trans. Graph., 32(3):29:1-29:13, July 2013. doi: 10.1145/2487228.2487237

[15] N. Mellado, D. Aiger, and N. J. Mitra. Super 4PCS: Fast global pointcloud registration via smart indexing. In Proceedings of the Symposium on Geometry Processing, SGP '14, pp. 205-215. Eurographics Association, Goslar, DEU, 2014. doi: 10.1111/cgf.12446

[16] O. Muratov, Y. Slynko, V. Chernov, M. Lyubimtseva, A. Shamsuarov, and V. Bucha. 3DCapture: 3D reconstruction for a smartphone. In 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 893-900, June 2016. doi: 10.1109/CVPRW.2016.116

[17] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In 2011 10th IEEE International Symposium on Mixed and Augmented Reality, pp. 127-136, Oct 2011. doi: 10.1109/ISMAR.2011.6092378

[18] OpenKinect. libfreenect, 2012.

[19] A. Prokos, G. Karras, and L. Grammatikopoulos. Design and evaluation of a photogrammetric 3D surface scanner. hand, 2:2, 2009.

[20] P. Raimundo. Low-cost 3D reconstruction of cultural heritage. Master's thesis, Universidade Federal da Bahia, Salvador, 2018.

[21] P. Raimundo et al. Low-cost 3D reconstruction of cultural heritage artifacts. Revista Brasileira de Computação Aplicada, 10(1):66-75, May 2018. doi: 10.5335/rbca.v10i1.7791

[22] P. Raimundo et al. Improved point clouds from a heritage artifact depth low-cost acquisition. Revista Brasileira de Computação Aplicada, 12(1):84-94, Feb. 2020. doi: 10.5335/rbca.v12i1.10019

[23] R. B. Rusu and S. Cousins. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA). Shanghai, China, May 9-13 2011.

[24] J. L. Schönberger and J.-M. Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

[25] A. D. Sergeeva and V. A. Sablina. Using structure from motion for monument 3D reconstruction from images with heterogeneous background. In 2018 7th Mediterranean Conference on Embedded Computing (MECO), pp. 1-4, June 2018. doi: 10.1109/MECO.2018.8406058

[26] J. W. Silva, L. Gomes, K. A. Agüero, O. R. P. Bellon, and L. Silva. Real-time acquisition and super-resolution techniques on 3D reconstruction. In 2013 IEEE International Conference on Image Processing, pp. 2135-2139, Sep. 2013. doi: 10.1109/ICIP.2013.6738440

[27] M. Waechter, N. Moehrle, and M. Goesele. Let there be color! Large-scale texturing of 3D reconstructions. In D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, eds., Computer Vision-ECCV 2014, pp. 836-850. Springer International Publishing, Cham, 2014.

[28] M. Zollhöfer, C. Siegl, B. Riffelmacher, M. Vetter, B. Dreyer, M. Stamminger, and F. Bauer. Low-Cost Real-Time 3D Reconstruction of Large-Scale Excavation Sites using an RGB-D Camera. In R. Klein and P. Santos, eds., Eurographics Workshop on Graphics and Cultural Heritage. The Eurographics Association, 2014. doi: 10.2312/gch.20141298

papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/wLtLeiJNRKb/Initial_manuscript_tex/Initial_manuscript.tex

§ IMPROVED LOW-COST 3D RECONSTRUCTION PIPELINE BY MERGING DATA FROM DIFFERENT COLOR AND DEPTH CAMERAS

Category: Research

§ ABSTRACT

The performance of traditional 3D capture methods directly influences the quality of digitally reconstructed 3D models. To obtain complete and well-detailed low-cost three-dimensional models, this paper proposes a 3D reconstruction pipeline using point clouds from different sensors, combining captures of a low-cost depth sensor post-processed by Super-Resolution techniques with high-resolution RGB images from an external camera processed by Structure from Motion and Multi-View Stereo. The main contributions of this work include the description of a complete pipeline that improves the information acquisition stage and merges data from different sensors. Several phases of the 3D reconstruction pipeline were also specialized to improve the model's visual quality. The experimental evaluation demonstrates that the developed method produces good and reliable results for low-cost 3D reconstruction of an object.

Keywords: Low-Cost 3D Reconstruction, Depth Sensor, Photogrammetry.

Index Terms: Computer graphics, Shape modeling, 3D Reconstruction.

§ 1 INTRODUCTION

3D reconstruction makes it possible to capture the geometry and appearance of an object or scene, allowing us to inspect details without risk of damage, measure properties, and reproduce 3D models in different materials [21]. In recent years, numerous advances in 3D digitization have been observed, mainly through pipelines for three-dimensional reconstruction using costly high-precision 3D scanners. In addition, recent research has sought to reconstruct objects or scenes using depth images from low-cost acquisition devices (e.g., the Microsoft Kinect sensor [17]) or using Structure from Motion (SFM) [24] combined with Multi-View Stereo (MVS) [5] on RGB images.

Good-quality 3D reconstructions demand substantial financial resources, as they require state-of-the-art equipment to capture object data with high precision and detail. Low-resolution equipment, on the other hand, implies lower-quality capture, even though it is financially more viable. Even with ease of operation, light weight, and portability, low-cost approaches must take into account the limitations of the scanning equipment used [20].

The acquisition step of a 3D reconstruction pipeline refers to the use of devices to capture data from objects in a scene, such as their geometry and color [22]. One result of 3D geometry capture is a collection of discrete points that describes the model shape, called a point cloud. The data obtained in this step are used in all other phases of the 3D reconstruction process [2].

Active capture methods use equipment such as scanners to infer object geometry through a beam of light, inside or outside the visible spectrum. Scanner sensors have the advantages of fast measuring speed, robustness to external factors, and ease of information acquisition. Active sensors also perform well when reconstructing texture-less and featureless surfaces [6, 22]. The sensors need to be sensitive to small variations in the acquired information, since for small differences in distance the variation in the time the light takes to reach two different points is very low, requiring low equipment latency and good response time. For this reason, these systems tend to be slightly noisy [21]. For low-cost reconstruction approaches, the difficulty of capturing color with high precision is a disadvantage [10].

Passive methods are based on optical imaging techniques. They are highly flexible and work well with any modern digital camera. Image-based 3D reconstruction is practical, non-intrusive, low-cost and easily deployable outdoors. Various properties of the images can be used to retrieve the target shape, such as material, viewpoints and illumination. As opposed to active techniques, image-based techniques provide an efficient and easy way to acquire the color of a target object [10]. Although passive reconstructions mainly using SFM and MVS produce excellent results, they have limitations like the difficulty of distinguishing the target object from the background [25] and require the target object to have detailed geometry [6]. A controlled environment is needed to obtain better reconstruction results $\left\lbrack {{12},{24}}\right\rbrack$ .
Considering the limitations imposed by the approaches presented above, a target whose geometry has been captured by a single low-cost method poses a real challenge when it comes to expressing its complete shape with rich, small details [6].
This paper proposes a hybrid pipeline that combines a low-cost depth camera (low-resolution images) and an external color camera (a digital camera producing high-resolution RGB images) to estimate and reconstruct the surface of an object and apply a high-quality texture. The limitations of each data acquisition approach are bypassed, generating a complete and well-detailed replica of the target model with high visual quality. To achieve this, the project uses a variation and combination of Structure from Motion, Multi-View Stereo and depth camera capture techniques.
Although there are mature projects aimed at low-cost 3D reconstruction, few describe step by step how to overcome the limitations of low-cost three-dimensional data capture using the best features of every phase of the pipeline to obtain a model as realistic as possible. The main contribution of this work is the description of a complete pipeline that uses post-processed depth captures and merges data from different sensors, in which the depth sensor data and the high-resolution color images do not need to be synchronized.
As it is a post-processing task (performed after capturing/estimating depth data), this work also includes the detection of the region of interest, based on the average distance of the scene, removing points that do not belong to the target object, and it allows the inclusion of new images containing regions of the target object not previously photographed to improve the results of the texturing step.
In addition to this introductory section, this work is organized as follows: Section 2 presents related work, while Section 3 describes the proposed pipeline. The experiments and evaluation of the pipeline are presented in Section 4. Finally, Section 5 discusses the final considerations and results achieved by this research.
§ 2 RELATED WORK
Prokos et al. [19] proposed a hybrid approach combining shape from stereo (with additional geometric constraints) and laser scanning techniques. Using two cameras and a portable laser beam, they achieved accuracy as good as some high-end laser triangulation scanners. However, their method does not automatically detect outliers in the results.
The KinectFusion system [17] tracks the pose of a portable depth camera (Kinect) as it moves through space and performs good three-dimensional surface reconstruction in real time. The Kinect sensor has considerable limitations, including temporal inconsistency and the low resolution of the captured color and depth images [22]. Real-time reconstruction is not a requirement for well-detailed, accurate, and complete reconstructions.
Silva et al. [26] provide a guided reconstruction process using Super-Resolution (SR) techniques, helping to increase the quality of low-resolution data captured with a low-cost sensor. This method of data acquisition using low-cost depth cameras and SR was further improved by Raimundo [22]. Even with depth image improvements, a poor registration of the captures can affect the final model's shape.
Falkingham [9] demonstrates the potential applications of low-cost technology in the field of paleontology. The Microsoft Kinect was used to digitize specimens of various sizes, and the resulting digital models were compared with models produced using SFM and MVS. The work pointed out that although the Kinect generally registers morphology at a lower resolution, capturing less detail than photogrammetry techniques, it offers advantages in the speed of data acquisition and in generating the 3D mesh in real time during data capture. Also, they did not use Super-Resolution to improve the captures from low-cost devices, and the models produced by the Kinect lack any color information.
Zollhöfer et al. [28] used a Kinect sensor to capture the geometry of an excavation site and took advantage of a topographic map to distort the reconstructed model, significantly increasing the quality of the scene. The global distortion, with Super-Resolution techniques applied to raw scans, significantly increased the fidelity and realism of their results, but the approach is too specialized for large-scale scenes.
Paola and Inzerillo [8], in order to digitally reproduce the Egyptian stone of Palermo, proposed a method combining a structured light scanner, smartphones and SFM to apply texture to the highly accurate mesh generated by the scanner. The main challenges were the dark color of the material and the shallowness of the grooves of the hieroglyphs, which some capture approaches have difficulty recognizing. The level of detail of the applied texture proved quite accurate. This reference work used a high-resolution 3D scanner and did not aim at low-cost reconstruction.
Jo and Hong [13] use a combination of terrestrial laser scanning and Unmanned Aerial Vehicle (UAV) photogrammetry to establish a three-dimensional model of the Magoksa Temple in Korea. The scans were used to acquire the perpendicular geometry of buildings and locations, and were aligned and merged with the photogrammetry output, producing a hybrid point cloud. The photogrammetry adds value to the 3D model, complementing the point cloud with the upper parts of buildings, which are difficult to acquire through laser scanning.
Chen [6] proposes a registration method that combines data from a laser scanner and photogrammetry to reconstruct real outdoor 3D scenes, greatly increasing the accuracy and convenience of operation. The two sensors can work independently; the method fuses their data even when at different scales. Mesh reconstruction and texturing were not explored in that work.
Raimundo et al. [21] point out in their bibliographic review several studies that successfully used advanced rendering techniques such as global illumination, ambient occlusion, normal mapping, shadow baking, per-vertex lighting, and level of detail. These rendering techniques also improve the final presentation of 3D reconstructions.
§ 3 PIPELINE PROPOSAL
To overcome the limitations of the low-cost three-dimensional data acquisition process, the following pipeline is proposed: capture of depth and color images (using a low-cost depth sensor and a digital camera); generation of point clouds from the low-cost RGB-D camera's depth images (using SR techniques [22]); shape estimation from RGB images (using SFM [24] and MVS [5]); merging of the data from these different capture techniques; mesh generation; and texturing with high-quality photos (Fig. 1). Several phases of the pipeline were specialized to achieve better accuracy and visual quality for 3D reconstructions of small and medium scale objects. The proposed pipeline works offline.
Figure 1: Schematic diagram for the proposed pipeline and the 3D reconstruction processes of an object.
§ 3.1 DATA ACQUISITION
For capture using a low-cost depth sensor, the following acquisition protocol is established: take several depth captures, moving the sensor around the object, and define the limits of the capture volume. Furthermore, a turntable can also be used, yielding a more controlled capture and alignment process. The number of views captured is smaller than in real-time approaches due to the additional processing required to ensure the quality of each capture. Considering the quality requirements of this work, an interactive tool [20] is used to acquire the raw data from the depth sensor (Fig. 2).
The depth capture method will present results proportional to the quality of the device's captures, that is, the lower the incidence of noise and the better the accuracy of the inferred depth, the better the result. With this in mind, each depth image goes through a filtering step with the application of Super-Resolution [22]. To provide high-resolution information beyond what is possible with a specific sensor, several low-resolution captures are merged, recreating as much detail as possible.
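The full Super-Resolution method of [22] is more elaborate; as a minimal illustrative sketch (names hypothetical, not the pipeline's actual code), registered low-resolution depth frames can be fused with a robust per-pixel statistic, which suppresses sensor noise and fills dropped pixels:

```python
import numpy as np

def fuse_depth_frames(frames, invalid=0.0):
    """Fuse registered low-resolution depth frames by per-pixel median.

    frames: list of equally sized 2D arrays (depth in mm); pixels equal to
    `invalid` are treated as missing and ignored. This is only a toy
    stand-in for the Super-Resolution method cited in the text.
    """
    stack = np.stack(frames).astype(float)      # shape (n, h, w)
    stack[stack == invalid] = np.nan            # mask missing depth readings
    fused = np.nanmedian(stack, axis=0)         # median is robust to outliers
    return np.nan_to_num(fused, nan=invalid)    # keep all-missing pixels invalid

# Example: three noisy captures of a flat surface at about 800 mm
rng = np.random.default_rng(0)
frames = [800 + rng.normal(0, 5, (4, 4)) for _ in range(3)]
frames[0][0, 0] = 0.0                           # a dropped pixel in one frame
fused = fuse_depth_frames(frames)
```

The dropped pixel is recovered from the remaining frames, while valid pixels converge toward the true depth as more frames are added.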
To add 3D information in greater detail and to enable a simple high-quality texturing process, photographs are taken with a digital camera around the target object. In our pipeline, these captures are independent of the depth sensor; we only need to take pictures of the fixed object while moving the camera freely. The set of images must be sufficient to cover most of the object's surface, and the images must portray, in pairs, common parts of it. The color images are used in the SFM pipeline.
The SFM pipeline detects characteristics in the images (feature detection), maps these characteristics between images and finds descriptors capable of representing a distinguishable region (feature matching). These descriptors represent vertices of the reconstruction of the 3D scene (sparse reconstruction). The greater the number of matches found between the images, the greater the accuracy of the 3D transformation matrix calculated between them, providing the estimation of the relative position between camera poses [3, 10].
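The matching step can be sketched with Lowe's ratio test, a standard filter for ambiguous correspondences (a toy brute-force version; real pipelines such as COLMAP use SIFT descriptors with approximate nearest-neighbour search):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature descriptors with Lowe's ratio test (toy sketch).

    A match (i, j) is kept only when the nearest neighbour in desc_b is
    clearly closer than the second nearest, rejecting ambiguous regions.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# Two tiny descriptor sets with two clear correspondences and one decoy
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.9, 0.1], [0.1, 0.9], [5.0, 5.0]])
print(match_descriptors(a, b))   # [(0, 0), (1, 1)]
```

A descriptor equidistant from two candidates (e.g. `[0.5, 0.5]` against `b` above) fails the ratio test and produces no match, which is exactly the behaviour that keeps repetitive or featureless regions from polluting the sparse reconstruction.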
Figure 2: Software to acquire and process depth images. The slider controls the capture limits (in millimeters) and the cut limits (in pixels), effectively determining the capture volume.
Photographs with good resolution and objects with a higher level of detail tend to bring greater precision to the photogrammetry algorithms. For objects with fewer details and features, the environment can be used to achieve better results [24]. In addition to using the estimated structure to improve the geometry captured by the depth sensor, we use the cameras' pose estimations to easily and directly apply texture over the final model surface.
The Multi-View Stereo process is used to improve the point cloud obtained by SFM, resulting in a dense reconstruction. As the camera parameters such as position, rotation, and focal length are already known from SFM, the MVS computes 3D vertices in regions not detected by the descriptors. Multi-View Stereo algorithms generally have good accuracy, even with few images [10].
For this image-based point cloud result, to highlight the target object, a method of detecting the region of interest can be used. A simple algorithm detects the centroid of the set of 3D points and removes points beyond a radius from it. If the floor below the object is discernible, it is also possible to use a planar segmentation algorithm to remove the plane. A statistical removal algorithm can also be used to remove outliers. If even more accurate outlier removal is required, a manual process using a user interface tool can be performed. Most of the outliers and the background are removed by the proposed steps, minimizing working time.
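As an illustrative sketch of these filters (hypothetical helpers, not the pipeline's actual code, which relies on PCL), a centroid-radius crop and a simplified statistical outlier removal can be written as:

```python
import numpy as np

def crop_by_radius(points, radius):
    """Keep only points within `radius` of the cloud centroid."""
    centroid = points.mean(axis=0)
    d = np.linalg.norm(points - centroid, axis=1)
    return points[d <= radius]

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than `std_ratio` standard deviations above the global mean (a brute-force
    simplification of PCL-style statistical outlier removal)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self-distance
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

rng = np.random.default_rng(1)
cloud = rng.normal(0, 0.1, (200, 3))             # dense target object
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])    # one far-away background point
cropped = crop_by_radius(cloud, radius=1.0)      # the stray point is removed
```

Both filters are O(n²) here for clarity; production code would use a k-d tree.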
Although image-based 3D reconstructions capture greater detail than low-cost depth sensors [9], this approach may not be able to estimate the object in its entirety (Fig. 3). This is a common result when the captures do not fully describe the target model, or when it does not have very distinguishable texture or detail.
Figure 3: Some parts of the surface may not be estimated by the photogrammetry process. In (a) the white and smooth painting of the object (b) prevents the MVS algorithm from obtaining a greater number of points defining this part of the model's structure, leaving this featureless surface region with a lower density of points than others.
The algorithms used in the next steps require an oriented set of data; thus, the normals of the point clouds are estimated before performing the alignment step. A k-nearest-neighbors normal estimation algorithm is used for this task.
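A minimal sketch of k-NN PCA normal estimation (a brute-force toy; PCL's normal estimation is the kind of implementation actually used): each normal is the eigenvector of the neighbourhood covariance with the smallest eigenvalue.

```python
import numpy as np

def estimate_normals(points, k=10):
    """Estimate per-point normals as the smallest-eigenvalue eigenvector of
    the covariance of each point's k nearest neighbours (k-NN PCA).
    Orientation (sign) is left ambiguous, as in the basic method."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    normals = np.empty_like(points)
    for i, row in enumerate(d):
        nbrs = points[np.argsort(row)[:k]]      # k nearest, including the point
        cov = np.cov(nbrs.T)                    # 3x3 neighbourhood covariance
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]              # direction of least variance
    return normals

# Points sampled on the z = 0 plane should get normals close to ±(0, 0, 1)
rng = np.random.default_rng(2)
plane = np.column_stack([rng.uniform(-1, 1, (50, 2)), np.zeros(50)])
n = estimate_normals(plane)
```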
§ 3.2 ALIGNMENT
To deal with the problem of aligning the point clouds from the acquisition phase, transformations are applied to place all captures in a global coordinate system. This alignment is usually performed in two stages: a coarse and a fine alignment step.
To perform the initial alignment between the point clouds obtained by the depth sensor, we use global alignment algorithms in which pairs of three-dimensional captures are roughly aligned [15]. Given the initial alignment between the captured views, the Iterative Closest Point (ICP) algorithm [11] is executed to obtain a fine alignment. After pairwise incremental registration, an algorithm for global minimization of the accumulated error is run.
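The fine-alignment step can be sketched as point-to-point ICP: alternate between nearest-neighbour pairing and a closed-form rigid fit (Kabsch/SVD). This is an illustrative toy, not the production registration code, and it assumes a reasonable coarse initialization:

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Minimal point-to-point ICP: pair each source point with its nearest
    destination point, then solve the best rigid transform in closed form
    (Kabsch/SVD). Returns the accumulated R, t and the aligned points."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        pairs = dst[d.argmin(axis=1)]                 # nearest-neighbour pairing
        mu_c, mu_p = cur.mean(axis=0), pairs.mean(axis=0)
        H = (cur - mu_c).T @ (pairs - mu_p)           # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                       # proper rotation only
        t_step = mu_p - R_step @ mu_c
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step        # accumulate transform
    return R, t, cur

# Recover a small known rotation + translation between two copies of a cloud
rng = np.random.default_rng(3)
dst = rng.uniform(-1, 1, (60, 3))
a = 0.05                                              # 0.05 rad around z
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.03, -0.02, 0.01])
R, t, aligned = icp_point_to_point(src, dst)
```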
The initial alignment step may not produce good results due to the nature of the depth data, such as the low number of discernible points shared between two point clouds [20], so the registration may drift. With this in mind, we use the point cloud obtained by photogrammetry as an auxiliary reference to apply a new alignment to the depth sensor point clouds, distorting the transformation and propagating the accumulated error between consecutive alignments and the loop closure, improving the global registration and the quality of the aligned point cloud.
Figure 4: Porcelain horse. Given the richness of detail this object has, as in the head and saddle, we use the photogrammetry method to capture those regions at the highest level of detail. At the same time, it has few features in predominantly smooth regions, such as the base of the structure and the body of the animal, where we use the depth sensor capture approach, in which this factor does not influence the 3D acquisition process. The data captured by the low-cost depth sensor added information where there are few visible features, as can be seen at the base and legs of the horse.
The point cloud generated by the image-based 3D reconstruction pipeline and the one obtained from the depth sensor captures are created from different image spectra and very commonly have different scales. The point clouds obtained with the depth sensor must be aligned with the corresponding points of the object in the photogrammetry point cloud.
As the depth sensor captures are already in a global coordinate system, to carry out this alignment it is sufficient to scale and transform a single capture to fit the cloud obtained by MVS and apply the same transformation to the others, speeding up the registration process. After that, the ICP algorithm can be reapplied, including the photogrammetry output point cloud. This last point cloud is not transformed; only the remaining captures are aligned to it, because the camera positions that will be used for texturing are expressed in this model's coordinate system.
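The scale-and-fit step can be sketched with the closed-form similarity transform of Umeyama (1991), assuming point correspondences between one depth capture and the MVS cloud are available (a hypothetical helper, not the authors' code):

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Closed-form similarity transform (scale s, rotation R, translation t)
    mapping corresponding points src -> dst, so that dst ≈ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    H = B.T @ A / len(src)                    # cross-covariance of dst vs src
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt
    var_s = (A ** 2).sum() / len(src)         # total variance of the source
    s = np.trace(np.diag(S) @ D) / var_s      # optimal uniform scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# Recover a known scale factor between two corresponding clouds
rng = np.random.default_rng(4)
src = rng.uniform(-1, 1, (40, 3))
dst = 2.5 * src + np.array([1.0, 0.0, -0.5])  # pure scale + translation
s, R, t = umeyama_similarity(src, dst)
```

Once `s`, `R`, `t` are estimated from one capture, the same similarity transform can be applied to every other depth capture, since they already share a coordinate system.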
The merging of point clouds from both data capture approaches increases the information that defines the object's geometry. This resulting point cloud is used in the next steps of the pipeline.
§ 3.3 SURFACE RECONSTRUCTION
The mesh generation step is characterized by surface reconstruction, a process in which a continuous 3D surface is inferred from a collection of discrete points that describe its shape [1].
For this step, we use the Screened Poisson Surface Reconstruction algorithm [14]. This algorithm seeks a surface whose gradient (of the underlying indicator function) is as close as possible to the normals of the vertices of the input point cloud. The choice of a parametric method for surface reconstruction is justified by its robustness and by the possibility of using numerical methods to improve the results. Also, the resulting meshes are almost regular and smooth.
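In outline, the method fits an implicit indicator function $\chi$ to the oriented point set: with $\vec{V}$ the smoothed normal field of the input samples $\{p_i\}$, the screened formulation minimizes a gradient-matching term plus a screening term that anchors the iso-surface to the samples (a schematic form of the energy in [14], with $\alpha$ the screening weight):

$$E(\chi) = \int \left\lVert \nabla \chi(p) - \vec{V}(p) \right\rVert^{2} \, dp \;+\; \alpha \sum_{i} \chi(p_i)^{2}$$

The surface is then extracted as an iso-level of $\chi$, which is why good input normals (estimated in the acquisition phase) are essential.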
§ 3.4 TEXTURE SYNTHESIS
Applying textures to reconstructed 3D models is one of the keys to realism [27]. High-quality texture mapping aims to avoid seams, smoothing the transition between an image used for applying texture and its adjacent one [16].
The texture synthesis phase of the proposed pipeline combines the high-resolution pictures captured with an external digital camera with the integrated model obtained from the previous step of the pipeline.
Figure 5: Jaguar pan replica. Even with some visual characteristics generated by the 3D printing process, the object has very few distinguishable features because of its predominantly white texture. This factor makes the reconstruction process by SFM and MVS difficult. Because of this, we use the environment to assist in detecting the positions and orientations of the cameras. The captures with the depth sensor added information on the legs of the jaguar and the belly (bottom) not acquired by photogrammetry.
The high-resolution photos taken with a digital camera, with their poses calculated using SFM, are used to generate the texture coordinates and the atlas of the model, avoiding a time-consuming manual process.
The images with their respective poses from SFM may not be able to apply texture to faces not visible in any image used for the reconstruction, leaving non-textured surfaces on the three-dimensional model. To overcome this limitation, we post-apply the texture, merging the camera poses resulting from SFM with new photos, whose poses are calculated in the relative coordinate system of the photogrammetry result.
§ 4 EXPERIMENTS AND EVALUATION
For evaluation, we ran the proposed pipeline on objects of varying size and complexity: a porcelain horse-shaped object ("Porcelain horse", Fig. 4) and jaguar- and turtle-shaped clay pan replicas ("Jaguar pan", Fig. 5, and "Turtle pan", Fig. 6, respectively). The latter objects are replicas of cultural objects from the Waurá tribe and belong to the collection of the Brazilian Museum of Archaeology and Ethnology of the Federal University of Bahia (MAE/UFBA). The replicas were three-dimensionally reconstructed by Raimundo [20] and 3D printed. In addition, the turtle replica was colored by hydrographic printing.
In our experiments we used the Microsoft Kinect version 1; however, any other low-cost sensor can be used to capture depth images. This sensor is affordable and captures color and depth information at a resolution of 640 × 480 pixels. To produce point clouds from the low-cost 3D scanner, we used the Super-Resolution approach proposed by Raimundo [22] with 16 Low-Resolution (LR) depth frames.
The photos used as input to the passive 3D reconstruction method were taken with a Redmi Note 8 camera for all evaluated models. The number of photos was arbitrarily chosen to maximize coverage of the object. For the SFM pipeline, the RGB images were processed using COLMAP [24] to calculate camera poses and sparse shape reconstruction. OpenMVS [5] was used for dense reconstruction. For the texturing stage, we used the algorithm proposed by Waechter et al. [27].
Some software tools were developed using third-party libraries for various purposes. For instance, OpenCV [4] and PCL [23] were used to handle and process depth images and point clouds, and libfreenect [18] was used in the depth acquisition application to access and retrieve data from the Microsoft Kinect. The Meshlab system [7] was used for Poisson reconstruction and for adjustments to 3D point clouds and meshes when necessary.
Table 1: Algorithms and main components of each experiment.

| Object | Porcelain horse | Jaguar pan | Turtle pan |
|---|---|---|---|
| Dimensions (cm) | 35 × 12 × 31 | 21.5 × 15 × 7 | 9 × 6.5 × 3.5 |
| Texture | Handmade | Predominantly white | Hydrographic printing |
| Num. of RGB images | 108 | 65 | 29 |
| RGB images resolution | 8000 × 6000 px | 4000 × 1844 px | 8000 × 6000 px |
| SFM algorithm | COLMAP [24] | COLMAP [24] | COLMAP [24] |
| MVS algorithm | OpenMVS [5] | OpenMVS [5] | OpenMVS [5] |
| Depth sensor | Kinect V1 | Kinect V1 | Kinect V1 |
| LR frames per capture | 16 | 16 | 16 |
| SR point clouds | 26 | 22 | 20 |
Figures 4 and 5 show the acquisition, merging, and reconstruction steps proposed by this pipeline for the Porcelain horse and Jaguar pan. The figures also discuss the main challenges of each reconstruction and how they were handled by the pipeline. The algorithms and main components of each experiment are described in Table 1.
The resolution of the clouds obtained by the low-cost sensor with SR is considerably lower than that of the clouds obtained by photogrammetry. This is evident in the turtle's captures and reconstructions (Fig. 6(b)), which show that the low-cost sensor presented a scale limitation. However, it has the advantage of allowing new captures of the object even if it has moved in the scene. Photogrammetry also presented limitations when trying to describe featureless regions of any object (as shown in Fig. 3 and Fig. 5(f)). This does not happen with the depth sensor, since coloring does not influence the capture. The resolution of the images used in the SFM pipeline is also a factor that directly influences the quality and details of the 3D reconstruction. The point clouds obtained by photogrammetry were capable of representing, with good quality, distinguishable details on a millimeter scale. The merging of point clouds helped express the reconstructed objects in greater detail, taking advantage of both capture approaches.
The merged point clouds were down-sampled to facilitate visualization and mesh generation, since the aligned and combined point clouds may have an excessive and redundant number of vertices and there is no guarantee that the sampling density is suitable for proper reconstruction [2]. Point clouds were meshed using the Screened Poisson Surface Reconstruction feature in Meshlab [7], with reconstruction depth 7 and a minimum of 3 samples. It is important to note that mesh production is highly dependent on the variables used to generate the surface. We consider the Poisson Surface Reconstruction parameters defined in this paragraph as the standard for all reconstructions.
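A common down-sampling strategy of this kind is voxel-grid averaging, which replaces all points falling into the same cubic cell with their centroid; a minimal numpy sketch (illustrative only, not the tool actually used):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Down-sample a point cloud by averaging all points that fall into the
    same cubic voxel, reducing redundant vertices before meshing."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # integer voxel ids
    _, inv = np.unique(keys, axis=0, return_inverse=True)   # voxel membership
    out = np.zeros((inv.max() + 1, points.shape[1]))
    counts = np.bincount(inv).astype(float)
    for dim in range(points.shape[1]):                       # per-voxel centroid
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

# 1000 points in a unit cube collapse to at most 4^3 = 64 voxel centroids
rng = np.random.default_rng(5)
cloud = rng.uniform(0, 1, (1000, 3))
down = voxel_downsample(cloud, voxel_size=0.25)
```

The voxel size trades detail for compactness: it should stay below the smallest geometric feature that the Poisson reconstruction is expected to preserve.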
For quantitative validation, the reconstructed 3D surfaces of the Turtle (Fig. 6) were compared with the model used for 3D printing (ground truth in Fig. 6(d)). For this comparison, we used the Hausdorff Distance tool of Meshlab [7]. The results are discussed in Table 2 and graphically represented in Fig. 7.
The same quantitative validation was carried out with the reconstructions of the Jaguar's 3D surfaces and its respective model used for 3D printing. The results are presented in Table 3; as with the turtle's Hausdorff distances, the reconstruction of the jaguar with this pipeline achieves a better mean and lower maximum and minimum values when compared with the individual approaches.
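The reported statistics can be reproduced in spirit with a brute-force sampled one-sided Hausdorff computation (a simplified analogue of Meshlab's tool, operating on raw vertices rather than sampled surface points):

```python
import numpy as np

def one_sided_hausdorff(src, ref):
    """Sampled one-sided Hausdorff statistics: for every vertex of `src`,
    the distance to the closest vertex of `ref`. Returns the minimum,
    maximum, mean and RMS of these distances."""
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2).min(axis=1)
    return d.min(), d.max(), d.mean(), np.sqrt((d ** 2).mean())

# A reconstruction that deviates from ground truth at a single vertex
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
src = ref.copy()
src[0] = [0.1, 0.0, 0.0]                  # 0.1 units of error at one vertex
dmin, dmax, dmean, rms = one_sided_hausdorff(src, ref)
```

For meaningful absolute numbers the distances are usually normalized, e.g. by the diagonal of the ground truth's bounding box, as done in the tables above.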
All objects studied benefited from the merging of point clouds, as Poisson surface reconstruction identifies and differentiates nearby geometric details, some of which are added by the merging. It was noticed that, when the points are evenly spaced, the resulting mesh is smoother and more accurate.
Table 2: Hausdorff Distances for 3D surface reconstructions of the Turtle pan. Each vertex sampled from the source mesh is matched to the closest vertex on the ground truth. Values are in mesh units, relative to the diagonal of the mesh's bounding box.

| **Mesh** | MVS (Filtered) | Kinect (SR) | Merged |
|---|---|---|---|
| Samples | 17928 pts | 20639 pts | 20455 pts |
| Minimum | 0.000000 | 0.000003 | 0.000000 |
| Maximum | 0.687741 | 0.172765 | 0.124484 |
| **Mean** | 0.026021 | 0.028209 | 0.012780 |
| RMS | 0.082436 | 0.038791 | 0.023629 |
| Reference | Fig. 6(a) | Fig. 6(b) | Fig. 6(c) |
Table 3: Hausdorff Distances for 3D surface reconstructions of the Jaguar pan.

| **Mesh** | MVS (Filtered) | Kinect (SR) | Merged |
|---|---|---|---|
| Samples | 12513 pts | 13034 pts | 13147 pts |
| Minimum | 0.000005 | 0.000002 | 0.000001 |
| Maximum | 0.750001 | 0.173569 | 0.139575 |
| **Mean** | 0.051147 | 0.017597 | 0.019753 |
| RMS | 0.091608 | 0.028266 | 0.026867 |
Texturing results using surfaces from merged point clouds are shown in Fig. 8. This stage is satisfactory due to the high quality of the images used and to the camera positions being correctly aligned and undistorted with respect to the target object in the SFM results.
The images with their respective poses used by the SFM system were not able to apply texture to the bottom of the objects, since the bottom view was not visible. A new camera pose, with an image of the bottom view, was manually added to the SFM output, and the texturing was re-applied for this uncovered angle.
Every procedure described in this section was performed on an Avell G1550 MUV notebook with an Intel Core i7-9750H CPU @ 2.60 GHz × 12, 16 GB of RAM and a GeForce RTX 2070 graphics card, running Ubuntu 16.04 64-bit.
§ 5 CONCLUSION
With the proposed pipeline, it is possible to combine 3D capture information, reconstructing details beyond what a single low-cost capture method initially provides. A low-cost depth sensor allows preliminary verification of data during acquisition. The Super-Resolution methodology reduces the incidence of noise and mitigates the low amount of detail in depth maps acquired with low-cost RGB-D hardware. Photogrammetry, despite capturing a higher level of detail, has limitations related to the amount of available features, such as geometric and texture details.
Figure 6: Screened Poisson Surface Reconstruction results for the Turtle pan point clouds. The reconstruction depth is 7, while the minimum number of samples is 3 for all experiments. In (a) the limiting factor was the bottom part of the object, which is not inferred by the photogrammetry process. (b) shows that the low-cost depth sensor was unable to identify details of the model; this is due to the small size of the object, which makes it difficult to obtain details. However, this mesh was able to represent the model in all directions, including the bottom. The merged mesh (c) was able to reproduce all the small details found by photogrammetry and include regions that were represented only by the depth sensor captures. For comparison, (d) presents the ground truth model used for 3D printing.
Figure 7: Hausdorff Distance of the Turtle pan mesh produced by the proposed pipeline (Fig. 6(c)). 20455 sampled vertices were matched to the closest vertices on the ground truth. Minimum of 0.0 (red), maximum of 0.124484 (blue), mean of 0.012780 and RMS of 0.023629. Values are in mesh units, relative to the diagonal of the mesh's bounding box. The main limitation of the results was the bottom part, which was inferred only by the depth sensor.
The texturing process using high-definition images from the SFM output, adding possibly missing parts when needed, also helps achieve greater visual realism in the reconstructed 3D model.
Future research involves a quantitative analysis of the 3D reconstruction after the texturing step. We also plan to automate the alignment of point clouds using the scale-based iterative closest point algorithm (scaled PCA-ICP) and to apply this pipeline to the digital preservation of artifacts from the cultural heritage collection of the MAE/UFBA.
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/z66fCE6_Ja0/Initial_manuscript_md/Initial_manuscript.md
# Artistic Recoloring of Image Oversegmentations

Figure 1: Recoloring of a region-based abstraction. The figure shows the original image on the left and the recolored images produced by our proposed method using three different palettes.

## Abstract

We propose a method to assign vivid colors to regions of an oversegmented image. We restrict the output colors to those found in an input palette, and seek to preserve the recognizability of structure in the image. Our strategy is to match the color distances between the colors of adjacent regions with the color differences between the assigned palette colors; thus, assigned colors may be very far from the original colors, but both large local differences (edges) and small ones (uniform areas) are maintained. We use the widest path algorithm on a graph-based structure to set a priority order for recoloring regions, and traverse the resulting tree to assign colors. Our method produces vivid colorizations of region-based abstraction using arbitrary palettes. We demonstrate a set of stylizations that can be generated by our algorithm.

Keywords: Non-photorealistic rendering, Image stylization, Recoloring, Abstraction.

Index Terms: I.3.3 [Picture/Image Generation]; I.4.6 [Segmentation]

## 1 INTRODUCTION

Color plays an important role in image aesthetics. In representational art, artists employ colors that match the perceived colors of objects in the depicted scene. Conversely, abstraction provides more freedom to manipulate colors. Figure 2 shows a modern vector illustration of an owl, an example of Fauvism by André Derain, and an abstraction of the Eiffel Tower by Robert Delaunay. The artists have expressed the image content through vivid colors carefully assigned to various image regions. We aim for a method that can generate colorful images, recoloring an image using an arbitrary input palette. In this paper, we describe a method for recoloring an image based on a subdivision of the image into distinct segments, recoloring each segment based on its relationship with its neighbours.

Our goal is to present different recoloring possibilities by assigning colors to each region of an oversegmented input image. We aim to maintain the image contrast and preserve strong edges so that the content of the scene remains recognizable. It is important to convey textures and small features, often too delicate to be preserved by existing abstraction methods. We would like to be able to create wild and vivid abstractions through the use of unusual palettes.

Manual recoloring of oversegmented images would be tedious: the images contain hundreds or thousands of segments, and clicking on every segment would take a long time, even leaving aside the cognitive and interaction overhead of making selections from the palette. We provide an automatic assignment of colors to regions. The assignment can be used as is, can support a fast manual assessment loop (for example, if an artist wanted to choose a suitable palette for coloring a scene), or can serve as a good starting point for a semi-automatic approach in which a user makes minor modifications to the automated results.

In this paper, we present an automatic recoloring approach for a region-based abstraction. The input is a desired palette and an oversegmented image. The method assigns a color from the palette to each region; it is based on the widest path algorithm [17], which organizes the regions into a tree based on the weight of the edges connecting them. We use color differences between adjacent regions both to order the regions and to select colors, trying to match the difference magnitude between the assigned palette colors and the original region colors. The use of color differences allows structures in the image to remain recognizable despite breaking the link between the original and depicted color.

Our main contributions are as follows:

- We designed a recoloring method for an oversegmented image, creating multiple abstractions colored with just one palette. Our method creates wild, high-contrast images.

- Various styles can be created by our method. We experiment with color blending between regions to produce smooth images. By reducing the number of palette colors, we simplify the recolored images. Moreover, we generate new colorings from a palette by applying different metrics and color spaces.

The remainder of the paper is organized as follows. In Section 2, we briefly present related work. We describe our algorithm in Section 3. Section 4 shows results and provides some evaluation, and Section 5 gives some possible variations of the method. Finally, we conclude in Section 6 and suggest directions for future work.

Figure 2: Colorful representational images. A modern vector illustration of an owl; a Fauvist painting by André Derain; an abstraction of the Eiffel Tower by Robert Delaunay.

## 2 Previous Work

Although there is an existing body of work on recoloring photographs [5, 8, 10, 12, 15, 19, 22, 26, 29] and researchers have investigated colorization of non-photorealistic renderings [2, 7, 18, 23, 25, 30, 31], there is room for further exploration. Existing recoloring methods rarely address region-based abstractions. The closest approach, by Xu and Kaplan [28], used optimization to assign black or white to each region of an input image. On the other hand, there are a few approaches to the pattern colorization problem, intended to colorize graphic patterns [3, 11]. Bohra and Gandhi [3] posed the colorization problem as an optimal graph matching problem over color groups in the reference and a grayscale template image. In generating playful palettes [6], the approximation results converge for a number of blobs beyond six or seven. Similarly, the ColorArt method of Bohra and Gandhi [3] enforces an equal number of colors in the reference palette and the template image, and cannot find matches otherwise.

Below, we review some of the previous research on recoloring methods in NPR and color palette selection. These approaches can be broadly classified into example-based recoloring and palette-based recoloring.

## Example-based Recoloring (Color Transfer)

Recoloring methods were first proposed by Reinhard et al. [19], where the colors of one image were transferred to another. They converted the RGB signals to Ruderman et al.'s [20] perception-based color space $L\alpha\beta$. To produce colors, they shifted and scaled the $L\alpha\beta$ space using simple statistics.

Toward color transfer and color style transfer techniques, Neumann and Neumann [15] extracted palette colors from an arbitrary target image and applied 3D histogram matching. They attempt to keep the original hues after the style transformation, so that all colors having the same hue in the original image have the same hue after the matching step. They performed matching transformations on 3D cumulative distribution functions belonging to the original and the style images. However, due to the lack of spatial information, the histogram by itself appears insufficient for true style cloning, particularly because of problems caused by unpredictable noise and gradient effects. They suggest using image segmentation and 3D smoothing of the color histogram to improve the result.

Levin et al. [10] introduced an interactive colorization method for black-and-white images. They used a quadratic cost function derived from the color differences between a pixel and its weighted average neighborhood colors. The user indicates the desired color by scribbling in the interior of a region, instead of tracing out its precise boundary. The colors then propagate automatically to the remaining pixels in the image sequence. However, it sometimes fails at strong edges due to a sensitive scale parameter. Inspired by Levin et al. [10], Yatziv and Sapiro [29] proposed a fast colorization of images based on the concepts of luminance-weighted chrominance blending and fast intrinsic geodesic distance computations. Sykora et al. [25] developed a tool called Lazybrush for the colorization of cartoons, which integrates textures into the images to create 3D-like effects [24], while Casaca et al. [4] used Laplacian coordinates for image division and used a color theme for fast colorization. Fang et al. [7] proposed an interactive optimization method for colorization of hand-drawn grayscale images. To maintain smooth color transitions and control color overflow in textured areas, they used a smooth feature map to adjust the feature vectors.

A few works in NPR composite colors for recoloring. The compositing can be applied using alpha blending [13] or the Kubelka-Munk (KM) equation [9]. For paint-like effects, NPR researchers have mostly worked with the physics-based Kubelka-Munk equation to predict the reflectance of a layer of pigment. Some researchers have tried to mimic the actual decisions artists make when mixing colors to generate palettes. For example, a data-driven color compositing framework by Lu et al. [12] derived three models based on optimized alpha blending, RBF interpolation, and KM optimization to improve the prediction of composited colors. Later, the KM pigment-based model was used for recoloring styles such as watercolor painting [2]. Compositing with the KM model leaves traces of overlapping stroke layers, which can produce near-natural painting effects.

## Palette-based Recoloring

The interest in colorization and recoloring methods opened up new research ideas on color palettes, beyond simple approaches like averaging the colors of the active regions [8]. For many recoloring methods, a user scribbles on each region of the image with a color, and the algorithm does the rest of the work. Early methods for selecting color palettes used Gaussian mixture models or k-means to cluster the image pixels. Chang et al. [5] introduced a photo-recoloring method using user-modified palettes. To select the color palettes, they compute k-means clustering on the image colors, then discard the black entry. Tan et al. [26] proposed a technique to decompose an image into layers to extract the palette colors. Each layer of the decomposition represents a coat of paint of a single color applied with varying opacity throughout the image. To determine a color palette capable of reproducing the image, they analyzed the image geometrically in RGB space using a simplified convex hull.

More recent methods for generating palettes offer greater flexibility for editing, such as Playful Palette [23], a set of color blobs that blend together to create gradients and gamuts. The editable palette keeps the history of previous palettes. DiVerdi et al. [6] also proposed an approximation of image colors based on the Playful Palette. In this technique, within an optimization framework, an objective function minimizes the distance between the original image and the one recolored with palette colors, based on a self-organizing map. The approximation algorithm is an order of magnitude faster than Playful Palette [23], while the quality is lower due to small amounts of shrinkage caused by both the self-organizing map and the clustering step.

There are a limited number of works in the literature that assign colors to regions. For example, Qu et al. [18] proposed a colorization technique for black-and-white manga using the Gabor wavelet filter. A user scribbles on the drawing to connect the regions; the algorithm then assigns colors to different hatching, halftoning, and screening patterns. Vector-art algorithms recolor distinct regions and are more comparable to our recoloring method. Xu and Kaplan introduced artistic thresholding [28], where a graph data structure is built over the segmentation of a source image. They employed an energy function to measure the quality of different black-and-white colorings of the segments. However, they failed to preserve high-level features that cross through the foreground objects. Lin et al. [11] proposed a palette-based recoloring method with a probabilistic model. They learn and predict the distribution of properties such as saturation, lightness, and contrast for individual regions and neighboring regions. They then score pattern colorings using the predicted distributions and the color compatibility model of O'Donovan et al. [16]. Bohra and Gandhi [3] proposed an exemplar-based colorization algorithm for grayscale graphic arts that works from a reference image, based on a color graph and a composition matching method. They retrieve palettes using the spatial features of the input image. They aim to preserve the artist's intent in the composition of different colors and the spatial adjacency between these colors in the image.

## 3 RECOLORING ALGORITHM

We present a recoloring algorithm that automatically assigns colors to regions of an oversegmented image. Our system takes as input a set of regions and a palette containing a set of colors, and assigns a color to each region. The recolored image should convey recognizable objects in the image. Edges are essential to the visibility of structures. Neighboring regions are assigned distinct colors to express an edge, and regions of similar colors are assigned similar colors. The human visual system is sensitive to brightness contrast; to help preserve contrast in our recoloring, we take into account the regions' relative luminance changes when selecting region colors.

We take the strategy of emphasizing color differences between regions, seeking to match the original color distance value between adjacent regions without preserving the colors themselves. Neighbouring regions with large differences will be assigned distant colors, preserving the boundary, while similar-colored regions will be assigned similar output colors or even the same color.

We use a graph structure to organize the segmented image, where each region is a node and edges link adjacent regions. To simplify the color decisions, we will construct a tree over the graph, with a tree traversal assigning colors to nodes based on the decision for the parent node. We therefore assign weights to the edges reflecting their priority order, with large regions, regions of very similar color, and regions of very different color receiving high priority. Small regions and regions with intermediate color differences receive lower priority. Once weights have been assigned, we find the tree within the graph that maximizes the weight of the minimum-weight edges, a construction that corresponds to the widest path problem.

Our algorithm has two main steps. First, we create a tree by applying the widest path algorithm [17] on the region graph. Then we assign colors to regions by traversing the tree beginning from its root. For each region in the graph, we choose the color from the palette that best matches the color difference with its parent. Before starting, we apply histogram matching between the regions' color differences and the palette color differences. The histogram matching allows us to best convey the image content while using the full extent of an arbitrary input palette, even one that has a color distribution very different from that of the input image.

We show a flowchart of our recoloring approach in Figure 3.

The input is (a) an oversegmented image and (b) a palette to use in the recoloring. We compute (c) the adjacency graph of the segments and aggregate the differences between adjacent region colors into the set $Q$. We also compute (e) the set of differences between palette colors, yielding ${\Delta P}$. The widest path algorithm (d) gives us a tree linking all nodes of the adjacency graph. We match the histogram (f) of $Q$ to that of ${\Delta P}$ and (g) assign colors to all regions by traversing the widest-path tree, resulting in (h) the fully recolored image.

Before we explain the recoloring proper, we introduce the widest path problem. We employ the widest path algorithm to create a tree over the input oversegmentation. The tree structure organizes the regions so that those connected by the largest edge weights are processed earlier, preserving edges and contrast in the recolored image. We then traverse the tree and assign a color to each region, matching each edge's target color difference with the difference between the region's assigned color and its parent's color. In practice, it can be convenient to combine the tree creation and traversal, since the widest-path algorithm involves a best-first traversal of the tree as it is being built. Prior to color assignment, we apply histogram matching to align the regions' color differences with the palette's, for better use of the palette colors.

### 3.1 Tree Creation: The Widest Path Problem

Pollack [17] introduced the widest path problem. Consider a weighted graph $G = \left( {V, E}\right)$ consisting of nodes and edges, where an edge $\left( {u, v}\right) \in E$ connects node $u$ to $v$. Let $w\left( {u, v}\right)$ be the weight, called the capacity, of edge $\left( {u, v}\right) \in E$; the capacity represents the maximum flow that can pass from $u$ to $v$ through that edge. The minimum weight among traversed edges defines the capacity of a path. Formally, the capacity $C\left( {u, v}\right)$ of a path between nodes $u$ and $v$ is given by the following:

$$
C\left( {u, v}\right) = \min \left( {w\left( {u, a}\right), w\left( {a, b}\right), \ldots, w\left( {d, v}\right) }\right) \tag{1}
$$

where $w\left( {u, a}\right), w\left( {a, b}\right), \ldots, w\left( {d, v}\right)$ are the edge weights along the path. The widest path between $u$ and $v$ is the path with the maximum capacity among all possible paths.

In a single-source widest path problem, we calculate for each node $t \in V$ a value $B\left( t\right)$, the maximum path capacity among all the paths from source $s$ to $t$. The value $B\left( t\right)$ is the width of the node. The union of widest paths from the source to each node is a tree, which we use to order the color assignment process. We can choose any node as the source; our implementation uses the region containing the image centre.

The widest path algorithm can be implemented as a variant of Dijkstra's algorithm, building a tree outward from the source node $s$ to every node in the graph. All nodes of the graph $t \in V$ are given a tentative width value; the source node $s$ is assigned $B\left( s\right) = + \infty$ and all other nodes $v \neq s$ have $B\left( v\right) = - \infty$. A priority queue holds the nodes; at each step of the algorithm, we take the node with the highest current width from the queue and process it, stopping when the queue is empty. Suppose the node $u$ is at the top of the queue with width $B\left( u\right)$; for every outgoing edge $\left( {u, v}\right)$, we update the value of the neighbour node $v$ as follows:

$$
B\left( v\right) \leftarrow \max \{ B\left( v\right) ,\min \{ B\left( u\right), w\left( {u, v}\right) \} \} \tag{2}
$$

where $w\left( {u, v}\right)$ is the edge weight between nodes $u$ and $v$. If the value $B\left( v\right)$ was changed, node $u$ is set as the parent node and $v$ is pushed into the queue. When the algorithm completes, all non-root nodes in the graph have been assigned a single parent, thus providing a tree rooted at $s$.
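
The update rule of Eq. 2 can be sketched as a small variant of Dijkstra's algorithm with a max-priority queue. The sketch below is a minimal illustration under our own assumptions (an adjacency-list dictionary and arbitrary node labels), not the authors' implementation:

```python
import heapq
import math

def widest_path_tree(adj, source):
    """Build a widest-path tree over a region adjacency graph.

    adj maps each node to a list of (neighbour, weight) pairs.
    Returns (width, parent): width[v] is B(v), and parent encodes
    the tree rooted at the source.
    """
    width = {v: -math.inf for v in adj}
    width[source] = math.inf
    parent = {source: None}
    heap = [(-width[source], source)]  # max-heap via negated widths
    while heap:
        neg_w, u = heapq.heappop(heap)
        if -neg_w < width[u]:
            continue  # stale queue entry
        for v, w_uv in adj[u]:
            cand = min(width[u], w_uv)  # bottleneck of the path through u
            if cand > width[v]:         # Eq. 2: B(v) <- max(B(v), min(B(u), w(u,v)))
                width[v] = cand
                parent[v] = u
                heapq.heappush(heap, (-cand, v))
    return width, parent
```

Stale heap entries are simply skipped rather than updated in place, a common idiom when the priority queue does not support decrease-key.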

In our application, one possibility for edge weight is to use the difference in color values between the two regions. This would ensure that the widest path tree linked dissimilar regions, resulting in good edge preservation. However, regions of similar color could easily be divided. We want to preserve small color distances as well, so small color differences should yield a large edge weight. Distances intermediate between large and small are of the least importance. Hence, we base our edge weight on the difference from the median color distance, as follows.

We calculate the color distance across each edge in the adjacency graph; call the set of color distances $Q$, with

$$
Q = \left\{ {\Delta \left( {{c}_{i},{c}_{j}}\right) }\right\} = \left\{ {q}_{ij}\right\} ,\;i \neq j,\;{r}_{i},{r}_{j} \in R
$$

where ${c}_{i}$ and ${c}_{j}$ are the colors of adjacent regions ${r}_{i}$ and ${r}_{j}$, and $\Delta$ is the function computing the color distance. We compute the median value $\bar{q}$ of the distances in $Q$.

We also want to take into account the size of each region, so that larger regions have greater importance; we prefer that a larger region have higher priority, and thus influence the smaller regions processed afterwards, rather than the converse. Depending on the oversegmentation, this may not be necessary; we suggest a process to improve results on oversegmentations with dramatic variation in region size.

We compute for each region a factor $b$, the ratio of the region's size (in pixels) to the average region size. Then, when we traverse an edge, we use the $b$ of the destination region to determine the weight. In our implementation, we compute and store a single edge weight; there is no ambiguity about the factor $b$ because we only ever traverse a given edge in one direction, moving outward from the source node.

Figure 3: Recoloring algorithm pipeline.

To summarize: when traversing an edge, the edge weight is the distance between the edge's color difference and the median color difference, multiplied by the factor $\left( {1 + b}\right)$ for the destination region:

$$
w\left( {{r}_{i},{r}_{j}}\right) = \left( {1 + b}\right) \left| {\Delta \left( {{c}_{i},{c}_{j}}\right) - \bar{q}}\right| . \tag{3}
$$

The factor $\left( {1 + b}\right)$ takes size into account, but ensures the region's color differences can still affect the traversal order even for very small regions ($b$ near zero). Note that the function $\Delta$ depends on the color space used. A simple possibility is Euclidean distance in RGB, but more perceptually based color distances are possible. We discuss color distance metrics in Section 5.2.
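
As an illustration, the edge weight of Eq. 3 can be computed per edge as in the following sketch. Euclidean RGB distance stands in for $\Delta$, and the dictionary-based region representation is our own assumption:

```python
import statistics

def edge_weights(edges, color, area):
    """Compute the Eq. 3 weight for each adjacency edge (r_i, r_j),
    traversed from r_i to r_j.

    color maps a region id to an (R, G, B) tuple; area maps a region id
    to its size in pixels.
    """
    def delta(c1, c2):
        # Euclidean RGB distance; a simple stand-in for the metric Delta
        return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

    q = {(i, j): delta(color[i], color[j]) for (i, j) in edges}
    q_med = statistics.median(q.values())      # the median difference q-bar
    avg_area = sum(area.values()) / len(area)
    # b = destination region's size relative to the average region size
    return {(i, j): (1 + area[j] / avg_area) * abs(q[(i, j)] - q_med)
            for (i, j) in edges}
```

Both extremes (very similar and very different neighbours) land far from the median and therefore receive large weights, matching the priority scheme described above.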

### 3.2 Histogram Matching

We plan to match color differences in the output to the color differences in the input. However, the input palette can have an arbitrary set of colors, and we also want to make use of the full palette. For example, imagine a low-contrast image recolored with a palette of more varied colors. The smallest palette difference might be quite large; if so, the muted areas of the original will be matched with difference zero, resulting in loss of detail in such regions. A narrow palette recoloring a high-contrast image will have similar problems in the opposite direction.

To adapt the palette usage to the input image color distribution, we apply histogram matching to color differences. We emphasize that we are not matching the colors themselves, but the distributions of differences. Histogram matching is applied between the region color differences (the distribution of values in $Q$, computed in Section 3.1) and the pairwise color differences of the colors in the palette (call this dataset ${\Delta P}$).

The histogram matching computes a new target color difference for each graph edge; call this target ${q}^{\prime }\left( {u, v}\right)$ for the edge linking regions $u$ and $v$. The matching ensures that the distribution of values $\left\{ {q}^{\prime }\right\}$ is the same as the distribution of values in $\{ {\Delta P}\}$. The values ${q}^{\prime }$ are then used for color assignment, selecting color pairs from the palette which correspond to the same place in the distribution: medium palette differences where medium image color differences existed, small differences where the original image color differences were small, with the largest palette differences reserved for the largest differences in the original image.
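
One simple realization of this matching is a rank-based quantile mapping; the paper does not specify the exact procedure, so the sketch below is our own assumption:

```python
def match_differences(q_values, palette_diffs):
    """Map each region color difference in Q to a target difference q'
    drawn from the distribution of palette differences (Delta P), by
    matching ranks (an equal-mass quantile mapping).
    """
    sorted_p = sorted(palette_diffs)
    order = sorted(range(len(q_values)), key=lambda i: q_values[i])
    targets = [0.0] * len(q_values)
    for rank, i in enumerate(order):
        # position of this rank within the palette-difference distribution
        t = rank / max(len(q_values) - 1, 1)
        pos = t * (len(sorted_p) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(sorted_p) - 1)
        frac = pos - lo
        targets[i] = sorted_p[lo] * (1 - frac) + sorted_p[hi] * frac
    return targets
```

The smallest value in $Q$ maps to the smallest palette difference and the largest to the largest, with intermediate ranks interpolated, so the full spread of palette differences is exercised.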

Figure 4 shows the input image with average region colors and the recolored images before and after histogram matching. The bar graphs under each image show the proportion of each color in the image. We can see that the histogram matching used the palette colors more evenly, increasing contrast and highlighting more details. For example, strong edges on the leaf boundary became distinguishable from the nearby regions, and the markings on the lizard became more prominent.

Figure 4: Histogram matching result. Left to right: original image, result without histogram matching, and result using histogram matching.

### 3.3 Color Assignment

The widest path algorithm provided a tree, and the histogram matching provided palette-customized target distances for the edges. We now traverse the tree and assign a color to each region along the way. We begin by assigning the closest palette color to the tree's root node; recall that the root is the most central region in the image. At each subsequent step, we assign a color from the palette $P$ to the current region $\alpha$ based on the palette color ${p}_{\beta }$ previously assigned to the parent region $\beta$ and the target color difference ${q}^{\prime }\left( {\alpha ,\beta }\right)$. We also consider the luminance difference between regions ${r}_{\alpha }$ and ${r}_{\beta }$ so as to help maintain larger-scale intensity gradients. Recall our intention of preserving color differences: two regions with a large color difference should be assigned two very different colors, and regions with a small color difference should get very similar colors, possibly the same color. Owing to histogram matching, "large" and "small" are calibrated to the content of the particular image and palette being combined.

We impose a luminance constraint on potential palette colors, in an effort to respect the relative ordering of the regions' luminances. Suppose the luminances of two regions ${r}_{\alpha }$ and ${r}_{\beta }$ are ${L}_{\alpha }$ and ${L}_{\beta }$, where ${L}_{\alpha } < {L}_{\beta }$. We then constrain the set of eligible palette colors for region $\alpha$ such that only colors ${p}_{\alpha }$ that satisfy ${L}_{{p}_{\alpha }} < {L}_{{p}_{\beta }}$ are considered. A similar constraint is imposed if ${L}_{\alpha } > {L}_{\beta }$.

For region ${r}_{\alpha }$ and its parent ${r}_{\beta }$, we have the target edge difference ${q}^{\prime }\left( {\alpha ,\beta }\right)$. Denote by ${p}_{\beta }$ the palette color already assigned to the parent region ${r}_{\beta }$. We choose the palette color ${p}_{\alpha }$ for region ${r}_{\alpha }$ so as to minimize the distance $D$:

$$
D = \left| {{q}^{\prime }\left( {\alpha ,\beta }\right) - \Delta \left( {{p}_{\alpha },{p}_{\beta }}\right) }\right| \tag{4}
$$

where $\Delta$ is the distance metric between two colors. The only colors considered for ${p}_{\alpha }$ are those that satisfy the luminance constraint.

When the parent region was assigned the lowest- or highest-luminance color in the palette, there may be no palette colors satisfying the luminance constraint. In such cases, the constraint is ignored and all palette colors are considered.
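
A minimal sketch of this selection step follows; the scalar grayscale "palette" and the `lum_order` encoding of the regions' luminance ordering are our own simplifications for illustration:

```python
def assign_color(palette, parent_color, target_diff, lum_order, delta, lum):
    """Choose the palette color whose distance to the parent's assigned
    color best matches the target difference q' (Eq. 4), subject to the
    luminance-ordering constraint.

    lum_order is -1 if the region is darker than its parent, +1 if lighter;
    delta and lum are the color-distance and luminance functions.
    """
    # keep only palette colors on the correct side of the parent's luminance
    eligible = [p for p in palette
                if (lum(p) - lum(parent_color)) * lum_order > 0]
    if not eligible:
        # at the palette's luminance extremes, drop the constraint
        eligible = list(palette)
    return min(eligible,
               key=lambda p: abs(target_diff - delta(p, parent_color)))
```

The constraint filter runs first, so the Eq. 4 minimization only ever sees colors consistent with the regions' relative luminance, falling back to the whole palette when the filter leaves nothing.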

Since the source region has no parent, the above process cannot be used to find its color. Instead, we assign the closest color from the palette, as determined by the difference metric $\Delta$. The source region itself is the region containing the centre of the image; while the output is weakly dependent on the choice of starting region, we do not view the starting region as a critical decision. Figure 5 shows some examples of varied outcomes from moving the starting region.

Figure 5: Changing the source region locations. The starting region is indicated by a dot.

## 4 RESULTS AND DISCUSSION

In this section, we present some results. Figure 6 shows a variety of recolored images generated by our algorithm using various palettes. We succeeded in maintaining strong edges, and objects in the recolored abstractions remain recognizable. Our algorithm retains textures and produces vivid recolored images by selecting varied colors from the palette. It assigns the same colors over flat regions and distinct colors to illustrate structures. We ran our algorithm on a variety of images with different textures and contrasts. We obtained most of our palettes from the website COLRD (http://colrd.com/); others we created manually by sampling from colorful images.

In Figure 6, we present a set of examples from our recoloring algorithm, which were generated with four different palettes. From left to right, the columns show the original image, followed by recolored images using different palettes. We chose images presenting different features. The delicate features and textures in the abstractions stay visible after recoloring despite the input photographs having been radically altered by the recoloring.

In the starfish image, the structure and the patterns on the arms become visible. The uniform colors of the background turn into a vivid splash of colors, emphasizing the texture of the terrain. The algorithm has chosen the darkest colors to assign to the shadows and the lightest ones to the surface of the creature.

The Venice canal is a crowded image composed of soft textures and structures with hard edges. The algorithm is able to preserve recognizable objects such as the boats and windows. Even tiny letters on the wall and pedestrians on the canal's side are visible. The recoloring process preserved the buildings' rigid structures; meanwhile, it captured shadows and the water's soft movements. In capturing such features, adopting a highly irregular oversegmentation was necessary.

The lizard image is an example of a low contrast image with textured areas covered by dull colors. The algorithm highlighted the textures by assigning wild colors to the homogeneous regions on the leaf. At the same time, the substantial edges like the lizard's body patterns and the leaf edges are preserved naturally by our algorithm.

In the next example, we used a high contrast image as input. The algorithm assigned the darkest colors from each palette to the coat of the man and separated it from the background using a very light color. Further, the small features on the face and the Chinese characters are mostly readable.
The rust image contains different types of texture on the wall and the grass, plus soft textureless areas on the machinery. The brick patterns on the wall, exaggerated by the colorful palettes, make the final images more interesting than the original flat image. The high-frequency details of the grass are retained. The smooth transition of colors in the top right of the image illustrates the shadows of the leaves.
We demonstrated strong edge preservation in all examples. Additionally, the image textures are preserved, and the palette colors are used uniformly to maintain good contrast.
### 4.1 Comparison with Naïve Methods
Figure 7 gives a comparison between our method and two naïve alternatives. The first column shows the input image. The second column shows the input segments recolored by replacing each segment's color with the palette color closest to the segment's average color. The third column shows a random assignment of palette colors to segments. The final column shows our method. Recoloring with closest palette colors preserves some image content, but the result shows large regions of constant color; many of the palette colors are underused, an issue that can worsen when there is a significant mismatch between the original image color distribution and the palette, as in the upper example. Random assignment provides an even distribution of palette colors, but the image content can become unrecognizable for highly textured images, as in the lower example. Our method uses the palette more effectively, showing local details and large-scale content and exercising the full range of available colors.
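The closest-color baseline in the second column can be sketched in a few lines; the segment averages and palette below are illustrative placeholders, not data from the paper.

```python
# Naïve baseline: assign each segment the palette color nearest
# (in RGB Euclidean distance) to the segment's average color.
def closest_color_recolor(segment_averages, palette):
    def dist2(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2))
    return [min(palette, key=lambda p: dist2(avg, p))
            for avg in segment_averages]

# Illustrative data: three segment averages, a two-color palette.
averages = [(250, 10, 10), (240, 30, 20), (10, 10, 250)]
palette = [(255, 0, 0), (0, 0, 255)]
print(closest_color_recolor(averages, palette))
# → [(255, 0, 0), (255, 0, 0), (0, 0, 255)]
```

As the text observes, nearby segment averages collapse onto the same palette entry (the first two segments both map to red), which is why large flat regions of constant color appear.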
### 4.2 Comparison with ColorArt
In this section, we compare our recoloring method with ColorArt, an optimization-based recoloring method for graphic arts [3]. This method assigns colors to regions by solving a graph matching problem over color groups in the reference and the template image. In searching for a reference image, this algorithm uses the same number of color groups as in the template image.
Figure 8 shows the images generated by the ColorArt method on the right and ours in the middle, both using the sunset palette. We created a colorful leaf surrounded by a light background, as in the input image, showing that the algorithm respects changes in lightness. Moreover, the assignment of different colors on the leaf produces an interesting texture. The leaf image generated by the ColorArt method has reversed the image tones. In the sketch image, we preserved the edges and show a recognizable face. In contrast, the ColorArt algorithm had difficulty with the edges and the gradual gradients, resulting in a somewhat incoherent output.
### 4.3 Recoloring with SLIC0 Oversegmentation
Our recoloring algorithm does not make any assumptions about the input oversegmentation. Figure 9 shows results from an oversegmentation produced by SLIC0 [1]. The starfish and owl images have approximately 2000 and 5000 segments, respectively.
Note that more irregular regions can better represent complex image contours and textures, allowing the recolored abstractions to better display the image content. In the starfish image, the structures and shadows are represented by distinct colors that contrast the object against the background. Strong edges, such as the arms of the starfish, are preserved; however, thin features are not captured by SLIC0's uniform regions, and the background terrain does not present any significant information. In the owl image, the small regions on the chest convey the feather textures, while definite regions such as the dark eyes keep their well-defined structures. Given a suitably detailed oversegmentation, we can produce appealing results.

Figure 6: Region recoloring results. Top row: visualization of palettes used. Images, top to bottom: starfish, Venice, lizard, lanterns, and rust.

Figure 7: A comparison with naïve recolorings. Left to right: the original image, the results from the naïve closest-color method, random recoloring, and our method.

Figure 9: Recoloring with SLIC0 oversegmentation.
### 4.4 Performance
We ran our algorithm on an Intel Core i7-6700 CPU at 3.4 GHz with 16.0 GB of RAM. The processing time increases with the number of regions and edges in the graph. The time complexity of single-source widest path is $\mathcal{O}(m + n \log n)$ for $m$ edges and $n$ vertices, using a heap-based priority queue in a Dijkstra search.
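For reference, a minimal sketch of single-source widest path with the Dijkstra-style heap search mentioned above; the tiny example graph is invented for illustration, and an undirected region-adjacency graph would list each edge in both directions.

```python
import heapq

def widest_paths(adj, source):
    """Maximum-bottleneck (widest-path) capacity from source to every vertex.

    adj: {vertex: [(neighbor, edge_weight), ...]}
    Returns {vertex: bottleneck capacity of the widest path from source}.
    """
    width = {source: float("inf")}
    heap = [(-width[source], source)]  # max-heap via negated widths
    done = set()
    while heap:
        w, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, cap in adj.get(u, []):
            cand = min(-w, cap)  # bottleneck along the path through u
            if cand > width.get(v, 0):
                width[v] = cand
                heapq.heappush(heap, (-cand, v))
    return width

# Widest path 0->2 goes through 1 (bottleneck 4), not the direct
# edge of capacity 2.
adj = {0: [(1, 4), (2, 2)], 1: [(2, 5)], 2: []}
print(widest_paths(adj, 0))  # → {0: inf, 1: 4, 2: 4}
```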
Table 1 shows the timing for creating the trees and color assignments of different images. For small images like the starfish, containing about 1.4K regions, the recoloring algorithm takes about 0.007 seconds to construct the tree, while it takes about 0.3 s for larger images such as rust, with 7.2K regions. With a palette of 10 colors, the color assignment takes 0.05 s and 1.2 s to recolor the starfish and rust images, respectively. The color assignment takes longer for larger palettes and for images with more regions. We show the timing of tree creation and color assignment for all images in the gallery. The pre-processing steps of creating the graph adjacency list and histogram matching are also presented in the table.
### 4.5 Limitations
Although in our experience our method works well for most combinations of image and palette, there are cases where the output is unappealing. When two similar regions are not neighbours, they may receive different colors; e.g., a sky area may be broken up by branches, and different parts of the sky could be colored differently. Even adjacent regions may not receive similar colors if their average colors differ, introducing spurious edges into regions with slowly changing colors such as gradients or smooth surfaces. Out-of-focus backgrounds and faces are common examples producing such effects.

Figure 8: Comparison with ColorArt. Left: input; middle: our results; right: ColorArt results.
Figure 10 shows two failure examples. In the top middle image, the face and background regions receive very different colors, with an unappealing outcome. However, a palette containing several similar colors allows the algorithm to better match the image gradients, with dramatically better results. In the bottom images, large regions of different colors appear in the sky, which does not look attractive. Because the original regions have different average colors, our algorithm is likely to separate them regardless of the palette.
Our algorithm is at present strictly automated with no provision for direct user control beyond choice of palette and parameter settings. While these parameters provide considerable scope for generating variant recolorings, so that a user would have a wide range of results to choose from, direct control is not yet implemented. One might imagine annotating the image to enforce specific color selections or linking regions to ensure that their output colors are always the same. While it would be straightforward to add some control of this type, we have not yet implemented such features.

Figure 10: Failure cases.
## 5 VARIATIONS
In this section, we explore some variations of our region recoloring algorithm. We apply a blending mechanism to create recolored images with smooth color transitions. Then, we demonstrate that different color spaces and distance metrics can generate various results from one palette.
| Image | Tree creation | Color assignment | Graph | Total | #Vertices |
|----------|--------|-------|--------|--------|------|
| lanterns | 0.003s | 0.02s | 0.064s | 0.087s | 1K |
| starfish | 0.007s | 0.05s | 0.07s | 0.127s | 1.4K |
| lizard | 0.015s | 0.08s | 0.1s | 0.195s | 1.9K |
| rust | 0.3s | 1.2s | 0.9s | 2.4s | 7.2K |
| Venice | 0.3s | 1.4s | 0.9s | 2.6s | 7.7K |
Table 1: Timing results for images with varying numbers of regions.
### 5.1 Blending
In transferring the colors, we have intended to strictly preserve the palette and not add colors. However, we can also blend the colors, giving a more painterly style and adding a sense of depth to a flattened abstraction. Postprocessing the recolored image introduces new intermediate shades. We suggest cross-filtering the recolored image with an edge-preserving filter [14]. This process smooths the areas away from edges while retaining the colors close to strong edges. In addition, blending across the jagged region boundaries smooths regions which originally possessed similar colors.
The cross-filtering mask size affects the outcome. Larger masks produce a stronger blending effect; small features are smoothed out, and the output image becomes blurry in regions lacking edges. Figure 11 illustrates examples of blending using masks of sizes $n = 20$, $100$, and $300$. Blending with $n = 20$ only slightly modifies the image; for larger masks, the blending is more apparent. At $n = 300$ we can see definite blurring in originally smooth areas, although blurring does not happen across original edges. Using a gray palette, we can obtain an effect resembling a charcoal drawing with larger masks.
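The cross-filtering idea can be illustrated with a deliberately simplified range-only filter on a 1D grayscale signal: the weights come from the original (guide) image, so recolored values blend freely within smooth guide regions but not across guide edges. This is a toy stand-in for the edge-preserving filter of [14], not its actual implementation; the signal values are invented.

```python
import math

def cross_filter_1d(recolored, guide, radius=1, sigma=5.0):
    """Smooth `recolored` using similarity weights taken from `guide`."""
    out = []
    for i in range(len(recolored)):
        acc = wsum = 0.0
        for j in range(max(0, i - radius), min(len(recolored), i + radius + 1)):
            # Weight by guide-image similarity: samples across a guide
            # edge get near-zero weight, so the edge is preserved.
            w = math.exp(-((guide[j] - guide[i]) ** 2) / (2 * sigma ** 2))
            acc += w * recolored[j]
            wsum += w
        out.append(acc / wsum)
    return out

# A hard step in the guide keeps the step in the output sharp,
# while values on each side of the step are averaged together.
guide = [0, 0, 0, 100, 100, 100]
recolored = [10, 30, 20, 200, 220, 210]
print([round(v) for v in cross_filter_1d(recolored, guide)])
# → [20, 20, 25, 210, 210, 215]
```

A larger `radius` plays the role of the mask size $n$ above: more neighbors participate, so smooth areas blur more while guide edges still block blending.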
### 5.2 Color Spaces and Distance Metrics
We can employ different functions for our color distance function $\Delta$ . Different choices of color space and distance metric can affect the colorization results. Changing the distance metric will cause both the widest-path tree and the color assignment to change.
We have experimented with computing color distances with the Euclidean distance in RGB as well as with the perceptually uniform CIE94, CIEDE2000, CIE76, and CMC colorimetric distances [21, 27].
Both CIE94 and CIEDE2000 are defined in the LCh color space; however, the CIE94 differences in lightness, chroma, and hue are calculated from Lab coordinates. CMC is a quasimetric designed around the LCh color model. The CIE76 metric uses Euclidean distance in Lab space.
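As a concrete reference point for the simplest of these, CIE76 is just Euclidean distance after converting to Lab; a self-contained sketch assuming sRGB input and the D65 white point, using the standard published conversion constants:

```python
import math

def srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIE Lab (D65 white point)."""
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    # Linear RGB -> XYZ, normalized by the D65 reference white.
    x = (0.4124 * r + 0.3576 * g + 0.1805 * b) / 0.95047
    y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 1.00000
    z = (0.0193 * r + 0.1192 * g + 0.9505 * b) / 1.08883
    f = lambda t: t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e76(rgb1, rgb2):
    """CIE76 color difference: Euclidean distance in Lab."""
    return math.sqrt(sum((p - q) ** 2
                         for p, q in zip(srgb_to_lab(rgb1), srgb_to_lab(rgb2))))

print(round(delta_e76((0, 0, 0), (255, 255, 255))))  # black vs. white: ~100
```

The other metrics (CIE94, CIEDE2000, CMC) add weighting terms on the lightness, chroma, and hue differences on top of this same conversion.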
In Figure 12, we show different outcomes from different metrics using two palettes. We can observe strong edge and contrast preservation, an apparent result of the perceptually uniform metrics. More importantly, each metric gives a unique recolorization of the abstraction, which allows a user to choose from different output images. We can get interesting results from each metric. However, in our judgement, the more attractive results are obtained from the Euclidean and CMC metrics; the delicate features and image contrast are maintained, and objects are generally preserved. The CIE94 and CIEDE2000 metrics are also effective, but we found that the CIE76 metric rarely creates interesting results.
## 6 CONCLUSIONS AND FUTURE WORK
In this paper, we presented a graph-based recoloring method that takes an oversegmented image and a palette as input, and then assigns colors to each region. The result uses the palette colors to portray the image content, but without attempting to match the input colors. Designing our algorithm with the widest path allowed us to maintain the image contrast and objects' recognizability. We demonstrated our results with different palettes. We achieved vivid recolorization effects, effective for most combinations of input images and palettes.
In the future, we would like to investigate non-convex palette color augmentation: adding new colors that extend an input palette while matching the palette's theme. We would like to extend the color assignment to consider color harmony, scoring based on the compatibility of colors and thus affecting the ability of certain colors to be neighbors. Furthermore, we would like to be able to recolor smoothly changing regions, like the sky, more uniformly. Adding elements of user control would allow for better cooperation between the present automated method and the user's intent.
## ACKNOWLEDGMENTS
The authors thank ...
## REFERENCES
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274-2282, 2012.
[2] E. Aharoni-Mack, Y. Shambik, and D. Lischinski. Pigment-Based Recoloring of Watercolor Paintings. In H. Winnemöller and L. Bartram, eds., Non-Photorealistic Animation and Rendering. Association for Computing Machinery, Inc (ACM), 2017. doi: 10.1145/3092919.3092926
[3] M. Bohra and V. Gandhi. ColorArt: Suggesting colorizations for graphic arts using optimal color-graph matching. 2020.
[4] W. Casaca, M. Colnago, and L. G. Nonato. Interactive image colorization using Laplacian coordinates. In G. Azzopardi and N. Petkov, eds., Computer Analysis of Images and Patterns, pp. 675-686. Springer International Publishing, Cham, 2015.
[5] H. Chang, O. Fried, Y. Liu, S. DiVerdi, and A. Finkelstein. Palette-based photo recoloring. ACM Trans. Graph., 34(4):139:1-139:11, July 2015. doi: 10.1145/2766978
[6] S. DiVerdi, J. Lu, J. Echevarria, and M. Shugrina. Generating Playful Palettes from Images. In C. S. Kaplan, A. Forbes, and S. DiVerdi, eds., ACM/EG Expressive Symposium. The Eurographics Association, 2019. doi: 10.2312/exp.20191078
[7] L. Fang, J. Wang, G. Lu, D. Zhang, and J. Fu. Hand-drawn grayscale image colorful colorization based on natural image. The Visual Computer, 35(11):1667-1681, Nov 2019. doi: 10.1007/s00371-018-1613-8
[8] G. R. Greenfield and D. H. House. Image recoloring induced by palette color associations. Journal of WSCG, 11:189-196, 2003.
[9] P. Kubelka. New contributions to the optics of intensely light-scattering materials. Part I. J. Opt. Soc. Am., 38(5):448-457, May 1948. doi: 10.1364/JOSA.38.000448
[10] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. In ACM SIGGRAPH 2004 Papers, SIGGRAPH '04, pp. 689-694. ACM, New York, NY, USA, 2004. doi: 10.1145/1186562.1015780
[11] S. Lin, D. Ritchie, M. Fisher, and P. Hanrahan. Probabilistic color-by-numbers: Suggesting pattern colorizations using factor graphs. In ACM SIGGRAPH 2013 papers, SIGGRAPH '13, 2013.
[12] J. Lu, S. DiVerdi, W. A. Chen, C. Barnes, and A. Finkelstein. Realpig-ment: Paint compositing by example. In Proceedings of the Workshop on Non-Photorealistic Animation and Rendering, NPAR '14, pp. 21-30. ACM, New York, NY, USA, 2014. doi: 10.1145/2630397.2630401

Figure 11: Postprocessing with an edge-aware filter. Left to right: original image; cross-filtered images with mask sizes 20, 100, and 300.

Figure 12: Colorization with different distance measures.
[13] J. McCann and N. S. Pollard. Soft stacking. In Computer Graphics Forum, vol. 31, pp. 469-478, 2012.
[14] D. Mould. Texture-preserving abstraction. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, NPAR '12, pp. 75-82. Eurographics Association, Goslar Germany, Germany, 2012.
[15] L. Neumann and A. Neumann. Color style transfer techniques using hue, lightness and saturation histogram matching. In Proceedings of the First Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging, Computational Aesthetics' 05, pp. 111-122. Eurographics Association, 2005.
[16] P. O'Donovan, A. Agarwala, and A. Hertzmann. Color Compatibility From Large Datasets. ACM Transactions on Graphics, 30(4), 2011.
[17] M. Pollack. Letter to the editor-the maximum capacity through a network. Operations Research, 8(5):733-736, Oct. 1960.
[18] Y. Qu, T.-T. Wong, and P.-A. Heng. Manga colorization. ACM Trans. Graph., 25(3):1214-1220, July 2006.
[19] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley. Color transfer between images. IEEE Comput. Graph. Appl., 21(5):34-41, Sept. 2001.
[20] D. L. Ruderman, T. W. Cronin, and C.-C. Chiao. Statistics of cone responses to natural images: Implications for visual coding. Journal of the Optical Society of America A, 15:2036-2045, 1998.
[21] A. T. Sanda Mahama, A. Dossa, and P. Gouton. Choice of distance metrics for RGB color image analysis. Electronic Imaging, 2016:1-4, 2016.
[22] L. Shapira, A. Shamir, and D. Cohen-Or. Image appearance exploration by model-based navigation. Comput. Graph. Forum, 28:629-638, 2009.
[23] M. Shugrina, J. Lu, and S. Diverdi. Playful palette: An interactive parametric color mixer for artists. ACM Trans. Graph., 36(4):61:1-61:10, July 2017. doi: 10.1145/3072959.3073690
[24] D. Sýkora, M. Ben-Chen, M. Cadík, B. Whited, and M. Simmons. TexToons: practical texture mapping for hand-drawn cartoon animations. In NPAR, 2011.
[25] D. Sýkora, J. Dingliana, and S. Collins. Lazybrush: Flexible painting tool for hand-drawn cartoons. Computer Graphics Forum, 28(2):599-608, 2009.
[26] J. Tan, J.-M. Lien, and Y. Gingold. Decomposing images into layers via RGB-space geometry. ACM Trans. Graph., 36(1), Nov. 2016. doi: 10.1145/2988229
[27] Wikipedia contributors. Color difference - Wikipedia, the free encyclopedia, 2019. [Online; accessed 17-September-2019].
[28] J. Xu and C. S. Kaplan. Artistic thresholding. In Proceedings of the 6th International Symposium on Non-photorealistic Animation and Rendering, NPAR '08, pp. 39-47. ACM, New York, NY, USA, 2008. doi: 10.1145/1377980.1377990
[29] L. Yatziv and G. Sapiro. Fast image and video colorization using chrominance blending. Trans. Img. Proc., 15(5):1120-1129, May 2006. doi: 10.1109/TIP.2005.864231
[30] K. Zeng, M. Zhao, C. Xiong, and S.-C. Zhu. From image parsing to painterly rendering. ACM Trans. Graph., 29(1):2:1-2:11, Dec. 2009. doi: 10.1145/1640443.1640445
[31] M. Zhao and S.-C. Zhu. Sisley the abstract painter. In Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, NPAR '10, pp. 99-107. ACM, New York, NY, USA, 2010. doi: 10.1145/1809939.1809951
papers/Graphics_Interface/Graphics_Interface 2021/Graphics_Interface 2021 Conference/Graphics_Interface 2021 Conference Second_Cycle/z66fCE6_Ja0/Initial_manuscript_tex/Initial_manuscript.tex
§ ARTISTIC RECOLORING OF IMAGE OVERSEGMENTATIONS
<graphics>
Figure 1: Recolorization of region-based abstraction. The figure shows the original image on the left and the recolored images produced by our proposed method using three different palettes.
§ ABSTRACT
We propose a method to assign vivid colors to regions of an oversegmented image. We restrict the output colors to those found in an input palette, and seek to preserve the recognizability of structure in the image. Our strategy is to match the color distances between the colors of adjacent regions with the color differences between the assigned palette colors; thus, assigned colors may be very far from the original colors, but both large local differences (edges) and small ones (uniform areas) are maintained. We use the widest path algorithm on a graph-based structure to set a priority order for recoloring regions, and traverse the resulting tree to assign colors. Our method produces vivid colorizations of region-based abstraction using arbitrary palettes. We demonstrate a set of stylizations that can be generated by our algorithm.
Keywords: Non-photorealistic rendering, Image stylization, Recoloring, Abstraction.
Index Terms: I.3.3 [Picture/Image Generation]-; I.4.6 [Segmentation]
§ 1 INTRODUCTION
Color plays an important role in image aesthetics. In representational art, artists employ colors that match the perceived colors of objects in the depicted scene. Conversely, abstraction provides more freedom to manipulate colors. Figure 2 shows a modern vector illustration of an owl, an example of Fauvism by André Derain, and an abstraction of the Eiffel Tower by Robert Delaunay. The artists have expressed the image content through vivid colors carefully assigned to various image regions. We aim at a method that can generate colorful images, recoloring an image using an arbitrary input palette. In this paper, we describe a method for recoloring an image based on a subdivision of the image into distinct segments, recoloring each segment based on its relationship with its neighbours.
Our goal is to present different recoloring possibilities by assigning colors to each region of an oversegmented input image. We aim to maintain the image contrast and preserve strong edges so that the content of the scene remains recognizable. It is important to convey textures and small features, often too delicate to be preserved by existing abstraction methods. We would like to be able to create wild and vivid abstractions through use of unusual palettes.
Manual recoloring of oversegmented images would be tedious; the images contain hundreds or thousands of segments; clicking on every segment would take a long time, even leaving aside the cognitive and interaction overhead of making selections from the palette. We provide an automatic assignment of colors to regions. The assignment can be used as is, could be used in a fast manual assessment loop (for example, if an artist wanted to choose a suitable palette for coloring a scene), or could be a good starting point for a semi-automatic approach where a user made minor modifications to the automated results.
In this paper, we present an automatic recoloring approach for a region-based abstraction. The input is a desired palette and an oversegmented image. The method assigns a color from the palette to each region; it is based on the widest path algorithm [17], which organizes the regions into a tree based on the weight of the edges connecting them. We use color differences between adjacent regions both to order the regions and to select colors, trying to match the difference magnitude between the assigned palette colors and the original region colors. The use of color differences allows structures in the image to remain recognizable despite breaking the link between the original and depicted color.
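To make the difference-matching idea concrete, here is a toy grayscale sketch: regions are visited in a fixed tree order (standing in for the widest-path tree built with [17]), and each region takes the palette entry whose distance from its parent's assigned color best matches the original region difference. The region values, ordering, and palette below are invented for illustration; the actual method works with full color distances rather than gray levels.

```python
def assign_colors(avg, order, parent, palette):
    """Greedy difference-matching assignment along a fixed tree order.

    avg: original average gray value per region
    order: regions in tree-traversal order (root first)
    parent: tree parent of each non-root region
    palette: available gray levels
    """
    assigned = {order[0]: palette[0]}  # arbitrary seed color for the root
    for r in order[1:]:
        p = assigned[parent[r]]
        target = abs(avg[r] - avg[parent[r]])  # original local contrast
        # Pick the palette color whose contrast with the parent's
        # assigned color is closest to the original contrast.
        assigned[r] = min(palette, key=lambda c: abs(abs(c - p) - target))
    return assigned

avg = {0: 10, 1: 12, 2: 200}       # regions 0 and 1 similar; 2 very different
order, parent = [0, 1, 2], {1: 0, 2: 1}
palette = [0, 50, 255]
print(assign_colors(avg, order, parent, palette))  # → {0: 0, 1: 0, 2: 255}
```

Similar neighbours thus share a palette color while strongly contrasting neighbours receive distant entries, which is the property that keeps edges visible after recoloring.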
Our main contributions are as follows:
* We designed a recoloring method for an oversegmented image, creating multiple abstractions colored with just one palette. Our method creates wild and high contrast images.
* Various styles can be created by our method. We experiment with color blending between regions and produce smooth images. By reducing the palette colors, we simplify the recolored images. Moreover, we generate new colorings from a palette by applying different metrics and color spaces.
The remainder of the paper is organized as follows. In Section 2, we briefly present related work. We describe our algorithm in Section 3. Section 4 shows results and provides some evaluation, and Section 5 gives some possible variations of the method. Finally, we conclude in Section 6 and suggest directions for future work.
<graphics>
Figure 2: Colorful representational images. A modern vector illustration of an owl; a Fauvist painting by André Derain; an abstraction of the Eiffel Tower by Robert Delaunay.
§ 2 PREVIOUS WORK
Although there is an existing body of work on recoloring photographs [5, 8, 10, 12, 15, 19, 22, 26, 29] and researchers have investigated colorization of non-photorealistic renderings [2, 7, 18, 23, 25, 30, 31], there is room for further exploration. Existing recoloring methods rarely address region-based abstractions. The closest approach, by Xu and Kaplan [28], used optimization to assign black or white to each region of an input image. On the other hand, there are a few approaches to the pattern colorization problem, which aim to colorize graphic patterns [3, 11]. Bohra and Gandhi [3] posed the colorization problem as an optimal graph matching problem over color groups in a reference and a grayscale template image. In generating playful palettes [6], the approximation results converge for a number of blobs beyond six or seven. Similarly, the ColorArt method of Bohra and Gandhi [3] enforces an equal number of colors in the reference palette and the template image, and cannot find matches otherwise.
Below, we review some of the previous research on recoloring methods in NPR and color palette selection. These approaches can be broadly classified into example-based recoloring and palette-based recoloring.
§ EXAMPLE BASED RECOLORING (COLOR TRANSFER)
Recoloring methods were first proposed by Reinhard et al. [19], where the colors of one image were transferred to another. They converted the RGB signals to Ruderman et al.'s [20] perception-based color space $L\alpha\beta$. To produce new colors, they shifted and scaled the $L\alpha\beta$ channels using simple statistics.
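The per-channel statistics transfer at the heart of [19] is compact enough to sketch; the decorrelated color space is omitted here, and the operation is shown on a single channel of made-up values.

```python
import statistics

def transfer_channel(source, target):
    """Shift/scale `source` values to match `target`'s mean and std dev."""
    mu_s, sd_s = statistics.mean(source), statistics.pstdev(source)
    mu_t, sd_t = statistics.mean(target), statistics.pstdev(target)
    return [(v - mu_s) / sd_s * sd_t + mu_t for v in source]

source = [10.0, 20.0, 30.0]     # mean 20, twice as narrow as target
target = [100.0, 120.0, 140.0]  # mean 120
out = transfer_channel(source, target)
print([round(v, 6) for v in out])  # → [100.0, 120.0, 140.0]
```

Running this on each channel of the $L\alpha\beta$ representation, with `target` statistics taken from a reference image, reproduces the basic Reinhard-style transfer.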
In the color transfer and color style transfer literature, Neumann and Neumann [15] extracted palette colors from an arbitrary target image and applied 3D histogram matching. They attempt to keep the original hues after style transformation, so that all colors with the same hue in the original image have the same hue after the matching step. They performed matching transformations on 3D cumulative distribution functions belonging to the original and style images. However, due to the lack of spatial information, histogram matching by itself appears insufficient for true style cloning; in particular, problems are caused by unpredictable noise and gradient effects. They suggest using image segmentation and 3D smoothing of the color histogram to improve the result.
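The one-dimensional analogue of such CDF-based histogram matching can be sketched as a rank lookup; this is a generic illustration of the technique on invented values, not the 3D method of [15].

```python
def match_histogram_1d(source, template):
    """Remap `source` so its value distribution follows `template`'s."""
    t_sorted = sorted(template)
    # Rank each source sample, then look up the template value at the
    # corresponding quantile (an empirical CDF match).
    order = sorted(range(len(source)), key=lambda i: source[i])
    out = [0.0] * len(source)
    m = len(t_sorted)
    for rank, i in enumerate(order):
        q = rank / max(len(source) - 1, 1)
        out[i] = t_sorted[min(round(q * (m - 1)), m - 1)]
    return out

print(match_histogram_1d([5, 1, 3], [10, 20, 30]))  # → [30, 10, 20]
```

The spatial-information problem noted above is visible even here: the mapping depends only on value ranks, so two distant pixels with the same value always receive the same output.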
Levin et al. [10] introduced an interactive colorization method for black and white images. They used a quadratic cost function derived from the color differences between a pixel and its weighted-average neighborhood colors. By scribbling, the user indicates the desired color in the interior of a region instead of tracing out its precise boundary; the colors then automatically propagate to the remaining pixels in the image sequence. However, it sometimes fails at strong edges, due to a sensitive scale parameter. Inspired by Levin et al. [10], Yatziv and Sapiro [29] proposed a fast colorization of images based on luminance-weighted chrominance blending and fast intrinsic geodesic distance computations. Sykora et al. [25] developed a tool called Lazybrush for the colorization of cartoons, which integrates textures into the images to create 3D-like effects [24], while Casaca et al. [4] used Laplacian coordinates for image division and a color theme for fast colorization. Fang et al. [7] proposed an interactive optimization method for colorization of hand-drawn grayscale images. To maintain smooth color transitions and control color overflow in textured areas, they used a smooth feature map to adjust the feature vectors.
A few works in NPR composite colors for recoloring. The compositing can be applied using alpha blending [13] or the Kubelka-Munk (KM) equation [9]. For a paint-like effect, NPR researchers have mostly worked with the physics-based Kubelka-Munk equation to predict the reflectance of a layer of pigment. Some researchers have tried to mimic artists' decisions in mixing colors to generate palettes. For example, a data-driven color compositing framework by Lu et al. [12] derived three models based on optimized alpha blending, RBF interpolation, and KM optimization to improve the prediction of composited colors. Later, the KM pigment-based model was used for recoloring styles such as watercolor painting [2]. Compositing with the KM model leaves traces of overlapping stroke layers, which can produce near-natural painting effects.
§ PALETTE BASED RECOLORING
The interest in colorization and recoloring methods, opened up new research ideas on color palettes, rather than simple approaches like averaging colors of the active regions [8]. For many recoloring methods, usually a user would scribble each region of the image with a color, and the algorithm would do the rest of the work. Early methods to select the color palettes used Gaussian mixture models or K-means to cluster the image pixels. Chang et al. [5] introduced a photo-recoloring method by user-modified palettes. To select the color palettes, they compute the kmeans clustering on image colors, then discard the black entry. Tan et al. [26] proposed a technique to decompose an image into layers to extract the palette colors. Each layer of decomposition represents a coat of paint of a single color applied with varying opacity throughout the image. To determine a color palette capable of reproducing the image, they analyzed the image in RGB-space geometrically in a simplified convex hull.
More recent methods for generating palettes offer more flexibility for editing, such as Playful Palette [23], a set of blobs of color that blend together to create gradients and gamuts; the editable palette keeps a history of previous palettes. DiVerdi et al. [6] also proposed an approximation of image colors based on the Playful Palette. In this technique, an objective function within an optimization framework minimizes the distance between the original image and the one recolored with palette colors, based on a self-organizing map. The approximation algorithm is an order of magnitude faster than Playful Palette [23], though the quality is lower due to small amounts of shrinkage caused by both the self-organizing map and the clustering step.
A limited number of works in the literature assign colors to regions. For example, Qu et al. [18] proposed a colorization technique for black-and-white manga using the Gabor wavelet filter. A user scribbles on the drawing to connect the regions; the algorithm then assigns colors to different hatching, halftoning, and screening patterns. Vector-art algorithms that recolor distinct regions are more comparable to our recoloring method. Xu and Kaplan introduced artistic thresholding [28], in which a graph data structure is built over the segmentation of a source image. They employed an energy function to measure the quality of different black-and-white colorings of the segments; however, their method does not capture high-level features that cross through the foreground objects. Lin et al. [11] proposed a palette-based recoloring method with a probabilistic model. They learn and predict the distribution of properties such as saturation, lightness, and contrast for individual regions and neighboring regions, then score pattern colorings using the predicted distributions and the color compatibility model of O'Donovan et al. [16]. Bohra and Gandhi [3] proposed an exemplar-based colorization algorithm for grayscale graphic arts that works from a reference image using a color graph and a composition matching method. They retrieve palettes using the spatial features of the input image, aiming to preserve the artist's intent in the composition of different colors and the spatial adjacency between these colors in the image.
§ 3 RECOLORING ALGORITHM
We present a recoloring algorithm that automatically assigns colors to regions of an oversegmented image. Our system takes as input a set of regions and a palette containing a set of colors, and assigns a color to each region. The recolored image should convey recognizable objects. Edges are essential to the visibility of structures: neighboring regions will be assigned distinct colors to express an edge, and regions of similar colors will be assigned similar colors. The human visual system is sensitive to brightness contrast; to help preserve contrast in our recoloring, we take into account the regions' relative luminance changes when selecting region colors.
We take the strategy of emphasizing color differences between regions, seeking to match the original color distance value between adjacent regions without preserving the colors themselves. Neighbouring regions with large differences will be assigned distant colors, preserving the boundary, while similar-colored regions will be assigned similar output colors or even the same color.
We use a graph structure to organize the segmented image, where each region is a node and edges link adjacent regions. To simplify the color decisions, we will construct a tree over the graph, with a tree traversal assigning colors to nodes based on the decision for the parent node. We therefore assign weights to the edges reflecting their priority order, with large regions, regions of very similar color, and regions of very different color receiving high priority. Small regions and regions with intermediate color differences receive lower priority. Once weights have been assigned, we find the tree within the graph that maximizes the weight of the minimum-weight edges, a construction that corresponds to the widest path problem.
Our algorithm has two main steps. First, we create a tree by applying the widest path algorithm [17] on the region graph. Then we assign colors to regions by traversing the tree beginning from its root. For each region in the graph, we choose the color from the palette that best matches the color difference with its parent. Before starting, we apply histogram matching between the regions' color differences and the palette color differences. The histogram matching allows us to best convey the image content while using the full extent of an arbitrary input palette, even one that has a color distribution very different from that of the input image.
We show a flowchart of our recoloring approach in Figure 3.
The input is (a) an oversegmented image and (b) a palette to use in the recoloring. We compute (c) the adjacency graph of the segments and aggregate the differences between adjacent colors into the set $\{Q\}$. We also compute (e) the set of differences between palette colors, yielding $\{\Delta P\}$. The widest path algorithm (d) gives us a tree linking all nodes of the adjacency graph. We match the histogram (f) of $\{Q\}$ to that of $\{\Delta P\}$ and (g) assign colors to all regions by traversing the widest-path tree, resulting in (h) the fully recolored image.
Before we explain the recoloring proper, we introduce the widest path problem. We will employ the widest path algorithm to create a tree over the input oversegmentation. The tree structure ensures that regions connected by the largest edge weights are processed earlier, helping to preserve edges and contrast in the recolored image. We will traverse the tree and assign a color to each region, choosing a color whose difference from the parent's color matches the edge's target color difference. In practice, it is convenient to combine tree creation and traversal, since the widest-path algorithm involves a best-first traversal of the tree as it is being built. Prior to color assignment, we apply histogram matching to align the regions' color differences with the palette's, for better use of the palette colors.
§ 3.1 TREE CREATION: THE WIDEST PATH PROBLEM
Pollack [17] introduced the widest path problem. Consider a weighted graph $G = (V, E)$ consisting of nodes and edges, where an edge $(u, v) \in E$ connects node $u$ to $v$. Let $w(u, v)$ be the weight, called capacity, of edge $(u, v) \in E$; the capacity represents the maximum flow that can pass from $u$ to $v$ through that edge. The minimum weight among traversed edges defines the capacity of a path. Formally, the capacity $C(u, v)$ of a path between nodes $u$ and $v$ is given by the following:
$$
C(u,v) = \min \left( w(u,a), w(a,b), \ldots, w(d,v) \right) \tag{1}
$$
where $w\left( {u,a}\right) ,w\left( {a,b}\right) ,\ldots ,w\left( {d,v}\right)$ are the edge weights along the path. The widest path between $u$ and $v$ is the path with the maximum capacity among all possible paths.
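As a concrete illustration (not the paper's implementation), the path capacity of Equation 1 and the selection of the widest path can be sketched in a few lines; the dictionary-of-edges encoding here is our own assumption:

```python
def path_capacity(path, w):
    """Capacity of a path (Eq. 1): the minimum weight among its edges.
    `path` is a node sequence; `w` maps (u, v) edge tuples to weights."""
    return min(w[(u, v)] for u, v in zip(path, path[1:]))

def widest_of(paths, w):
    """The widest path is the candidate with maximum capacity."""
    return max(paths, key=lambda p: path_capacity(p, w))
```

For example, given two candidate paths from `u` to `v`, the one whose weakest edge is strongest wins, regardless of the other edge weights along it.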
In a single-source widest path problem, we calculate for each node $t \in V$ a value $B\left( t\right)$ , the maximum path capacity among all the paths from source $s$ to $t$ . The value $B\left( t\right)$ is the width of the node. The union of widest paths from the source to each node is a tree, which we use to order the color assignment process. We can choose any node as the source; our implementation uses the region containing the image centre.
The widest path algorithm can be implemented as a variant of Dijkstra's algorithm, building a tree outward from the source node $s$ to every node in the graph. All nodes $t \in V$ are given a tentative width value: the source node $s$ is assigned $B(s) = +\infty$ and all other nodes $v \neq s$ have $B(v) = -\infty$. A priority queue holds the nodes; at each step of the algorithm, we take the node with the highest current width from the queue and process it, stopping when the queue is empty. Suppose the node $u$ is at the top of the queue with width $B(u)$; for every outgoing edge $(u, v)$, we update the value of the neighbour node $v$ as follows:
$$
B(v) \leftarrow \max \{ B(v), \min \{ B(u), w(u,v) \} \} \tag{2}
$$
where $w(u,v)$ is the edge weight between nodes $u$ and $v$. If the value $B(v)$ was changed, node $u$ is set as the parent node and $v$ is pushed into the queue. When the algorithm completes, all non-root nodes in the graph will have been assigned a single parent, thus providing a tree rooted at $s$.
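A minimal sketch of this Dijkstra-style traversal, assuming an adjacency-list encoding of the region graph (the data layout and function name are our own, not the paper's):

```python
import heapq

def widest_path_tree(adj, source):
    """Single-source widest paths via a Dijkstra variant (Eq. 2).
    adj: {node: [(neighbor, weight), ...]}.
    Returns (B, parent): node widths and the widest-path tree."""
    B = {v: float('-inf') for v in adj}
    B[source] = float('inf')
    parent = {source: None}
    pq = [(-B[source], source)]          # max-heap via negated widths
    while pq:
        width, u = heapq.heappop(pq)
        width = -width
        if width < B[u]:                 # stale queue entry; skip it
            continue
        for v, w in adj[u]:
            cand = min(B[u], w)          # Eq. 2: min(B(u), w(u, v))
            if cand > B[v]:              # Eq. 2: max with current B(v)
                B[v] = cand
                parent[v] = u
                heapq.heappush(pq, (-cand, v))
    return B, parent
```

On a triangle graph where the direct edge to a node is weaker than a two-hop route, the tree links the node through the stronger route, as the text describes.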
In our application, one possibility for edge weight is to use the difference in color values between the two regions. This would ensure that the widest path tree linked dissimilar regions, resulting in good edge preservation. However, regions of similar color could easily be divided. We want to preserve small color distances as well, so small color differences should yield a large edge weight. Distances intermediate between large and small are of the least importance. Hence, we base our edge weight on the difference from the median color distance, as follows.
We calculate the color distances across each edge in the adjacency graph; call the set of color distances $Q$ , with
$$
Q = \left\{ \Delta(c_i, c_j) \right\} = \left\{ q_{ij} \right\}, \; i \neq j, \; r_i, r_j \in R
$$
where ${c}_{i}$ and ${c}_{j}$ are the colors of regions ${r}_{i}$ and ${r}_{j}$ and $\Delta$ is the function computing the color distance. Compute the median value $\bar{q}$ from the distances in $Q$ .
We also want to take into account the size of the region, such that larger regions have greater importance; we prefer that a larger region have higher priority and thus influence the smaller regions that are processed afterwards, compared to the converse. Depending on the oversegmentation, action may not be necessary; we suggest a process to improve results on oversegmentations with a dramatic variation in region size.
We compute for each region a factor $b$ , the ratio of the region’s size (in pixels) to the average region size. Then, when we traverse an edge, we use the $b$ of the destination region to determine the weight. In our implementation, we compute and store a single edge weight; there is no ambiguity about the factor $b$ because we only ever traverse a given edge in one direction, moving outward from the source node.
Figure 3: Recoloring algorithm pipeline.
To summarize: when traversing an edge, the edge weight is the distance between its color difference and the median color difference, multiplied by the factor $(1 + b)$ for the destination region:
$$
w(r_i, r_j) = (1 + b) \left| \Delta(c_i, c_j) - \bar{q} \right| . \tag{3}
$$
The factor $(1 + b)$ takes size into account, but ensures the region's color differences can still affect the traversal order even for very small regions ($b$ near zero). Note that the function $\Delta$ depends on the colorspace used. A simple possibility is Euclidean distance in RGB, but more perceptually based color distances are possible. We discuss color distance metrics in Section 5.2.
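The computation of the set $Q$, the median $\bar{q}$, and the weights of Equation 3 can be sketched as follows; the edge representation, the `delta` callback, and all names are illustrative assumptions, not the authors' code:

```python
import statistics

def edge_weights(edges, color, area, delta):
    """Edge weights of Eq. 3: w(ri, rj) = (1 + b) * |delta(ci, cj) - q_bar|.
    edges: directed (source, destination) region-id pairs;
    color/area: per-region color and pixel count;
    delta: color-distance function; b is destination size over average size."""
    Q = [delta(color[i], color[j]) for i, j in edges]
    q_bar = statistics.median(Q)                 # median color distance
    avg_area = statistics.mean(area.values())
    w = {}
    for (i, j), q in zip(edges, Q):
        b = area[j] / avg_area                   # j is the destination region
        w[(i, j)] = (1 + b) * abs(q - q_bar)
    return w
```

Edges whose color distance sits at the median get weight near zero (lowest priority), while both very similar and very different neighbors get large weights, matching the priority scheme described above.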
§ 3.2 HISTOGRAM MATCHING
We plan to match color differences in the output to the color differences in the input. However, the input palette can have an arbitrary set of colors, and we also want to make use of the full palette. For example, imagine a low-contrast image recolored with a palette of more varied colors. The smallest palette difference might be quite large; if so, the muted areas of the original will be matched with difference zero, resulting in loss of detail in such regions. A narrow palette recoloring a high-contrast image will have similar problems in the opposite direction.
To adapt the palette usage to the input image color distribution, we apply histogram matching to color differences. We emphasize that we are not matching the colors themselves, but the distributions of differences. Histogram matching is applied between the region color differences (the distribution of values in $Q$ , computed in Section 3.1) and the pairwise color differences of the colors in the palette (call this dataset ${\Delta P}$ ).
The histogram matching computes a new target color difference for each graph edge; call this target ${q}^{\prime }\left( {u,v}\right)$ for the edge linking regions $u$ and $v$ . The matching ensures that the distribution of values $\left\{ {q}^{\prime }\right\}$ is the same as the distribution of values in $\{ {\Delta P}\}$ . The values ${q}^{\prime }$ are then used for color assignment, selecting color pairs from the palette which correspond to the same place in the distribution: medium palette differences where medium image color differences existed, small differences where the original image color differences were small, with the largest palette differences reserved for the largest differences in the original image.
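One simple way to realize this matching is by quantiles: each region color difference is replaced by the palette difference at the same rank in its own distribution. This numpy sketch is our own reading of the step, not the authors' code:

```python
import numpy as np

def match_differences(q_values, palette_diffs):
    """Map each region color difference {Q} onto the palette-difference
    distribution {delta-P} by quantile: classical histogram matching,
    applied to differences rather than to colors themselves."""
    q = np.asarray(q_values, dtype=float)
    p = np.sort(np.asarray(palette_diffs, dtype=float))
    # rank of each q within {Q}, normalized to [0, 1]
    ranks = q.argsort().argsort()
    quantiles = ranks / max(len(q) - 1, 1)
    # read the target value q' off the sorted palette differences
    return np.interp(quantiles * (len(p) - 1), np.arange(len(p)), p)
```

The smallest region difference maps to the smallest palette difference and the largest to the largest, so "small" and "large" are recalibrated to whatever palette is supplied.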
Figure 4 shows the input image with average region colors and the recolored images before and after histogram matching. The bar graphs under each image show the proportion of each color in the image. We can see that the histogram matching used the palette colors more evenly, increasing contrast and highlighting more details. For example, strong edges on the leaf boundary became distinguishable from the nearby regions, and the markings on the lizard became more prominent.
Figure 4: Histogram matching result. Left to right: original image, result without histogram matching, and result using histogram matching.
§ 3.3 COLOR ASSIGNMENT
The widest path algorithm provided a tree, and the histogram matching provided palette-customized target distances for the edges. We now traverse the tree and assign a color to each region along the way. We begin by assigning the closest palette color to the tree's root node; recall that the root is the most central region in the image. At each subsequent step, we assign a color from the palette $P$ to the current region $\alpha$ based on the palette color $p_{\beta}$ previously assigned to the parent region $\beta$ and the target color difference $q^{\prime}(\alpha, \beta)$. We also consider the luminance difference between regions $r_{\alpha}$ and $r_{\beta}$ so as to help maintain larger-scale intensity gradients. Recall our intention in preserving color differences: two regions with large color differences should be assigned two very different colors, and regions with a small color difference should get very similar colors, possibly the same color. Owing to histogram matching, "large" and "small" are calibrated to the content of the particular image and palette being combined.
We impose a luminance constraint on potential palette colors, in an effort to respect the relative ordering of the regions' luminances. Suppose the luminances of two regions ${r}_{\alpha }$ and ${r}_{\beta }$ are ${L}_{\alpha }$ and ${L}_{\beta }$ , where ${L}_{\alpha } < {L}_{\beta }$ . We then constrain the set of eligible palette colors for region $\alpha$ such that only colors ${p}_{\alpha }$ that satisfy ${L}_{{p}_{\alpha }} < {L}_{{p}_{\beta }}$ are considered. A similar constraint is imposed if ${L}_{\alpha } > {L}_{\beta }$ .
For region ${r}_{\alpha }$ and its parent ${r}_{\beta }$ , we have the target edge difference ${q}^{\prime }\left( {\alpha ,\beta }\right)$ . Denote by ${p}_{\beta }$ the palette color already assigned to the parent region ${r}_{\beta }$ . We choose the palette color ${p}_{\alpha }$ for region ${r}_{\alpha }$ so as to minimize the distance $D$ :
$$
D = \left| q^{\prime}(\alpha, \beta) - \Delta(p_{\alpha}, p_{\beta}) \right| \tag{4}
$$
where $\Delta$ is the distance metric between two colors. The only colors considered for ${p}_{\alpha }$ are those that satisfy the luminance constraint.
If the parent's color has the lowest or highest luminance in the palette, there may be no palette colors satisfying the luminance constraint. In such cases, the constraint is ignored and all palette colors are considered.
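The per-region choice, with the luminance constraint and its fallback, can be sketched as follows. Palette colors are scalars here for brevity, and `lum` is a luminance lookup we assume exists; Equation 4 is the minimized quantity:

```python
def pick_color(palette, lum, p_parent, L_alpha, L_beta, q_target, delta):
    """Choose p_alpha minimizing |q' - delta(p_alpha, p_beta)| (Eq. 4),
    among palette colors respecting the regions' luminance ordering.
    Falls back to the whole palette if no color satisfies the constraint."""
    if L_alpha < L_beta:
        candidates = [p for p in palette if lum[p] < lum[p_parent]]
    elif L_alpha > L_beta:
        candidates = [p for p in palette if lum[p] > lum[p_parent]]
    else:
        candidates = list(palette)
    if not candidates:            # parent is lightest/darkest: relax constraint
        candidates = list(palette)
    return min(candidates, key=lambda p: abs(q_target - delta(p, p_parent)))
```

The luminance filter runs first, so the difference-matching of Equation 4 only ever sees colors that preserve the regions' relative brightness ordering.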
Since the source region has no parent, the above process cannot be used to find its color. Instead, we assign the closest color from the palette, as determined by the difference metric $\Delta$. The source region is the region containing the centre of the image; while the output is weakly dependent on the choice of starting region, we do not view the starting region as a critical decision. Figure 5 shows some examples of varied outcomes from moving the starting region.
Figure 5: Changing the source region locations. The starting region is indicated by a dot.
§ 4 RESULTS AND DISCUSSION
In this section, we present some results. Figure 6 shows a variety of recolored images generated by our algorithm using various palettes. We succeeded in maintaining strong edges, and objects in the recolored abstractions remain recognizable. Our algorithm retains textures and produces vivid recolored images by selecting varied colors from the palette. It assigns the same colors over flat regions and distinct colors to illustrate structures. We ran our algorithm on a variety of images with different textures and contrasts. We obtained most of our palettes from the website COLRD (http://colrd.com/); others we created manually by sampling from colorful images.
In Figure 6, we present a set of examples from our recoloring algorithm, which were generated with four different palettes. From left to right, the columns show the original image, followed by recolored images using different palettes. We chose images presenting different features. The delicate features and textures in the abstractions stay visible after recoloring despite the input photographs having been radically altered by the recoloring.
In the starfish image, the structure and the patterns on the arms become visible. The uniform colors of the background turn into a vivid splash of colors, emphasizing the texture of the terrain. The algorithm has chosen the darkest colors for the shadows and the lightest ones for the surface of the creature.
The Venice canal is a crowded image composed of soft textures and structures with hard edges. The algorithm is able to preserve recognizable objects such as the boats and windows. Even tiny letters on the wall and pedestrians on the canal's side are visible. The recoloring process preserved the buildings' rigid structures; meanwhile, it captured shadows and the water's soft movements. In capturing such features, adopting a highly irregular oversegmentation was necessary.
The lizard image is an example of a low contrast image with textured areas covered by dull colors. The algorithm highlighted the textures by assigning wild colors to the homogeneous regions on the leaf. At the same time, the substantial edges like the lizard's body patterns and the leaf edges are preserved naturally by our algorithm.
In the next example, we used a high contrast image as input. The algorithm assigned the darkest colors from each palette to the coat of the man and separated it from the background using a very light color. Further, the small features on the face and the Chinese characters are mostly readable.
The rust image contains different types of texture on the wall and the grass, plus soft textureless areas on the machinery. The brick patterns on the wall, exaggerated by the colorful palettes, make the final images more interesting than the original flat image. The high-frequency details of the grass are retained. The smooth transition of colors at the top right of the image illustrates the shadows of the leaves.
We demonstrated strong edge preservation in all examples. Additionally, the image textures are preserved, and the palette colors are used uniformly to maintain good contrast.
§ 4.1 COMPARISON WITH NAÏVE METHODS
Figure 7 gives a comparison between our method and two naïve alternatives. The first column shows the input image. The second column shows the input segments recolored by replacing each segment's color with the palette color closest to the segment's average color. The third column shows a random assignment of palette colors to segments. The final column shows our method. Recoloring with the closest palette colors preserves some image content, but the result shows large regions of constant color; many of the palette colors are underused, an issue that worsens when there is a significant mismatch between the original image color distribution and the palette, as in the upper example. Random assignment provides an even distribution of palette colors, but the image content can become unrecognizable for highly textured images, as in the lower example. Our method uses the palette more effectively, showing local details and large-scale content and exercising the full range of available colors.
§ 4.2 COMPARISON WITH COLORART
In this section, we compare our recoloring method with ColorArt, an optimization-based recoloring method for graphic arts [3]. This method assigns colors to regions by solving a graph matching problem over color groups in the reference and the template image. In searching for a reference image, this algorithm uses the same number of color groups as in the template image.
Figure 8 shows the images generated by the ColorArt method on the right and ours in the middle, both using the sunset palette. We created a colorful leaf surrounded by a light background, as in the input image, showing that the algorithm respects the changes in lightness. Moreover, the assignment of different colors on the leaf produces an interesting texture. The leaf image generated by the ColorArt method has reversed the image tones. In the sketch image, we preserved the edges and showed a recognizable face. In contrast, the ColorArt algorithm had difficulty with the edges and the gradual gradients, resulting in a somewhat incoherent output.
§ 4.3 RECOLORING WITH SLIC0 OVERSEGMENTATION
Our recoloring algorithm does not make any assumptions about the input oversegmentation. Figure 9 shows results from an oversegmentation produced by SLIC0 [1]. The starfish and owl images have approximately 2000 and 5000 segments, respectively.
Note that more irregular regions can better represent complex image contours and textures, allowing the recolored abstractions to better display the image content. In the starfish image, the structures and shadows are represented by distinct colors that provide contrast between the object and the background. The strong edges, such as the arms of the starfish, are preserved; however, the thin features are not captured by SLIC0's uniform regions, and the background terrain does not present any significant information. In the owl image, the small regions on the chest convey the feather textures, while well-defined regions such as the dark eyes keep their structure. Given a suitably detailed oversegmentation, we can produce appealing results.
Figure 6: Region recoloring results. Top row: visualization of palettes used. Images, top to bottom: starfish, Venice, lizard, lanterns, and rust.
Figure 7: A comparison with naïve recolorings. Left to right: The original image, the results from naïve method, random recoloring, and ours.
Figure 9: Recoloring with SLIC0 oversegmentation.
§ 4.4 PERFORMANCE
We ran our algorithm on an Intel(R) Core(TM) i7-6700 with a 3.4 GHz CPU and 16.0 GB of RAM. The processing time increases with the number of regions and edges in the graph. The time complexity of single-source widest path is $\mathcal{O}(m + n \log n)$ for $m$ edges and $n$ vertices, using a heap-based priority queue in a Dijkstra search.
Table 1 shows the timing for tree creation and color assignment for different images. For small images like the starfish, containing about 1.4K regions, the recoloring algorithm takes about 0.007 seconds to construct the tree, while it takes about 0.3 s for larger images such as rust, with 7.2K regions. With a palette of 10 colors, the color assignment takes 0.05 s and 1.2 s to recolor the starfish and rust images, respectively. The color assignment takes longer for larger palettes and for images with a larger number of regions. We show the timing of tree creation and color assignment for all images in the gallery. The pre-processing steps of creating the graph adjacency list and histogram matching are also presented in the table.
§ 4.5 LIMITATIONS
Although in our experience our method works well for most combinations of image and palette, there are cases where the output is unappealing. When two similar regions are not neighbours, they may receive different colors; e.g., a sky area may be broken up by branches, and different parts of the sky could be colored differently. Even adjacent regions may not receive similar colors if their average colors differ, introducing spurious edges into regions with slowly changing colors such as gradients or smooth surfaces. Out-of-focus backgrounds and faces are common examples producing such effects.
Figure 8: Comparison with ColorArt. Left: input; middle: our results; right: ColorArt results.
Figure 10 shows two failure examples. In the top middle image, the face and background regions received very different colors, with an unappealing outcome. However, a palette containing several similar colors allows the algorithm to better match the image gradients, with dramatically better results. In the bottom images, large regions of different colors appear in the sky, which does not look attractive. Because the original regions have different average colors, our algorithm is likely to separate them regardless of the palette.
Our algorithm is at present strictly automated with no provision for direct user control beyond choice of palette and parameter settings. While these parameters provide considerable scope for generating variant recolorings, so that a user would have a wide range of results to choose from, direct control is not yet implemented. One might imagine annotating the image to enforce specific color selections or linking regions to ensure that their output colors are always the same. While it would be straightforward to add some control of this type, we have not yet implemented such features.
Figure 10: Failure cases.
§ 5 VARIATIONS
In this section, we explore some variations of our region recoloring algorithm. We apply a blending mechanism to create recolored images with smooth color transitions. Then, we demonstrate that different color spaces and distance metrics can generate various results from one palette.
| Image | Tree creation | Color assignment | Graph | Total | #Vertices |
|---|---|---|---|---|---|
| lanterns | 0.003s | 0.02s | 0.064s | 0.087s | 1K |
| starfish | 0.007s | 0.05s | 0.07s | 0.127s | 1.4K |
| lizard | 0.015s | 0.08s | 0.1s | 0.195s | 1.9K |
| rust | 0.3s | 1.2s | 0.9s | 2.4s | 7.2K |
| Venice | 0.3s | 1.4s | 0.9s | 2.6s | 7.7K |
Table 1: Timing results for images with varying numbers of regions.
§ 5.1 BLENDING
In transferring the colors, we have intended to strictly preserve the palette and not add colors. However, we can also blend the colors, giving a more painterly style and adding a sense of depth to a flattened abstraction. Postprocessing the recolored image introduces new intermediate shades of color. We suggest cross-filtering the recolored image with an edge-preserving filter [14]. This process smooths the areas away from edges while retaining the colors close to strong edges. In addition, blending across the jagged region boundaries smooths regions which originally possessed similar colors.
The cross-filtering mask size affects the outcome. Larger masks produce a stronger blending effect; small features are smoothed out, and the output image becomes blurry in regions lacking edges. Figure 11 illustrates examples of blending using masks of sizes $n = 20$, $100$, and $300$. Blending with $n = 20$ only slightly modifies the image; for larger masks, the blending is more apparent. At $n = 300$ we can see definite blurring in originally smooth areas, although blurring does not happen across original edges. Using a gray palette, we can obtain an effect resembling a charcoal drawing with larger masks.
§ 5.2 COLOR SPACES AND DISTANCE METRICS
We can employ different functions for our color distance function $\Delta$ . Different choices of color space and distance metric can affect the colorization results. Changing the distance metric will cause both the widest-path tree and the color assignment to change.
We have experimented with computing color distances using the Euclidean distance in RGB as well as the perceptually uniform CIE94, CIEDE2000, CIE76, and CMC colorimetric distances [21, 27].
Both CIE94 and CIEDE2000 are defined in the Lch color space; however, CIE94 computes its lightness, chroma, and hue differences from Lab coordinates. CMC is a quasimetric based on the Lch color model. The CIE76 metric uses Euclidean distance in Lab space.
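For reference, CIE76 is simply the Euclidean distance between Lab coordinates, and the RGB metric is the same formula applied in RGB space; a minimal sketch (conversion from RGB to Lab is assumed to happen elsewhere):

```python
import math

def delta_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between CIELAB triples."""
    return math.dist(lab1, lab2)
```

The remaining metrics (CIE94, CIEDE2000, CMC) add per-term weightings for lightness, chroma, and hue on top of this basic distance.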
In Figure 12, we show the outcomes of the different metrics using two palettes. We can observe strong edge and contrast preservation, an apparent result of the perceptually uniform metrics. More importantly, each metric gives a unique recolorization of the abstraction, which allows a user to choose among the output images. Each metric can produce interesting results. However, in our judgement, the most attractive results come from the Euclidean and CMC metrics; delicate features and image contrast are maintained, and objects are generally preserved. The CIE94 and CIEDE2000 metrics are also effective, but we found that the CIE76 metric rarely creates interesting results.
§ 6 CONCLUSIONS AND FUTURE WORK
In this paper, we presented a graph-based recoloring method that takes an oversegmented image and a palette as input, and then assigns colors to each region. The result uses the palette colors to portray the image content, but without attempting to match the input colors. Designing our algorithm around widest paths allowed us to maintain image contrast and the recognizability of objects. We demonstrated our results with different palettes, achieving vivid recolorization effects for most combinations of input images and palettes.
In the future, we would like to investigate non-convex palette color augmentation: adding new colors that extend an input palette while matching the palette's theme. We would also like to extend the color assignment to consider color harmony, scoring assignments based on the compatibility of colors and thus affecting the ability of certain colors to be neighbors. Furthermore, we would like to recolor smoothly changing regions, such as the sky, more uniformly. Adding elements of user control would allow for better cooperation between the present automated method and the user's intent.
§ ACKNOWLEDGMENTS
The authors thank ...